<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

[[open-in-colab]]

# Load LoRAs for inference

There are many adapter types (with [LoRAs](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) being the most popular) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images.

In this tutorial, you'll learn how to easily load and manage adapters for inference with the 🤗 [PEFT](https://huggingface.co/docs/peft/index) integration in 🤗 Diffusers. You'll use LoRA as the main adapter technique, so you'll see the terms LoRA and adapter used interchangeably.

Let's first install all the required libraries.

```bash
!pip install -q transformers accelerate peft diffusers
```

Now, load a pipeline with a [Stable Diffusion XL (SDXL)](../api/pipelines/stable_diffusion/stable_diffusion_xl) checkpoint:

```python
from diffusers import DiffusionPipeline
import torch

pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda")
```

Next, load a [CiroN2022/toy-face](https://huggingface.co/CiroN2022/toy-face) adapter with the [`~diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights`] method. With the 🤗 PEFT integration, you can assign a specific `adapter_name` to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let's call this adapter `"toy"`.

```python
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
```

Make sure to include the token `toy_face` in the prompt and then you can perform inference:

```python
prompt = "toy_face of a hacker with a hoodie"

lora_scale = 0.9
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image
```

![toy-face](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_8_1.png)

With the `adapter_name` parameter, it is really easy to use another adapter for inference! Load the [nerijs/pixel-art-xl](https://huggingface.co/nerijs/pixel-art-xl) adapter that has been fine-tuned to generate pixel art images and call it `"pixel"`.
The pipeline automatically sets the first loaded adapter (`"toy"`) as the active adapter, but you can activate the `"pixel"` adapter with the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method: ```python pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipe.set_adapters("pixel") ``` Make sure you include the token `pixel art` in your prompt to generate a pixel art image: ```python prompt = "a hacker with a hoodie, pixel art" image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` ![pixel-art](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_12_1.png) ## Merge adapters You can also merge different adapter checkpoints for inference to blend their styles together. Once again, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method to activate the `pixel` and `toy` adapters and specify the weights for how they should be merged. ```python pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) ``` <Tip> LoRA checkpoints in the diffusion community are almost always obtained with [DreamBooth](https://huggingface.co/docs/diffusers/main/en/training/dreambooth). DreamBooth training often relies on "trigger" words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it's important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. </Tip> Remember to use the trigger words for [CiroN2022/toy-face](https://hf.co/CiroN2022/toy-face) and [nerijs/pixel-art-xl](https://hf.co/nerijs/pixel-art-xl) (these are found in their repositories) in the prompt to generate an image. ```python prompt = "toy_face of a hacker with a hoodie, pixel art" image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) ).images[0] image ``` ![toy-face-pixel-art](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_16_1.png) Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters. > [!TIP] > Through its PEFT integration, Diffusers also offers more efficient merging methods which you can learn about in the [Merge LoRAs](../using-diffusers/merge_loras) guide! To return to only using one adapter, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method to activate the `"toy"` adapter: ```python pipe.set_adapters("toy") prompt = "toy_face of a hacker with a hoodie" lora_scale = 0.9 image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` Or to disable all adapters entirely, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.disable_lora`] method to return the base model. ```python pipe.disable_lora() prompt = "toy_face of a hacker with a hoodie" image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![no-lora](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_20_1.png) ### Customize adapters strength For even more customization, you can control how strongly the adapter affects each part of the pipeline. 
For this, pass a dictionary with the control strengths (called "scales") to [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`]. For example, here's how you can turn on the adapter for the `down` parts, but turn it off for the `mid` and `up` parts: ```python pipe.enable_lora() # enable lora again, after we disabled it above prompt = "toy_face of a hacker with a hoodie, pixel art" adapter_weight_scales = { "unet": { "down": 1, "mid": 0, "up": 0} } pipe.set_adapters("pixel", adapter_weight_scales) image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![block-lora-text-and-down](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_down.png) Let's see how turning off the `down` part and turning on the `mid` and `up` part respectively changes the image. ```python adapter_weight_scales = { "unet": { "down": 0, "mid": 1, "up": 0} } pipe.set_adapters("pixel", adapter_weight_scales) image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![block-lora-text-and-mid](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_mid.png) ```python adapter_weight_scales = { "unet": { "down": 0, "mid": 0, "up": 1} } pipe.set_adapters("pixel", adapter_weight_scales) image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![block-lora-text-and-up](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_up.png) Looks cool! This is a really powerful feature. You can use it to control the adapter strengths down to per-transformer level. And you can even use it for multiple adapters. ```python adapter_weight_scales_toy = 0.5 adapter_weight_scales_pixel = { "unet": { "down": 0.9, # all transformers in the down-part will use scale 0.9 # "mid" # because, in this example, "mid" is not given, all transformers in the mid part will use the default scale 1.0 "up": { "block_0": 0.6, # all 3 transformers in the 0th block in the up-part will use scale 0.6 "block_1": [0.4, 0.8, 1.0], # the 3 transformers in the 1st block in the up-part will use scales 0.4, 0.8 and 1.0 respectively } } } pipe.set_adapters(["toy", "pixel"], [adapter_weight_scales_toy, adapter_weight_scales_pixel]) image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![block-lora-mixed](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_mixed.png) ## Manage active adapters You have attached multiple adapters in this tutorial, and if you're feeling a bit lost on what adapters have been attached to the pipeline's components, use the [`~diffusers.loaders.LoraLoaderMixin.get_active_adapters`] method to check the list of active adapters: ```py active_adapters = pipe.get_active_adapters() active_adapters ["toy", "pixel"] ``` You can also get the active adapters of each pipeline component with [`~diffusers.loaders.LoraLoaderMixin.get_list_adapters`]: ```py list_adapters_component_wise = pipe.get_list_adapters() list_adapters_component_wise {"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} ```
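When you're completely done experimenting, you can also detach the LoRAs from the pipeline instead of just deactivating them. The following is a minimal sketch, not part of the original tutorial: `unload_lora_weights` is documented elsewhere in the docs, while `delete_adapters` is assumed to be available in recent Diffusers releases with the PEFT integration.

```python
# Assumed API in recent releases: remove specific adapters by name.
pipe.delete_adapters(["toy", "pixel"])

# Or drop every LoRA weight and restore the original base model entirely.
pipe.unload_lora_weights()
```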
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Load adapters [[open-in-colab]] There are several [training](../training/overview) techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. <Tip> Feel free to browse the [Stable Diffusion Conceptualizer](https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer), [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer), and the [Diffusers Models Gallery](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery) for checkpoints and embeddings to use. </Tip> ## DreamBooth [DreamBooth](https://dreambooth.github.io/) finetunes an *entire diffusion model* on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let's load the [herge_style](https://huggingface.co/sd-dreambooth-library/herge-style) checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word `herge_style` in your prompt to trigger the checkpoint: ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" image = pipeline(prompt).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_dreambooth.png" /> </div> ## Textual inversion [Textual inversion](https://textual-inversion.github.io/) is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. 
```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") ``` Now you can load the textual inversion embeddings with the [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] method and generate some images. Let's load the [sd-concepts-library/gta5-artwork](https://huggingface.co/sd-concepts-library/gta5-artwork) embeddings and you'll need to include the special word `<gta5-artwork>` in your prompt to trigger it: ```py pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, <gta5-artwork> style" image = pipeline(prompt).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_txt_embed.png" /> </div> Textual inversion can also be trained on undesirable things to create *negative embeddings* to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You'll also load the embeddings with [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`], but this time, you'll need two more parameters: - `weight_name`: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format - `token`: specifies the special word to use in the prompt to trigger the embeddings Let's load the [sayakpaul/EasyNegative-test](https://huggingface.co/sayakpaul/EasyNegative-test) embeddings: ```py pipeline.load_textual_inversion( "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" ) ``` Now you can use the `token` to generate an image with the negative embeddings: ```py prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" negative_prompt = "EasyNegative" image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png" /> </div> ## LoRA [Low-Rank Adaptation (LoRA)](https://huggingface.co/papers/2106.09685) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. <Tip> LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. It is also increasingly common to load and merge multiple LoRAs to create new and unique images. You can learn more about it in the in-depth [Merge LoRAs](merge_loras) guide since merging is outside the scope of this loading guide. 
</Tip> LoRAs also need to be used with another model: ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") ``` Then use the [`~loaders.LoraLoaderMixin.load_lora_weights`] method to load the [ostris/super-cereal-sdxl-lora](https://huggingface.co/ostris/super-cereal-sdxl-lora) weights and specify the weights filename from the repository: ```py pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") prompt = "bears, pizza bites" image = pipeline(prompt).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_lora.png" /> </div> The [`~loaders.LoraLoaderMixin.load_lora_weights`] method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: - the LoRA weights don't have separate identifiers for the UNet and text encoder - the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the [`~loaders.UNet2DConditionLoadersMixin.load_attn_procs`] method. Let's load the [jbilcke-hf/sdxl-cinematic-1](https://huggingface.co/jbilcke-hf/sdxl-cinematic-1) LoRA: ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") # use cnmt in the prompt to trigger the LoRA prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" image = pipeline(prompt).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_attn_proc.png" /> </div> To unload the LoRA weights, use the [`~loaders.LoraLoaderMixin.unload_lora_weights`] method to discard the LoRA weights and restore the model to its original weights: ```py pipeline.unload_lora_weights() ``` ### Adjust LoRA weight scale For both [`~loaders.LoraLoaderMixin.load_lora_weights`] and [`~loaders.UNet2DConditionLoadersMixin.load_attn_procs`], you can pass the `cross_attention_kwargs={"scale": 0.5}` parameter to adjust how much of the LoRA weights to use. A value of `0` is the same as only using the base model weights, and a value of `1` is equivalent to using the fully finetuned LoRA. For more granular control on the amount of LoRA weights used per layer, you can use [`~loaders.LoraLoaderMixin.set_adapters`] and pass a dictionary specifying how much to scale the weights in each layer. ```python pipe = ... 
# create pipeline pipe.load_lora_weights(..., adapter_name="my_adapter") scales = { "text_encoder": 0.5, "text_encoder_2": 0.5, # only usable if pipe has a 2nd text encoder "unet": { "down": 0.9, # all transformers in the down-part will use scale 0.9 # "mid" # in this example "mid" is not given, therefore all transformers in the mid part will use the default scale 1.0 "up": { "block_0": 0.6, # all 3 transformers in the 0th block in the up-part will use scale 0.6 "block_1": [0.4, 0.8, 1.0], # the 3 transformers in the 1st block in the up-part will use scales 0.4, 0.8 and 1.0 respectively } } } pipe.set_adapters("my_adapter", scales) ``` This also works with multiple adapters - see [this guide](https://huggingface.co/docs/diffusers/tutorials/using_peft_for_inference#customize-adapters-strength) for how to do it. <Tip warning={true}> Currently, [`~loaders.LoraLoaderMixin.set_adapters`] only supports scaling attention weights. If a LoRA has other parts (e.g., resnets or down-/upsamplers), they will keep a scale of 1.0. </Tip> ### Kohya and TheLastBen Other popular LoRA trainers from the community include those by [Kohya](https://github.com/kohya-ss/sd-scripts/) and [TheLastBen](https://github.com/TheLastBen/fast-stable-diffusion). These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. <hfoptions id="other-trainers"> <hfoption id="Kohya"> To load a Kohya LoRA, let's download the [Blueprintify SD XL 1.0](https://civitai.com/models/150986/blueprintify-sd-xl-10) checkpoint from [Civitai](https://civitai.com/) as an example: ```sh !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors ``` Load the LoRA checkpoint with the [`~loaders.LoraLoaderMixin.load_lora_weights`] method, and specify the filename in the `weight_name` parameter: ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") ``` Generate an image: ```py # use bl3uprint in the prompt to trigger the LoRA prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" image = pipeline(prompt).images[0] image ``` <Tip warning={true}> Some limitations of using Kohya LoRAs with 🤗 Diffusers include: - Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained [here](https://github.com/huggingface/diffusers/pull/4287/#issuecomment-1655110736). - [LyCORIS checkpoints](https://github.com/KohakuBlueleaf/LyCORIS) aren't fully supported. The [`~loaders.LoraLoaderMixin.load_lora_weights`] method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. </Tip> </hfoption> <hfoption id="TheLastBen"> Loading a checkpoint from TheLastBen is very similar. 
For example, to load the [TheLastBen/William_Eggleston_Style_SDXL](https://huggingface.co/TheLastBen/William_Eggleston_Style_SDXL) checkpoint:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL", weight_name="wegg.safetensors")

# use by william eggleston in the prompt to trigger the LoRA
prompt = "a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful"
image = pipeline(prompt=prompt).images[0]
image
```

</hfoption>
</hfoptions>

## IP-Adapter

[IP-Adapter](https://ip-adapter.github.io/) is a lightweight adapter that enables image prompting for any diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs.

You can learn more about how to use IP-Adapter for different tasks and specific use cases in the [IP-Adapter](../using-diffusers/ip_adapter) guide.

> [!TIP]
> Diffusers currently only supports IP-Adapter for some of the most popular pipelines. Feel free to open a feature request if you have a cool use case and want to integrate IP-Adapter with an unsupported pipeline!
> Official IP-Adapter checkpoints are available from [h94/IP-Adapter](https://huggingface.co/h94/IP-Adapter).

To start, load a Stable Diffusion checkpoint.

```py
from diffusers import AutoPipelineForText2Image
import torch
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
```

Then load the IP-Adapter weights and add it to the pipeline with the [`~loaders.IPAdapterMixin.load_ip_adapter`] method.

```py
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
```

Once loaded, you can use the pipeline with an image and text prompt to guide the image generation process.

```py
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png")
generator = torch.Generator(device="cpu").manual_seed(33)
images = pipeline(
    prompt='best quality, high quality, wearing sunglasses',
    ip_adapter_image=image,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=50,
    generator=generator,
).images[0]
images
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip-bear.png" />
</div>

### IP-Adapter Plus

IP-Adapter relies on an image encoder to generate image features. If the IP-Adapter repository contains an `image_encoder` subfolder, the image encoder is automatically loaded and registered to the pipeline. Otherwise, you'll need to explicitly load the image encoder with a [`~transformers.CLIPVisionModelWithProjection`] model and pass it to the pipeline. This is the case for *IP-Adapter Plus* checkpoints which use the ViT-H image encoder.
```py
from transformers import CLIPVisionModelWithProjection

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.safetensors")
```

### IP-Adapter Face ID models

The IP-Adapter FaceID models are experimental IP Adapters that use image embeddings generated by `insightface` instead of CLIP image embeddings. Some of these models also use LoRA to improve ID consistency. You need to install `insightface` and all its requirements to use these models.

<Tip warning={true}>
As InsightFace pretrained models are available for non-commercial research purposes, IP-Adapter-FaceID models are released exclusively for research purposes and are not intended for commercial use.
</Tip>

```py
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter("h94/IP-Adapter-FaceID", subfolder=None, weight_name="ip-adapter-faceid_sdxl.bin", image_encoder_folder=None)
```

If you want to use one of the two IP-Adapter FaceID Plus models, you must also load the CLIP image encoder, as these models use both `insightface` and CLIP image embeddings to achieve better photorealism.

```py
from transformers import CLIPVisionModelWithProjection

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
    torch_dtype=torch.float16,
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    image_encoder=image_encoder,
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter("h94/IP-Adapter-FaceID", subfolder=None, weight_name="ip-adapter-faceid-plus_sd15.bin")
```
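Regardless of which IP-Adapter variant you load, you can usually tune how strongly the image prompt steers generation. This is a small sketch rather than part of the original guide; it assumes the pipeline exposes `set_ip_adapter_scale` (available on IP-Adapter-enabled pipelines in recent Diffusers versions), and the value `0.5` is only an illustrative starting point.

```py
# Lower values favor the text prompt, higher values favor the image prompt.
pipeline.set_ip_adapter_scale(0.5)
```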
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# Load safetensors

[[open-in-colab]]

[safetensors](https://github.com/huggingface/safetensors) is a safe and fast file format for storing and loading tensors. Typically, PyTorch model weights are saved or *pickled* into a `.bin` file with Python's [`pickle`](https://docs.python.org/3/library/pickle.html) utility. However, `pickle` is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to `pickle`, making it ideal for sharing model weights.

This guide will show you how to load `.safetensors` files, and how to convert Stable Diffusion model weights stored in other formats to `.safetensors`. Before you start, make sure you have safetensors installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install safetensors
```

If you look at the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main) repository, you'll see weights inside the `text_encoder`, `unet` and `vae` subfolders are stored in the `.safetensors` format. By default, 🤗 Diffusers automatically loads these `.safetensors` files from their subfolders if they're available in the model repository.

For more explicit control, you can optionally set `use_safetensors=True` (if `safetensors` is not installed, you'll get an error message asking you to install it):

```py
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)
```

However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single `.safetensors` file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the [`~diffusers.loaders.FromSingleFileMixin.from_single_file`] method:

```py
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
)
```

## Convert to safetensors

Not all weights on the Hub are available in the `.safetensors` format, and you may encounter weights stored as `.bin`. In this case, use the [Convert Space](https://huggingface.co/spaces/diffusers/convert) to convert the weights to `.safetensors`. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted `.safetensors` file on the Hub. This way, if there is any malicious code contained in the pickled files, they're uploaded to the Hub - which has a [security scanner](https://huggingface.co/docs/hub/security-pickle#hubs-security-scanner) to detect unsafe files and suspicious pickle imports - instead of your computer.
You can use the model with the new `.safetensors` weights by specifying the reference to the Pull Request in the `revision` parameter (you can also test it in this [Check PR](https://huggingface.co/spaces/diffusers/check_pr) Space on the Hub), for example `refs/pr/22`: ```py from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True ) ``` ## Why use safetensors? There are several reasons for using safetensors: - Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don't contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. - Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to `pickle` if you're loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You'll only notice the performance difference if the model is already loaded, and not if you're downloading the weights or loading the model for the first time. The time it takes to load the entire pipeline: ```py from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) "Loaded in safetensors 0:00:02.033658" "Loaded in PyTorch 0:00:02.663379" ``` But the actual time it takes to load 500MB of the model weights is only: ```bash safetensors: 3.4873ms PyTorch: 172.7537ms ``` - Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the [BLOOM](https://huggingface.co/bigscience/bloom) model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights.
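To make the lazy-loading point concrete, here is a small sketch using the low-level `safetensors` API to inspect a file and read a single tensor without materializing the rest. The file path is a placeholder you would replace with a real checkpoint.

```py
from safetensors import safe_open

# Only the header is parsed up front; tensors are read on demand.
with safe_open("diffusion_pytorch_model.safetensors", framework="pt", device="cpu") as f:
    keys = f.keys()
    print(f"{len(keys)} tensors in file")
    first = f.get_tensor(keys[0])  # loads just this one tensor into memory
    print(keys[0], first.shape, first.dtype)
```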
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# Memory and speed

This guide presents some techniques and ideas for optimizing 🤗 Diffusers *inference* for memory or speed. In general, we recommend [xFormers](https://github.com/facebookresearch/xformers) for memory-efficient attention, so take a look at the recommended [installation instructions](xformers).

The table below shows how the following settings affect performance and memory:

| | Latency | Speed-up |
| ---------------- | ------- | ------- |
| no optimizations | 9.50s | x1 |
| cuDNN auto-tuner | 9.37s | x1.01 |
| fp16 | 3.61s | x2.63 |
| Channels Last memory format | 3.30s | x2.88 |
| traced UNet | 3.21s | x2.96 |
| memory-efficient attention | 2.63s | x3.61 |

<em>Obtained by generating a single 512x512 image from the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM steps on an NVIDIA TITAN RTX.</em>

## Enable the cuDNN auto-tuner

[NVIDIA cuDNN](https://developer.nvidia.com/cudnn) supports many algorithms for computing convolutions. The autotuner runs a short benchmark and selects the kernel with the best performance on the given hardware for a given input size.

Because we are using **convolutional networks** (other types are currently not supported), you can enable the cuDNN autotuner before inference with the following setting:

```python
import torch

torch.backends.cudnn.benchmark = True
```

### Use tf32 instead of fp32 (on Ampere and later CUDA devices)

On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat32 (TF32) mode, which is faster but slightly less accurate. By default, PyTorch enables TF32 mode for convolutions but not for matrix multiplications. Unless your network requires full float32 precision, we recommend enabling this setting for matrix multiplications as well. It can significantly speed up computation with a usually negligible loss of numerical accuracy. You can read more about it [here](https://huggingface.co/docs/transformers/v4.18.0/en/performance#tf32). All you need to do is add the following before inference:

```python
import torch

torch.backends.cuda.matmul.allow_tf32 = True
```

## Half-precision weights

To save more GPU memory and get more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which are stored in a branch named `fp16`, and telling PyTorch to use the `float16` type at that point.

```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```

<Tip warning={true}>
We strongly discourage using [`torch.autocast`](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines, as it can lead to black images and is always slower than pure float16 precision.
</Tip>

## Sliced attention for additional memory savings

For additional memory savings, you can use a sliced version of attention that performs the computation in steps instead of all at once.

<Tip>
Attention slicing is useful even with a batch size of 1, as long as the model uses more than one attention head. If there is more than one attention head, the *QK^T* attention matrix can be computed sequentially for each head, which can save a significant amount of memory.
</Tip>

To perform the attention computation sequentially over each head, you only need to invoke [`~StableDiffusionPipeline.enable_attention_slicing`] on the pipeline before inference, like here:

```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_attention_slicing()
image = pipe(prompt).images[0]
```

There is a small performance penalty of about 10% slower inference times, but this method allows you to use Stable Diffusion in as little as 3.2 GB of VRAM!
## Sliced VAE decode for larger batches

To decode large batches of images with limited VRAM, or to enable batches with 32 images or more, you can use sliced VAE decode, which decodes the batch latents one image at a time. You'll likely want to combine this with [`~StableDiffusionPipeline.enable_attention_slicing`] or [`~StableDiffusionPipeline.enable_xformers_memory_efficient_attention`] to further minimize memory use.

To perform the VAE decode one image at a time, invoke [`~StableDiffusionPipeline.enable_vae_slicing`] on the pipeline before inference. For example:

```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_vae_slicing()
images = pipe([prompt] * 32).images
```

You may see a small performance boost in VAE decode on multi-image batches. There is no performance impact on single-image batches.

<a name="sequential_offloading"></a>

## Offloading to CPU with accelerate for memory savings

For additional memory savings, you can offload the weights to the CPU and only load them to the GPU when performing the forward pass. To perform CPU offloading, all you have to do is invoke [`~StableDiffusionPipeline.enable_sequential_cpu_offload`]:

```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_sequential_cpu_offload()
image = pipe(prompt).images[0]
```

This can reduce memory consumption to under 3GB. Note that this method works at the submodule level, not on whole models. It is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the process. The UNet component of the pipeline runs several times (as many as `num_inference_steps`); each time, the different submodules of the UNet are sequentially onloaded and then offloaded as needed, so the number of memory transfers is large.

<Tip>
Consider using <a href="#model_offloading">model offloading</a> as another optimization method. It is much faster, but the memory savings won't be as large.
</Tip>

It can also be chained with attention slicing for minimal memory consumption (< 2GB).

```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing(1)

image = pipe(prompt).images[0]
```

**Note**: When using `enable_sequential_cpu_offload()`, it is important to **not** move the pipeline to CUDA beforehand, or else the gain in memory consumption will only be minimal. See [this issue](https://github.com/huggingface/diffusers/issues/1934) for more information.

<a name="model_offloading"></a>

## Model offloading for fast inference and memory savings

[Sequential CPU offloading](#sequential_offloading), as described in the previous section, preserves a lot of memory but makes inference slower, because submodules are moved to the GPU as needed and immediately returned to the CPU when a new module runs.

Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model's constituent _modules_. This has a negligible impact on inference time (compared with moving the pipeline to `cuda`), while still providing some memory savings.

In this scenario, only one of the main components of the pipeline (typically the text encoder, unet, and vae) will be on the GPU while the others wait on the CPU. Components like the UNet, which run for multiple iterations, stay on the GPU until they are no longer needed.

This feature can be enabled by invoking `enable_model_cpu_offload()` on the pipeline, as shown below.

```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_model_cpu_offload()
image = pipe(prompt).images[0]
```

This is also compatible with attention slicing for additional memory savings.
```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_model_cpu_offload()
pipe.enable_attention_slicing(1)

image = pipe(prompt).images[0]
```

<Tip>
This feature requires `accelerate` version 0.17.0 or larger.
</Tip>

## Using Channels Last memory format

Channels Last memory format is an alternative way of ordering NCHW tensors in memory that preserves the dimension ordering. Channels Last tensors are ordered in such a way that the channels become the densest dimension (aka storing images pixel-per-pixel). Since not all operators currently support the Channels Last format, it may result in worse performance, so it's better to try it and see if it works well for your model.

For example, to set the pipeline's UNet to use the Channels Last format:

```python
print(pipe.unet.conv_out.state_dict()["weight"].stride())  # (2880, 9, 3, 1)
pipe.unet.to(memory_format=torch.channels_last)  # in-place operation
# proves that the operation worked: the stride is now (2880, 1, 960, 320), with a stride of 1 for the 2nd dimension
print(pipe.unet.conv_out.state_dict()["weight"].stride())
```

## Tracing

Tracing runs an example input tensor through the model and captures the operations that are invoked as that input passes through the model's layers, so that an executable or `ScriptFunction` is returned, which is then optimized with just-in-time compilation.

To trace the UNet model, you can use the following:

```python
import time
import torch
from diffusers import StableDiffusionPipeline
import functools

# disable torch gradient computation
torch.set_grad_enabled(False)

# set variables
n_experiments = 2
unet_runs_per_experiment = 50


# load inputs
def generate_inputs():
    sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16)
    timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999
    encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16)
    return sample, timestep, encoder_hidden_states


pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
unet = pipe.unet
unet.eval()
unet.to(memory_format=torch.channels_last)  # use Channels Last memory format
unet.forward = functools.partial(unet.forward, return_dict=False)  # set return_dict=False as default

# warmup
for _ in range(3):
    with torch.inference_mode():
        inputs = generate_inputs()
        orig_output = unet(*inputs)

# trace
print("tracing..")
unet_traced = torch.jit.trace(unet, inputs)
unet_traced.eval()
print("done tracing")

# warmup and optimize graph
for _ in range(5):
    with torch.inference_mode():
        inputs = generate_inputs()
        orig_output = unet_traced(*inputs)

# benchmarking
with torch.inference_mode():
    for _ in range(n_experiments):
        torch.cuda.synchronize()
        start_time = time.time()
        for _ in range(unet_runs_per_experiment):
            orig_output = unet_traced(*inputs)
        torch.cuda.synchronize()
        print(f"unet traced inference took {time.time() - start_time:.2f} seconds")
    for _ in range(n_experiments):
        torch.cuda.synchronize()
        start_time = time.time()
        for _ in range(unet_runs_per_experiment):
            orig_output = unet(*inputs)
        torch.cuda.synchronize()
        print(f"unet inference took {time.time() - start_time:.2f} seconds")

# save the model
unet_traced.save("unet_traced.pt")
```

Then you can replace the `unet` attribute of the pipeline with the traced model as follows.
```python
from diffusers import StableDiffusionPipeline
import torch
from dataclasses import dataclass


@dataclass
class UNet2DConditionOutput:
    sample: torch.Tensor


pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# use jitted unet
unet_traced = torch.jit.load("unet_traced.pt")


# del pipe.unet
class TracedUNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.in_channels = pipe.unet.config.in_channels
        self.device = pipe.unet.device

    def forward(self, latent_model_input, t, encoder_hidden_states):
        sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0]
        return UNet2DConditionOutput(sample=sample)


pipe.unet = TracedUNet()

with torch.inference_mode():
    image = pipe([prompt] * 1, num_inference_steps=50).images[0]
```

## Memory-efficient attention

Recent work on optimizing the bandwidth in the attention block has produced large speed-ups and reductions in GPU memory usage. The most recent work is Flash Attention by @tridao: [code](https://github.com/HazyResearch/flash-attention), [paper](https://arxiv.org/pdf/2205.14135.pdf).

Here are the speed-ups obtained on a few Nvidia GPUs when running inference at 512x512 with a batch size of 1 (one prompt):

| GPU | Base attention FP16 | Memory-efficient attention FP16 |
|------------------ |--------------------- |--------------------------------- |
| NVIDIA Tesla T4 | 3.5it/s | 5.5it/s |
| NVIDIA 3060 RTX | 4.6it/s | 7.8it/s |
| NVIDIA A10G | 8.88it/s | 15.6it/s |
| NVIDIA RTX A6000 | 11.7it/s | 21.09it/s |
| NVIDIA TITAN RTX | 12.51it/s | 18.22it/s |
| A100-SXM4-40GB | 18.6it/s | 29.it/s |
| A100-SXM-80GB | 18.7it/s | 29.5it/s |

To leverage it, you need to have:

- PyTorch > 1.12
- CUDA available
- [the xformers library installed](xformers)

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

pipe.enable_xformers_memory_efficient_attention()

with torch.inference_mode():
    sample = pipe("a small cat")

# optional: you can disable it again with
# pipe.disable_xformers_memory_efficient_attention()
```
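If you want to reproduce rough latency numbers like those in the table at the top on your own hardware, a simple timing loop is enough. This is a minimal sketch rather than the original benchmark setup; it assumes a CUDA GPU and uses `torch.cuda.synchronize` so the asynchronous kernel launches are actually counted:

```python
import time

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

prompt = "a photo of an astronaut riding a horse on mars"

# warm-up run so one-time costs (CUDA context, autotuning) are not measured
pipe(prompt, num_inference_steps=50)

torch.cuda.synchronize()
start = time.time()
image = pipe(prompt, num_inference_steps=50).images[0]
torch.cuda.synchronize()
print(f"latency: {time.time() - start:.2f}s")
```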
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# DreamBooth

[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3-5) images of a subject. It allows the model to generate contextualized images of the subject in different scenes, poses, and views.

![DreamBooth example from the project's blog](https://dreambooth.github.io/DreamBooth_files/teaser_static.jpg)
<small>DreamBooth example from the <a href="https://dreambooth.github.io">project's blog.</a></small>

This guide will show you how to finetune DreamBooth with the [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) model for various GPU sizes and with Flax. All the DreamBooth training scripts used in this guide can be found [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) if you're interested in digging deeper and seeing how things work.

Before running the scripts, make sure you install the library's training dependencies. We also recommend installing 🧨 Diffusers from the `main` GitHub branch:

```bash
pip install git+https://github.com/huggingface/diffusers
pip install -U -r diffusers/examples/dreambooth/requirements.txt
```

xFormers is not part of the training requirements, but we recommend you [install](../optimization/xformers) it if you can because it can make training faster and reduce memory usage.

After all the dependencies have been set up, initialize a [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

To set up a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your current environment doesn't support an interactive shell, such as a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

## Finetuning

<Tip warning={true}>
DreamBooth finetuning is very sensitive to hyperparameters and easy to overfit. We recommend you take a look at our [in-depth analysis](https://huggingface.co/blog/dreambooth) with recommended settings to help you choose the appropriate hyperparameters.
</Tip>

<frameworkcontent>
<pt>
Let's try DreamBooth with [a few images of a dog](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ); download and save them to a directory, and then set the `INSTANCE_DIR` environment variable to that path:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export OUTPUT_DIR="path_to_saved_model"
```

Then you can launch the training script (the full training script can be found [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py)) with the following command:

```bash
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400
```
</pt>
<jax>
If you have access to TPUs or want to train even faster, you can try out the [Flax training script](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flax.py). The Flax training script doesn't support gradient checkpointing or gradient accumulation, so you'll need a GPU with at least 30GB of memory.

Before running the script, make sure you have the requirements installed:
```bash
pip install -U -r requirements.txt
```

Then you can launch the training script with the following command:

```bash
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"

python train_dreambooth_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=400
```
</jax>
</frameworkcontent>

### Finetuning with prior-preserving loss

Prior preservation is used to avoid overfitting and language drift (check out the [paper](https://arxiv.org/abs/2208.12242) if you're interested). For prior preservation, other images of the same class are used as part of the training process. The nice thing is that you can generate those images with the Stable Diffusion model itself! The training script will save the generated images to a local path you specify.

According to the authors, it is recommended to generate `num_epochs * num_samples` images for prior preservation. In most cases, 200-300 images work well.

<frameworkcontent>
<pt>
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```
</pt>
<jax>
```bash
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

python train_dreambooth_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --num_class_images=200 \
  --max_train_steps=800
```
</jax>
</frameworkcontent>

## Finetuning with the text encoder and UNet

The script also lets you finetune the `text_encoder` along with the `unet`. In our experiments (check out the [Training Stable Diffusion with DreamBooth using 🧨 Diffusers](https://huggingface.co/blog/dreambooth) post for more details), this yields much better results, especially when generating images of faces.

<Tip warning={true}>
Training the text encoder requires additional memory and it won't fit on a 16GB GPU. You'll need at least 24GB VRAM to use this option.
</Tip>

Pass the `--train_text_encoder` argument to the training script to finetune the `text_encoder` and `unet`:

<frameworkcontent>
<pt>
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_text_encoder \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --use_8bit_adam --gradient_checkpointing \
  --learning_rate=2e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```
</pt>
<jax>
```bash
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

python train_dreambooth_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_text_encoder \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=2e-6 \
  --num_class_images=200 \
  --max_train_steps=800
```
</jax>
</frameworkcontent>

## Finetuning with LoRA

You can also use Low-Rank Adaptation of Large Language Models (LoRA), a finetuning technique for accelerating the training of large models, with DreamBooth. For more details, take a look at the [LoRA training](training/lora#dreambooth) guide.

### Saving checkpoints while training

It's easy to overfit while training with DreamBooth, so sometimes it's useful to save regular checkpoints during training. One of the intermediate checkpoints might actually work better than the final model! To enable saving checkpoints, pass the following argument to the training script:

```bash
  --checkpointing_steps=500
```

This saves the full training state in subfolders of your `output_dir`. Subfolder names begin with the prefix `checkpoint-`, followed by the number of steps performed so far; for example, `checkpoint-1500` is a checkpoint saved after 1500 training steps.

#### Resume training from a saved checkpoint

If you want to resume training from a saved checkpoint, pass the `--resume_from_checkpoint` argument and specify the name of the checkpoint you want to use. You can also use the special string `"latest"` to resume from the last saved checkpoint (the one with the largest number of steps). For example, the following resumes training from the checkpoint saved after 1500 steps:

```bash
  --resume_from_checkpoint="checkpoint-1500"
```

This is a good opportunity to tweak some of your hyperparameters if you wish.

#### Inference from a saved checkpoint

Saved checkpoints are stored in a format suitable for resuming training. They include not only the model weights, but also the state of the optimizer, data loaders, and learning rate.

If you have **`"accelerate>=0.16.0"`** installed, use the following code to run inference from an intermediate checkpoint.

```python
from diffusers import DiffusionPipeline, UNet2DConditionModel
from transformers import CLIPTextModel
import torch

# Load the pipeline with the same arguments (model, revision) that were used for training
model_id = "CompVis/stable-diffusion-v1-4"

unet = UNet2DConditionModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/unet")

# if you have trained with `args.train_text_encoder`, make sure to also load the text encoder
text_encoder = CLIPTextModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/text_encoder")

pipeline = DiffusionPipeline.from_pretrained(model_id, unet=unet, text_encoder=text_encoder, dtype=torch.float16)
pipeline.to("cuda")

# Perform inference, or save, or push to the hub
pipeline.save_pretrained("dreambooth-pipeline")
```

If you have **`"accelerate<0.16.0"`** installed, you need to convert it to an inference pipeline first:

```python
from accelerate import Accelerator
from diffusers import DiffusionPipeline

# Load the pipeline with the same arguments (model, revision) that were used for training
model_id = "CompVis/stable-diffusion-v1-4" pipeline = DiffusionPipeline.from_pretrained(model_id) accelerator = Accelerator() # 초기 학습에 `--train_text_encoder`가 사용된 경우 text_encoder를 사용합니다. unet, text_encoder = accelerator.prepare(pipeline.unet, pipeline.text_encoder) # 체크포인트 경로로부터 상태를 복원합니다. 여기서는 절대 경로를 사용해야 합니다. accelerator.load_state("/sddata/dreambooth/daruma-v2-1/checkpoint-100") # unwrapped 모델로 파이프라인을 다시 빌드합니다.(.unet and .text_encoder로의 할당도 작동해야 합니다) pipeline = DiffusionPipeline.from_pretrained( model_id, unet=accelerator.unwrap_model(unet), text_encoder=accelerator.unwrap_model(text_encoder), ) # 추론을 수행하거나 저장하거나, 허브에 푸시합니다. pipeline.save_pretrained("dreambooth-pipeline") ``` ## 각 GPU 용량에서의 최적화 하드웨어에 따라 16GB에서 8GB까지 GPU에서 DreamBooth를 최적화하는 몇 가지 방법이 있습니다! ### xFormers [xFormers](https://github.com/facebookresearch/xformers)는 Transformers를 최적화하기 위한 toolbox이며, 🧨 Diffusers에서 사용되는[memory-efficient attention](https://facebookresearch.github.io/xformers/components/ops.html#module-xformers.ops) 메커니즘을 포함하고 있습니다. [xFormers를 설치](./optimization/xformers)한 다음 학습 스크립트에 다음 인수를 추가합니다: ```bash --enable_xformers_memory_efficient_attention ``` xFormers는 Flax에서 사용할 수 없습니다. ### 그래디언트 없음으로 설정 메모리 사용량을 줄일 수 있는 또 다른 방법은 [기울기 설정](https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html)을 0 대신 `None`으로 하는 것입니다. 그러나 이로 인해 특정 동작이 변경될 수 있으므로 문제가 발생하면 이 인수를 제거해 보십시오. 학습 스크립트에 다음 인수를 추가하여 그래디언트를 `None`으로 설정합니다. ```bash --set_grads_to_none ``` ### 16GB GPU Gradient checkpointing과 [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)의 8비트 옵티마이저의 도움으로, 16GB GPU에서 dreambooth를 훈련할 수 있습니다. bitsandbytes가 설치되어 있는지 확인하세요: ```bash pip install bitsandbytes ``` 그 다음, 학습 스크립트에 `--use_8bit_adam` 옵션을 명시합니다: ```bash export MODEL_NAME="CompVis/stable-diffusion-v1-4" export INSTANCE_DIR="path_to_training_images" export CLASS_DIR="path_to_class_images" export OUTPUT_DIR="path_to_saved_model" accelerate launch train_dreambooth.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --instance_data_dir=$INSTANCE_DIR \ --class_data_dir=$CLASS_DIR \ --output_dir=$OUTPUT_DIR \ --with_prior_preservation --prior_loss_weight=1.0 \ --instance_prompt="a photo of sks dog" \ --class_prompt="a photo of dog" \ --resolution=512 \ --train_batch_size=1 \ --gradient_accumulation_steps=2 --gradient_checkpointing \ --use_8bit_adam \ --learning_rate=5e-6 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --num_class_images=200 \ --max_train_steps=800 ``` ### 12GB GPU 12GB GPU에서 DreamBooth를 실행하려면 gradient checkpointing, 8비트 옵티마이저, xFormers를 활성화하고 그래디언트를 `None`으로 설정해야 합니다. ```bash export MODEL_NAME="CompVis/stable-diffusion-v1-4" export INSTANCE_DIR="path-to-instance-images" export CLASS_DIR="path-to-class-images" export OUTPUT_DIR="path-to-save-model" accelerate launch train_dreambooth.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --instance_data_dir=$INSTANCE_DIR \ --class_data_dir=$CLASS_DIR \ --output_dir=$OUTPUT_DIR \ --with_prior_preservation --prior_loss_weight=1.0 \ --instance_prompt="a photo of sks dog" \ --class_prompt="a photo of dog" \ --resolution=512 \ --train_batch_size=1 \ --gradient_accumulation_steps=1 --gradient_checkpointing \ --use_8bit_adam \ --enable_xformers_memory_efficient_attention \ --set_grads_to_none \ --learning_rate=2e-6 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --num_class_images=200 \ --max_train_steps=800 ``` ### 8GB GPU에서 학습하기 8GB GPU에 대해서는 [DeepSpeed](https://www.deepspeed.ai/)를 사용해 일부 텐서를 VRAM에서 CPU 또는 NVME로 오프로드하여 더 적은 GPU 메모리로 학습할 수도 있습니다. 
Run the following command to configure your 🤗 Accelerate environment:

```bash
accelerate config
```

During configuration, confirm that you want to use DeepSpeed. Combining DeepSpeed stage 2, fp16 mixed precision, and offloading both the model parameters and the optimizer state to the CPU makes it possible to train with less than 8GB of VRAM. The drawback is that this requires more system RAM (about 25GB). See the [DeepSpeed documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more configuration options.

You should also change the default Adam optimizer to DeepSpeed's optimized version of Adam, [`deepspeed.ops.adam.DeepSpeedCPUAdam`](https://deepspeed.readthedocs.io/en/latest/optimizers.html#adam-cpu), for a substantial speedup. Enabling `DeepSpeedCPUAdam` requires your system's CUDA toolchain version to be the same as the one installed with PyTorch.

8-bit optimizers don't seem to be compatible with DeepSpeed at the moment.

Launch training with the following command:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --sample_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800 \
  --mixed_precision=fp16
```

## Inference

Once you have trained a model, specify the path to where the model is saved and use it for inference with the [`StableDiffusionPipeline`]. Make sure your prompt includes the special `identifier` used during training (`sks` in the previous examples).

If you have **`"accelerate>=0.16.0"`** installed, you can use the following code to run inference:

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "path_to_saved_model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]

image.save("dog-bucket.png")
```

You can also run inference from a [saved training checkpoint](#inference-from-a-saved-checkpoint).
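If sampling speed matters more than squeezing out the last bit of quality, you can also swap in a faster scheduler before running inference. The snippet below is a minimal sketch that builds on the pipeline loaded above (`path_to_saved_model` is the same placeholder path); `DPMSolverMultistepScheduler` usually produces good results in roughly 20-25 steps instead of 50.

```python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
import torch

model_id = "path_to_saved_model"  # placeholder path to your trained DreamBooth model
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Replace the default scheduler with a multistep DPM-Solver so fewer denoising steps are needed
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("dog-bucket-dpm.png")
```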
diffusers/docs/source/ko/training/dreambooth.md/0
{ "file_path": "diffusers/docs/source/ko/training/dreambooth.md", "repo_id": "diffusers", "token_count": 11697 }
204
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-guided image-to-image generation

[[open-in-colab]]

The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images.

Before you begin, make sure you have all the necessary libraries installed:

```bash
!pip install diffusers transformers ftfy accelerate
```

Get started by creating a [`StableDiffusionImg2ImgPipeline`] with a pretrained Stable Diffusion model like [`nitrosocke/Ghibli-Diffusion`](https://huggingface.co/nitrosocke/Ghibli-Diffusion).

```python
import torch
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16).to(
    device
)
```

Download and preprocess an initial image so you can pass it to the pipeline:

```python
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image.thumbnail((768, 768))
init_image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_8_output_0.jpeg"/>
</div>

<Tip>

💡 `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values that approach 1.0 allow for lots of variation but will also produce images that are not semantically consistent with the input.

</Tip>

Define the prompt (for this checkpoint fine-tuned on Ghibli-style art, you need to prefix the prompt with the `ghibli style` token) and run the pipeline:

```python
prompt = "ghibli style, a fantasy landscape with castles"
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ghibli-castles.png"/>
</div>

You can also experiment with a different scheduler to see how that affects the output:

```python
from diffusers import LMSDiscreteScheduler

lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = lms
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lms-ghibli.png"/>
</div>

Check out the Space below, and try generating images with different `strength` values. You'll notice that using lower `strength` values produces images that are more similar to the original image. Feel free to also switch the scheduler to the [`LMSDiscreteScheduler`] and see how that affects the output.

<iframe
	src="https://stevhliu-ghibli-img2img.hf.space"
	frameborder="0"
	width="850"
	height="500"
></iframe>
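If you'd rather run the comparison locally than in the Space above, the loop below is a minimal sketch that reuses the `pipe`, `init_image`, and `prompt` objects defined earlier and saves one image per `strength` value so you can compare them side by side.

```python
# Sweep over several strength values to see how closely each output follows the initial image
for strength in [0.3, 0.5, 0.75, 0.9]:
    generator = torch.Generator(device=device).manual_seed(1024)
    image = pipe(
        prompt=prompt,
        image=init_image,
        strength=strength,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    image.save(f"ghibli-strength-{strength}.png")
```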
diffusers/docs/source/ko/using-diffusers/img2img.md/0
{ "file_path": "diffusers/docs/source/ko/using-diffusers/img2img.md", "repo_id": "diffusers", "token_count": 2084 }
205
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

<p align="center">
    <br>
    <img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400"/>
    <br>
</p>

# Diffusers

🤗 Diffusers is a library of state-of-the-art diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple generation solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](conceptual/philosophy#usability-over-performance), [simple over easy](conceptual/philosophy#simple-over-easy), and [customizable over abstractions](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).

The library has three main components:

- State-of-the-art pipelines for generation in just a few lines of code. There are many pipelines in 🤗 Diffusers; check out the table in the pipeline [overview](api/pipelines/overview) for a complete list of the available pipelines and the tasks they solve.
- Interchangeable [noise schedulers](api/schedulers/overview) for balancing the trade-off between generation speed and quality.
- Pretrained [models](api/models) that can be used as building blocks, and combined with schedulers, to create your own end-to-end diffusion system.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/tutorial_overview"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
      <p class="text-gray-700">Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time!</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./using-diffusers/loading_overview"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you load pipelines, models, and schedulers.
You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize generation speed, and explore different training techniques.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual/philosophy"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">Understand why the library was designed the way it is, and learn more about the ethical guidelines and safety implementations for using the library.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./api/models/overview"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Technical descriptions of how the 🤗 Diffusers classes and methods work.</p>
    </a>
  </div>
</div>
diffusers/docs/source/pt/index.md/0
{ "file_path": "diffusers/docs/source/pt/index.md", "repo_id": "diffusers", "token_count": 1653 }
206
# Community Scripts

**Community scripts** consist of inference examples using Diffusers pipelines that have been added by the community. Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste code example that you can try out. If a community script doesn't work as expected, please open an issue and ping the author on it.

| Example | Description | Code Example | Colab | Author |
|:--------|:------------|:-------------|:------|-------:|
| Using IP-Adapter with negative noise | Using negative noise with IP-adapter to better control the generation (see the [original post](https://github.com/huggingface/diffusers/discussions/7167) on the forum for more details) | [IP-Adapter Negative Noise](#ip-adapter-negative-noise) | | [Álvaro Somoza](https://github.com/asomoza) |
| Asymmetric tiling | Configure seamless image tiling independently for the X and Y axes | [Asymmetric Tiling](#asymmetric-tiling) | | [alexisrolland](https://github.com/alexisrolland) |

## Example usages

### IP Adapter Negative Noise

Diffusers pipelines are fully integrated with IP-Adapter, which allows you to prompt the diffusion model with an image. However, it does not support negative image prompts (there is no `negative_ip_adapter_image` argument) the same way it supports negative text prompts. When you pass an `ip_adapter_image`, it will create a zero-filled tensor as the negative image. This script shows you how to create negative noise from the `ip_adapter_image` and use it to significantly improve the generation quality while preserving the composition of the images.

[cubiq](https://github.com/cubiq) initially developed this feature in his [repository](https://github.com/cubiq/ComfyUI_IPAdapter_plus). The community script was contributed by [asomoza](https://github.com/asomoza).
You can find more details about this experimentation [this discussion](https://github.com/huggingface/diffusers/discussions/7167) IP-Adapter without negative noise |source|result| |---|---| |![20240229150812](https://github.com/huggingface/diffusers/assets/5442875/901d8bd8-7a59-4fe7-bda1-a0e0d6c7dffd)|![20240229163923_normal](https://github.com/huggingface/diffusers/assets/5442875/3432e25a-ece6-45f4-a3f4-fca354f40b5b)| IP-Adapter with negative noise |source|result| |---|---| |![20240229150812](https://github.com/huggingface/diffusers/assets/5442875/901d8bd8-7a59-4fe7-bda1-a0e0d6c7dffd)|![20240229163923](https://github.com/huggingface/diffusers/assets/5442875/736fd15a-36ba-40c0-a7d8-6ec1ac26f788)| ```python import torch from diffusers import AutoencoderKL, DPMSolverMultistepScheduler, StableDiffusionXLPipeline from diffusers.models import ImageProjection from diffusers.utils import load_image def encode_image( image_encoder, feature_extractor, image, device, num_images_per_prompt, output_hidden_states=None, negative_image=None, ): dtype = next(image_encoder.parameters()).dtype if not isinstance(image, torch.Tensor): image = feature_extractor(image, return_tensors="pt").pixel_values image = image.to(device=device, dtype=dtype) if output_hidden_states: image_enc_hidden_states = image_encoder(image, output_hidden_states=True).hidden_states[-2] image_enc_hidden_states = image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) if negative_image is None: uncond_image_enc_hidden_states = image_encoder( torch.zeros_like(image), output_hidden_states=True ).hidden_states[-2] else: if not isinstance(negative_image, torch.Tensor): negative_image = feature_extractor(negative_image, return_tensors="pt").pixel_values negative_image = negative_image.to(device=device, dtype=dtype) uncond_image_enc_hidden_states = image_encoder(negative_image, output_hidden_states=True).hidden_states[-2] uncond_image_enc_hidden_states = uncond_image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) return image_enc_hidden_states, uncond_image_enc_hidden_states else: image_embeds = image_encoder(image).image_embeds image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_embeds = torch.zeros_like(image_embeds) return image_embeds, uncond_image_embeds @torch.no_grad() def prepare_ip_adapter_image_embeds( unet, image_encoder, feature_extractor, ip_adapter_image, do_classifier_free_guidance, device, num_images_per_prompt, ip_adapter_negative_image=None, ): if not isinstance(ip_adapter_image, list): ip_adapter_image = [ip_adapter_image] if len(ip_adapter_image) != len(unet.encoder_hid_proj.image_projection_layers): raise ValueError( f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(unet.encoder_hid_proj.image_projection_layers)} IP Adapters." 
) image_embeds = [] for single_ip_adapter_image, image_proj_layer in zip( ip_adapter_image, unet.encoder_hid_proj.image_projection_layers ): output_hidden_state = not isinstance(image_proj_layer, ImageProjection) single_image_embeds, single_negative_image_embeds = encode_image( image_encoder, feature_extractor, single_ip_adapter_image, device, 1, output_hidden_state, negative_image=ip_adapter_negative_image, ) single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0) single_negative_image_embeds = torch.stack([single_negative_image_embeds] * num_images_per_prompt, dim=0) if do_classifier_free_guidance: single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds]) single_image_embeds = single_image_embeds.to(device) image_embeds.append(single_image_embeds) return image_embeds vae = AutoencoderKL.from_pretrained( "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, ).to("cuda") pipeline = StableDiffusionXLPipeline.from_pretrained( "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.float16, vae=vae, variant="fp16", ).to("cuda") pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) pipeline.scheduler.config.use_karras_sigmas = True pipeline.load_ip_adapter( "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.safetensors", image_encoder_folder="models/image_encoder", ) pipeline.set_ip_adapter_scale(0.7) ip_image = load_image("source.png") negative_ip_image = load_image("noise.png") image_embeds = prepare_ip_adapter_image_embeds( unet=pipeline.unet, image_encoder=pipeline.image_encoder, feature_extractor=pipeline.feature_extractor, ip_adapter_image=[[ip_image]], do_classifier_free_guidance=True, device="cuda", num_images_per_prompt=1, ip_adapter_negative_image=negative_ip_image, ) prompt = "cinematic photo of a cyborg in the city, 4k, high quality, intricate, highly detailed" negative_prompt = "blurry, smooth, plastic" image = pipeline( prompt=prompt, negative_prompt=negative_prompt, ip_adapter_image_embeds=image_embeds, guidance_scale=6.0, num_inference_steps=25, generator=torch.Generator(device="cpu").manual_seed(1556265306), ).images[0] image.save("result.png") ``` ### Asymmetric Tiling Stable Diffusion is not trained to generate seamless textures. However, you can use this simple script to add tiling to your generation. This script is contributed by [alexisrolland](https://github.com/alexisrolland). 
See more details in the [this issue](https://github.com/huggingface/diffusers/issues/556) |Generated|Tiled| |---|---| |![20240313003235_573631814](https://github.com/huggingface/diffusers/assets/5442875/eca174fb-06a4-464e-a3a7-00dbb024543e)|![wall](https://github.com/huggingface/diffusers/assets/5442875/b4aa774b-2a6a-4316-a8eb-8f30b5f4d024)| ```py import torch from typing import Optional from diffusers import StableDiffusionPipeline from diffusers.models.lora import LoRACompatibleConv def seamless_tiling(pipeline, x_axis, y_axis): def asymmetric_conv2d_convforward(self, input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None): self.paddingX = (self._reversed_padding_repeated_twice[0], self._reversed_padding_repeated_twice[1], 0, 0) self.paddingY = (0, 0, self._reversed_padding_repeated_twice[2], self._reversed_padding_repeated_twice[3]) working = torch.nn.functional.pad(input, self.paddingX, mode=x_mode) working = torch.nn.functional.pad(working, self.paddingY, mode=y_mode) return torch.nn.functional.conv2d(working, weight, bias, self.stride, torch.nn.modules.utils._pair(0), self.dilation, self.groups) x_mode = 'circular' if x_axis else 'constant' y_mode = 'circular' if y_axis else 'constant' targets = [pipeline.vae, pipeline.text_encoder, pipeline.unet] convolution_layers = [] for target in targets: for module in target.modules(): if isinstance(module, torch.nn.Conv2d): convolution_layers.append(module) for layer in convolution_layers: if isinstance(layer, LoRACompatibleConv) and layer.lora_layer is None: layer.lora_layer = lambda * x: 0 layer._conv_forward = asymmetric_conv2d_convforward.__get__(layer, torch.nn.Conv2d) return pipeline pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True) pipeline.enable_model_cpu_offload() prompt = ["texture of a red brick wall"] seed = 123456 generator = torch.Generator(device='cuda').manual_seed(seed) pipeline = seamless_tiling(pipeline=pipeline, x_axis=True, y_axis=True) image = pipeline( prompt=prompt, width=512, height=512, num_inference_steps=20, guidance_scale=7, num_images_per_prompt=1, generator=generator ).images[0] seamless_tiling(pipeline=pipeline, x_axis=False, y_axis=False) torch.cuda.empty_cache() image.save('image.png') ```
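To sanity-check that the generated texture actually tiles, you can paste it into a small grid and look for visible seams. The helper below is an illustrative sketch (not part of the original script); it only assumes the `image.png` file saved above.

```python
from PIL import Image


def tile_preview(texture: Image.Image, nx: int = 2, ny: int = 2) -> Image.Image:
    """Repeat the texture nx-by-ny times so any seams become easy to spot."""
    width, height = texture.size
    canvas = Image.new("RGB", (width * nx, height * ny))
    for ix in range(nx):
        for iy in range(ny):
            canvas.paste(texture, (ix * width, iy * height))
    return canvas


preview = tile_preview(Image.open("image.png"))
preview.save("image_tiled_preview.png")
```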
diffusers/examples/community/README_community_scripts.md/0
{ "file_path": "diffusers/examples/community/README_community_scripts.md", "repo_id": "diffusers", "token_count": 5643 }
207
import inspect import time from pathlib import Path from typing import Callable, List, Optional, Union import numpy as np import torch from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer from diffusers.configuration_utils import FrozenDict from diffusers.models import AutoencoderKL, UNet2DConditionModel from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler from diffusers.utils import deprecate, logging logger = logging.get_logger(__name__) # pylint: disable=invalid-name def slerp(t, v0, v1, DOT_THRESHOLD=0.9995): """helper function to spherically interpolate two arrays v1 v2""" if not isinstance(v0, np.ndarray): inputs_are_torch = True input_device = v0.device v0 = v0.cpu().numpy() v1 = v1.cpu().numpy() dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1))) if np.abs(dot) > DOT_THRESHOLD: v2 = (1 - t) * v0 + t * v1 else: theta_0 = np.arccos(dot) sin_theta_0 = np.sin(theta_0) theta_t = theta_0 * t sin_theta_t = np.sin(theta_t) s0 = np.sin(theta_0 - theta_t) / sin_theta_0 s1 = sin_theta_t / sin_theta_0 v2 = s0 * v0 + s1 * v1 if inputs_are_torch: v2 = torch.from_numpy(v2).to(input_device) return v2 class StableDiffusionWalkPipeline(DiffusionPipeline, StableDiffusionMixin): r""" Pipeline for text-to-image generation using Stable Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details. feature_extractor ([`CLIPImageProcessor`]): Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
""" def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, ): super().__init__() if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " "to update the config accordingly as leaving `steps_offset` might led to incorrect results" " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" " file" ) deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["steps_offset"] = 1 scheduler._internal_dict = FrozenDict(new_config) if safety_checker is None: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, ) @torch.no_grad() def __call__( self, prompt: Optional[Union[str, List[str]]] = None, height: int = 512, width: int = 512, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[torch.Generator] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, text_embeddings: Optional[torch.Tensor] = None, **kwargs, ): r""" Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*, defaults to `None`): The prompt or prompts to guide the image generation. If not provided, `text_embeddings` is required. height (`int`, *optional*, defaults to 512): The height in pixels of the generated image. width (`int`, *optional*, defaults to 512): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 7.5): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. 
negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to [`schedulers.DDIMScheduler`], will be ignored for others. generator (`torch.Generator`, *optional*): A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied random `generator`. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function will be called. If not specified, the callback will be called at every step. text_embeddings (`torch.Tensor`, *optional*, defaults to `None`): Pre-generated text embeddings to be used as inputs for image generation. Can be used in place of `prompt` to avoid re-computing the embeddings. If not provided, the embeddings will be generated from the supplied `prompt`. Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the `safety_checker`. """ if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if (callback_steps is None) or ( callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) ): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." 
) if text_embeddings is None: if isinstance(prompt, str): batch_size = 1 elif isinstance(prompt, list): batch_size = len(prompt) else: raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") # get prompt text embeddings text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, return_tensors="pt", ) text_input_ids = text_inputs.input_ids if text_input_ids.shape[-1] > self.tokenizer.model_max_length: removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) print( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0] else: batch_size = text_embeddings.shape[0] # duplicate text embeddings for each generation per prompt, using mps friendly method bs_embed, seq_len, _ = text_embeddings.shape text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." ) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt max_length = self.tokenizer.model_max_length uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = uncond_embeddings.shape[1] uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) # get the initial random noise unless the user supplied it # Unlike in other pipelines, latents need to be generated in the target device # for 1-to-1 results reproducibility with the CompVis implementation. # However this currently doesn't work in `mps`. 
latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8) latents_dtype = text_embeddings.dtype if latents is None: if self.device.type == "mps": # randn does not work reproducibly on mps latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to( self.device ) else: latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype) else: if latents.shape != latents_shape: raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") latents = latents.to(self.device) # set timesteps self.scheduler.set_timesteps(num_inference_steps) # Some schedulers like PNDM have timesteps as arrays # It's more optimized to move all timesteps to correct device beforehand timesteps_tensor = self.scheduler.timesteps.to(self.device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta for i, t in enumerate(self.progress_bar(timesteps_tensor)): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) latents = 1 / 0.18215 * latents image = self.vae.decode(latents).sample image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() if self.safety_checker is not None: safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to( self.device ) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype) ) else: has_nsfw_concept = None if output_type == "pil": image = self.numpy_to_pil(image) if not return_dict: return (image, has_nsfw_concept) return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) def embed_text(self, text): """takes in text and turns it into text embeddings""" text_input = self.tokenizer( text, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) with torch.no_grad(): embed = self.text_encoder(text_input.input_ids.to(self.device))[0] return embed def get_noise(self, seed, dtype=torch.float32, height=512, width=512): """Takes in random 
seed and returns corresponding noise vector""" return torch.randn( (1, self.unet.config.in_channels, height // 8, width // 8), generator=torch.Generator(device=self.device).manual_seed(seed), device=self.device, dtype=dtype, ) def walk( self, prompts: List[str], seeds: List[int], num_interpolation_steps: Optional[int] = 6, output_dir: Optional[str] = "./dreams", name: Optional[str] = None, batch_size: Optional[int] = 1, height: Optional[int] = 512, width: Optional[int] = 512, guidance_scale: Optional[float] = 7.5, num_inference_steps: Optional[int] = 50, eta: Optional[float] = 0.0, ) -> List[str]: """ Walks through a series of prompts and seeds, interpolating between them and saving the results to disk. Args: prompts (`List[str]`): List of prompts to generate images for. seeds (`List[int]`): List of seeds corresponding to provided prompts. Must be the same length as prompts. num_interpolation_steps (`int`, *optional*, defaults to 6): Number of interpolation steps to take between prompts. output_dir (`str`, *optional*, defaults to `./dreams`): Directory to save the generated images to. name (`str`, *optional*, defaults to `None`): Subdirectory of `output_dir` to save the generated images to. If `None`, the name will be the current time. batch_size (`int`, *optional*, defaults to 1): Number of images to generate at once. height (`int`, *optional*, defaults to 512): Height of the generated images. width (`int`, *optional*, defaults to 512): Width of the generated images. guidance_scale (`float`, *optional*, defaults to 7.5): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to [`schedulers.DDIMScheduler`], will be ignored for others. Returns: `List[str]`: List of paths to the generated images. 
""" if not len(prompts) == len(seeds): raise ValueError( f"Number of prompts and seeds must be equalGot {len(prompts)} prompts and {len(seeds)} seeds" ) name = name or time.strftime("%Y%m%d-%H%M%S") save_path = Path(output_dir) / name save_path.mkdir(exist_ok=True, parents=True) frame_idx = 0 frame_filepaths = [] for prompt_a, prompt_b, seed_a, seed_b in zip(prompts, prompts[1:], seeds, seeds[1:]): # Embed Text embed_a = self.embed_text(prompt_a) embed_b = self.embed_text(prompt_b) # Get Noise noise_dtype = embed_a.dtype noise_a = self.get_noise(seed_a, noise_dtype, height, width) noise_b = self.get_noise(seed_b, noise_dtype, height, width) noise_batch, embeds_batch = None, None T = np.linspace(0.0, 1.0, num_interpolation_steps) for i, t in enumerate(T): noise = slerp(float(t), noise_a, noise_b) embed = torch.lerp(embed_a, embed_b, t) noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise], dim=0) embeds_batch = embed if embeds_batch is None else torch.cat([embeds_batch, embed], dim=0) batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == T.shape[0] if batch_is_ready: outputs = self( latents=noise_batch, text_embeddings=embeds_batch, height=height, width=width, guidance_scale=guidance_scale, eta=eta, num_inference_steps=num_inference_steps, ) noise_batch, embeds_batch = None, None for image in outputs["images"]: frame_filepath = str(save_path / f"frame_{frame_idx:06d}.png") image.save(frame_filepath) frame_filepaths.append(frame_filepath) frame_idx += 1 return frame_filepaths
diffusers/examples/community/interpolate_stable_diffusion.py/0
{ "file_path": "diffusers/examples/community/interpolate_stable_diffusion.py", "repo_id": "diffusers", "token_count": 11215 }
208
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect from typing import Any, Callable, Dict, List, Optional, Tuple, Union import numpy as np import torch import torch.nn.functional as F from PIL import Image from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection from diffusers.image_processor import PipelineImageInput, VaeImageProcessor from diffusers.loaders import IPAdapterMixin, LoraLoaderMixin, TextualInversionLoaderMixin from diffusers.models import AutoencoderKL, ControlNetModel, ImageProjection, UNet2DConditionModel, UNetMotionModel from diffusers.models.lora import adjust_lora_scale_text_encoder from diffusers.models.unets.unet_motion_model import MotionAdapter from diffusers.pipelines.animatediff.pipeline_output import AnimateDiffPipelineOutput from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin from diffusers.schedulers import ( DDIMScheduler, DPMSolverMultistepScheduler, EulerAncestralDiscreteScheduler, EulerDiscreteScheduler, LMSDiscreteScheduler, PNDMScheduler, ) from diffusers.utils import USE_PEFT_BACKEND, deprecate, logging, scale_lora_layers, unscale_lora_layers from diffusers.utils.torch_utils import is_compiled_module, randn_tensor logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter >>> from diffusers.pipelines import DiffusionPipeline >>> from diffusers.schedulers import DPMSolverMultistepScheduler >>> from PIL import Image >>> motion_id = "guoyww/animatediff-motion-adapter-v1-5-2" >>> adapter = MotionAdapter.from_pretrained(motion_id) >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16) >>> vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16) >>> model_id = "SG161222/Realistic_Vision_V5.1_noVAE" >>> pipe = DiffusionPipeline.from_pretrained( ... model_id, ... motion_adapter=adapter, ... controlnet=controlnet, ... vae=vae, ... custom_pipeline="pipeline_animatediff_controlnet", ... ).to(device="cuda", dtype=torch.float16) >>> pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained( ... model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1, beta_schedule="linear", ... ) >>> pipe.enable_vae_slicing() >>> conditioning_frames = [] >>> for i in range(1, 16 + 1): ... conditioning_frames.append(Image.open(f"frame_{i}.png")) >>> prompt = "astronaut in space, dancing" >>> negative_prompt = "bad quality, worst quality, jpeg artifacts, ugly" >>> result = pipe( ... prompt=prompt, ... negative_prompt=negative_prompt, ... width=512, ... height=768, ... conditioning_frames=conditioning_frames, ... num_inference_steps=12, ... 
) >>> from diffusers.utils import export_to_gif >>> export_to_gif(result.frames[0], "result.gif") ``` """ # Copied from diffusers.pipelines.animatediff.pipeline_animatediff.tensor2vid def tensor2vid(video: torch.Tensor, processor, output_type="np"): batch_size, channels, num_frames, height, width = video.shape outputs = [] for batch_idx in range(batch_size): batch_vid = video[batch_idx].permute(1, 0, 2, 3) batch_output = processor.postprocess(batch_vid, output_type) outputs.append(batch_output) if output_type == "np": outputs = np.stack(outputs) elif output_type == "pt": outputs = torch.stack(outputs) elif not output_type == "pil": raise ValueError(f"{output_type} does not exist. Please choose one of ['np', 'pt', 'pil']") return outputs class AnimateDiffControlNetPipeline( DiffusionPipeline, StableDiffusionMixin, TextualInversionLoaderMixin, IPAdapterMixin, LoraLoaderMixin ): r""" Pipeline for text-to-video generation. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.LoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer (`CLIPTokenizer`): A [`~transformers.CLIPTokenizer`] to tokenize text. unet ([`UNet2DConditionModel`]): A [`UNet2DConditionModel`] used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter ([`MotionAdapter`]): A [`MotionAdapter`] to be used in combination with `unet` to denoise the encoded video latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
""" model_cpu_offload_seq = "text_encoder->unet->vae" _optional_components = ["feature_extractor", "image_encoder"] _callback_tensor_inputs = ["latents", "prompt_embeds", "negative_prompt_embeds"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, motion_adapter: MotionAdapter, controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel], scheduler: Union[ DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, ], feature_extractor: Optional[CLIPImageProcessor] = None, image_encoder: Optional[CLIPVisionModelWithProjection] = None, ): super().__init__() unet = UNetMotionModel.from_unet2d(unet, motion_adapter) if isinstance(controlnet, (list, tuple)): controlnet = MultiControlNetModel(controlnet) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, motion_adapter=motion_adapter, controlnet=controlnet, scheduler=scheduler, feature_extractor=feature_extractor, image_encoder=image_encoder, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) self.control_image_processor = VaeImageProcessor( vae_scale_factor=self.vae_scale_factor, do_convert_rgb=True, do_normalize=False ) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_prompt with num_images_per_prompt -> num_videos_per_prompt def encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, clip_skip: Optional[int] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. 
""" # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, LoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) else: scale_lora_layers(self.text_encoder, lora_scale) if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, self.tokenizer) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = self.tokenizer.batch_decode( untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] ) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None if clip_skip is None: prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask) prompt_embeds = prompt_embeds[0] else: prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, output_hidden_states=True ) # Access the `hidden_states` first, that contains a tuple of # all the hidden states from the encoder layers. Then index into # the tuple to access the hidden states from the desired layer. prompt_embeds = prompt_embeds[-1][-(clip_skip + 1)] # We also need to apply the final LayerNorm here to not mess with the # representations. The `last_hidden_states` that we typically use for # obtaining the final prompt representations passes through the LayerNorm # layer. prompt_embeds = self.text_encoder.text_model.final_layer_norm(prompt_embeds) if self.text_encoder is not None: prompt_embeds_dtype = self.text_encoder.dtype elif self.unet is not None: prompt_embeds_dtype = self.unet.dtype else: prompt_embeds_dtype = prompt_embeds.dtype prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance and negative_prompt_embeds is None: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." 
) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) max_length = prompt_embeds.shape[1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) if isinstance(self, LoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder, lora_scale) return prompt_embeds, negative_prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_image def encode_image(self, image, device, num_images_per_prompt, output_hidden_states=None): dtype = next(self.image_encoder.parameters()).dtype if not isinstance(image, torch.Tensor): image = self.feature_extractor(image, return_tensors="pt").pixel_values image = image.to(device=device, dtype=dtype) if output_hidden_states: image_enc_hidden_states = self.image_encoder(image, output_hidden_states=True).hidden_states[-2] image_enc_hidden_states = image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_enc_hidden_states = self.image_encoder( torch.zeros_like(image), output_hidden_states=True ).hidden_states[-2] uncond_image_enc_hidden_states = uncond_image_enc_hidden_states.repeat_interleave( num_images_per_prompt, dim=0 ) return image_enc_hidden_states, uncond_image_enc_hidden_states else: image_embeds = self.image_encoder(image).image_embeds image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_embeds = torch.zeros_like(image_embeds) return image_embeds, uncond_image_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_ip_adapter_image_embeds def prepare_ip_adapter_image_embeds( self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt ): if ip_adapter_image_embeds is None: if not isinstance(ip_adapter_image, list): ip_adapter_image = [ip_adapter_image] if len(ip_adapter_image) != len(self.unet.encoder_hid_proj.image_projection_layers): raise ValueError( f"`ip_adapter_image` must have same length as the number of IP Adapters. 
Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters." ) image_embeds = [] for single_ip_adapter_image, image_proj_layer in zip( ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers ): output_hidden_state = not isinstance(image_proj_layer, ImageProjection) single_image_embeds, single_negative_image_embeds = self.encode_image( single_ip_adapter_image, device, 1, output_hidden_state ) single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0) single_negative_image_embeds = torch.stack( [single_negative_image_embeds] * num_images_per_prompt, dim=0 ) if self.do_classifier_free_guidance: single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds]) single_image_embeds = single_image_embeds.to(device) image_embeds.append(single_image_embeds) else: image_embeds = ip_adapter_image_embeds return image_embeds # Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents def decode_latents(self, latents): latents = 1 / self.vae.config.scaling_factor * latents batch_size, channels, num_frames, height, width = latents.shape latents = latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width) image = self.vae.decode(latents).sample video = ( image[None, :] .reshape( ( batch_size, num_frames, -1, ) + image.shape[2:] ) .permute(0, 2, 1, 3, 4) ) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 video = video.float() return video # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs def check_inputs( self, prompt, height, width, num_frames, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None, callback_on_step_end_tensor_inputs=None, image=None, controlnet_conditioning_scale=1.0, control_guidance_start=0.0, control_guidance_end=1.0, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." 
) if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) # `prompt` needs more sophisticated handling when there are multiple # conditionings. if isinstance(self.controlnet, MultiControlNetModel): if isinstance(prompt, list): logger.warning( f"You have {len(self.controlnet.nets)} ControlNets and you have passed {len(prompt)}" " prompts. The conditionings will be fixed across the prompts." 
) # Check `image` is_compiled = hasattr(F, "scaled_dot_product_attention") and isinstance( self.controlnet, torch._dynamo.eval_frame.OptimizedModule ) if ( isinstance(self.controlnet, ControlNetModel) or is_compiled and isinstance(self.controlnet._orig_mod, ControlNetModel) ): if not isinstance(image, list): raise TypeError(f"For single controlnet, `image` must be of type `list` but got {type(image)}") if len(image) != num_frames: raise ValueError(f"Excepted image to have length {num_frames} but got {len(image)=}") elif ( isinstance(self.controlnet, MultiControlNetModel) or is_compiled and isinstance(self.controlnet._orig_mod, MultiControlNetModel) ): if not isinstance(image, list) or not isinstance(image[0], list): raise TypeError(f"For multiple controlnets: `image` must be type list of lists but got {type(image)=}") if len(image[0]) != num_frames: raise ValueError(f"Expected length of image sublist as {num_frames} but got {len(image[0])=}") if any(len(img) != len(image[0]) for img in image): raise ValueError("All conditioning frame batches for multicontrolnet must be same size") else: assert False # Check `controlnet_conditioning_scale` if ( isinstance(self.controlnet, ControlNetModel) or is_compiled and isinstance(self.controlnet._orig_mod, ControlNetModel) ): if not isinstance(controlnet_conditioning_scale, float): raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.") elif ( isinstance(self.controlnet, MultiControlNetModel) or is_compiled and isinstance(self.controlnet._orig_mod, MultiControlNetModel) ): if isinstance(controlnet_conditioning_scale, list): if any(isinstance(i, list) for i in controlnet_conditioning_scale): raise ValueError("A single batch of multiple conditionings are supported at the moment.") elif isinstance(controlnet_conditioning_scale, list) and len(controlnet_conditioning_scale) != len( self.controlnet.nets ): raise ValueError( "For multiple controlnets: When `controlnet_conditioning_scale` is specified as `list`, it must have" " the same length as the number of controlnets" ) else: assert False if not isinstance(control_guidance_start, (tuple, list)): control_guidance_start = [control_guidance_start] if not isinstance(control_guidance_end, (tuple, list)): control_guidance_end = [control_guidance_end] if len(control_guidance_start) != len(control_guidance_end): raise ValueError( f"`control_guidance_start` has {len(control_guidance_start)} elements, but `control_guidance_end` has {len(control_guidance_end)} elements. Make sure to provide the same number of elements to each list." ) if isinstance(self.controlnet, MultiControlNetModel): if len(control_guidance_start) != len(self.controlnet.nets): raise ValueError( f"`control_guidance_start`: {control_guidance_start} has {len(control_guidance_start)} elements but there are {len(self.controlnet.nets)} controlnets available. Make sure to provide {len(self.controlnet.nets)}." ) for start, end in zip(control_guidance_start, control_guidance_end): if start >= end: raise ValueError( f"control guidance start: {start} cannot be larger or equal to control guidance end: {end}." 
) if start < 0.0: raise ValueError(f"control guidance start: {start} can't be smaller than 0.") if end > 1.0: raise ValueError(f"control guidance end: {end} can't be larger than 1.0.") # Copied from diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline.check_image def check_image(self, image, prompt, prompt_embeds): image_is_pil = isinstance(image, Image.Image) image_is_tensor = isinstance(image, torch.Tensor) image_is_np = isinstance(image, np.ndarray) image_is_pil_list = isinstance(image, list) and isinstance(image[0], Image.Image) image_is_tensor_list = isinstance(image, list) and isinstance(image[0], torch.Tensor) image_is_np_list = isinstance(image, list) and isinstance(image[0], np.ndarray) if ( not image_is_pil and not image_is_tensor and not image_is_np and not image_is_pil_list and not image_is_tensor_list and not image_is_np_list ): raise TypeError( f"image must be passed and be one of PIL image, numpy array, torch tensor, list of PIL images, list of numpy arrays or list of torch tensors, but is {type(image)}" ) if image_is_pil: image_batch_size = 1 else: image_batch_size = len(image) if prompt is not None and isinstance(prompt, str): prompt_batch_size = 1 elif prompt is not None and isinstance(prompt, list): prompt_batch_size = len(prompt) elif prompt_embeds is not None: prompt_batch_size = prompt_embeds.shape[0] if image_batch_size != 1 and image_batch_size != prompt_batch_size: raise ValueError( f"If image batch size is not 1, image batch size must be same as prompt batch size. image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}" ) # Copied from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.prepare_latents def prepare_latents( self, batch_size, num_channels_latents, num_frames, height, width, dtype, device, generator, latents=None ): shape = ( batch_size, num_channels_latents, num_frames, height // self.vae_scale_factor, width // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents # Copied from diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline.prepare_image def prepare_image( self, image, width, height, batch_size, num_images_per_prompt, device, dtype, do_classifier_free_guidance=False, guess_mode=False, ): image = self.control_image_processor.preprocess(image, height=height, width=width).to(dtype=torch.float32) image_batch_size = image.shape[0] if image_batch_size == 1: repeat_by = batch_size else: # image batch size is the same as prompt batch size repeat_by = num_images_per_prompt image = image.repeat_interleave(repeat_by, dim=0) image = image.to(device=device, dtype=dtype) if do_classifier_free_guidance and not guess_mode: image = torch.cat([image] * 2) return image @property def guidance_scale(self): return self._guidance_scale @property def clip_skip(self): return self._clip_skip # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. @property def do_classifier_free_guidance(self): return self._guidance_scale > 1 @property def cross_attention_kwargs(self): return self._cross_attention_kwargs @property def num_timesteps(self): return self._num_timesteps @torch.no_grad() def __call__( self, prompt: Union[str, List[str]] = None, num_frames: Optional[int] = 16, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_videos_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, ip_adapter_image: Optional[PipelineImageInput] = None, ip_adapter_image_embeds: Optional[PipelineImageInput] = None, conditioning_frames: Optional[List[PipelineImageInput]] = None, output_type: Optional[str] = "pil", return_dict: bool = True, cross_attention_kwargs: Optional[Dict[str, Any]] = None, controlnet_conditioning_scale: Union[float, List[float]] = 1.0, guess_mode: bool = False, control_guidance_start: Union[float, List[float]] = 0.0, control_guidance_end: Union[float, List[float]] = 1.0, clip_skip: Optional[int] = None, callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], **kwargs, ): r""" The call function to the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The height in pixels of the generated video. width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The width in pixels of the generated video. num_frames (`int`, *optional*, defaults to 16): The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds amounts to 2 seconds of video. 
num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality videos at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. Latents should be of shape `(batch_size, num_channel, num_frames, height, width)`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. ip_adapter_image (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters. ip_adapter_image_embeds (`List[torch.Tensor]`, *optional*): Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument. conditioning_frames (`List[PipelineImageInput]`, *optional*): The ControlNet input condition to provide guidance to the `unet` for generation. If multiple ControlNets are specified, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] instead of a plain tuple. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). 
controlnet_conditioning_scale (`float` or `List[float]`, *optional*, defaults to 1.0): The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set the corresponding scale as a list. guess_mode (`bool`, *optional*, defaults to `False`): The ControlNet encoder tries to recognize the content of the input image even if you remove all prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended. control_guidance_start (`float` or `List[float]`, *optional*, defaults to 0.0): The percentage of total steps at which the ControlNet starts applying. control_guidance_end (`float` or `List[float]`, *optional*, defaults to 1.0): The percentage of total steps at which the ControlNet stops applying. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. Examples: Returns: [`~pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated frames. """ callback = kwargs.pop("callback", None) callback_steps = kwargs.pop("callback_steps", None) if callback is not None: deprecate( "callback", "1.0.0", "Passing `callback` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`", ) if callback_steps is not None: deprecate( "callback_steps", "1.0.0", "Passing `callback_steps` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`", ) controlnet = self.controlnet._orig_mod if is_compiled_module(self.controlnet) else self.controlnet # align format for control guidance if not isinstance(control_guidance_start, list) and isinstance(control_guidance_end, list): control_guidance_start = len(control_guidance_end) * [control_guidance_start] elif not isinstance(control_guidance_end, list) and isinstance(control_guidance_start, list): control_guidance_end = len(control_guidance_start) * [control_guidance_end] elif not isinstance(control_guidance_start, list) and not isinstance(control_guidance_end, list): mult = len(controlnet.nets) if isinstance(controlnet, MultiControlNetModel) else 1 control_guidance_start, control_guidance_end = ( mult * [control_guidance_start], mult * [control_guidance_end], ) # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor num_videos_per_prompt = 1 # 1. Check inputs. 
Raise error if not correct self.check_inputs( prompt=prompt, height=height, width=width, num_frames=num_frames, callback_steps=callback_steps, negative_prompt=negative_prompt, callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, image=conditioning_frames, controlnet_conditioning_scale=controlnet_conditioning_scale, control_guidance_start=control_guidance_start, control_guidance_end=control_guidance_end, ) self._guidance_scale = guidance_scale self._clip_skip = clip_skip self._cross_attention_kwargs = cross_attention_kwargs # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] device = self._execution_device if isinstance(controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float): controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(controlnet.nets) global_pool_conditions = ( controlnet.config.global_pool_conditions if isinstance(controlnet, ControlNetModel) else controlnet.nets[0].config.global_pool_conditions ) guess_mode = guess_mode or global_pool_conditions # 3. Encode input prompt text_encoder_lora_scale = ( cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None ) prompt_embeds, negative_prompt_embeds = self.encode_prompt( prompt, device, num_videos_per_prompt, self.do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=text_encoder_lora_scale, clip_skip=self.clip_skip, ) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes if self.do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) if ip_adapter_image is not None: image_embeds = self.prepare_ip_adapter_image_embeds( ip_adapter_image, ip_adapter_image_embeds, device, batch_size * num_videos_per_prompt ) if isinstance(controlnet, ControlNetModel): conditioning_frames = self.prepare_image( image=conditioning_frames, width=width, height=height, batch_size=batch_size * num_videos_per_prompt * num_frames, num_images_per_prompt=num_videos_per_prompt, device=device, dtype=controlnet.dtype, do_classifier_free_guidance=self.do_classifier_free_guidance, guess_mode=guess_mode, ) elif isinstance(controlnet, MultiControlNetModel): cond_prepared_frames = [] for frame_ in conditioning_frames: prepared_frame = self.prepare_image( image=frame_, width=width, height=height, batch_size=batch_size * num_videos_per_prompt * num_frames, num_images_per_prompt=num_videos_per_prompt, device=device, dtype=controlnet.dtype, do_classifier_free_guidance=self.do_classifier_free_guidance, guess_mode=guess_mode, ) cond_prepared_frames.append(prepared_frame) conditioning_frames = cond_prepared_frames else: assert False # 4. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps self._num_timesteps = len(timesteps) # 5. Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_videos_per_prompt, num_channels_latents, num_frames, height, width, prompt_embeds.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. 
TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 7. Add image embeds for IP-Adapter added_cond_kwargs = ( {"image_embeds": image_embeds} if ip_adapter_image is not None or ip_adapter_image_embeds is not None else None ) # 7.1 Create tensor stating which controlnets to keep controlnet_keep = [] for i in range(len(timesteps)): keeps = [ 1.0 - float(i / len(timesteps) < s or (i + 1) / len(timesteps) > e) for s, e in zip(control_guidance_start, control_guidance_end) ] controlnet_keep.append(keeps[0] if isinstance(controlnet, ControlNetModel) else keeps) # 8. Denoising loop num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) if guess_mode and self.do_classifier_free_guidance: # Infer ControlNet only for the conditional batch. control_model_input = latents control_model_input = self.scheduler.scale_model_input(control_model_input, t) controlnet_prompt_embeds = prompt_embeds.chunk(2)[1] else: control_model_input = latent_model_input controlnet_prompt_embeds = prompt_embeds controlnet_prompt_embeds = controlnet_prompt_embeds.repeat_interleave(num_frames, dim=0) if isinstance(controlnet_keep[i], list): cond_scale = [c * s for c, s in zip(controlnet_conditioning_scale, controlnet_keep[i])] else: controlnet_cond_scale = controlnet_conditioning_scale if isinstance(controlnet_cond_scale, list): controlnet_cond_scale = controlnet_cond_scale[0] cond_scale = controlnet_cond_scale * controlnet_keep[i] control_model_input = torch.transpose(control_model_input, 1, 2) control_model_input = control_model_input.reshape( (-1, control_model_input.shape[2], control_model_input.shape[3], control_model_input.shape[4]) ) down_block_res_samples, mid_block_res_sample = self.controlnet( control_model_input, t, encoder_hidden_states=controlnet_prompt_embeds, controlnet_cond=conditioning_frames, conditioning_scale=cond_scale, guess_mode=guess_mode, return_dict=False, ) # predict the noise residual noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, cross_attention_kwargs=self.cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, down_block_additional_residuals=down_block_res_samples, mid_block_additional_residual=mid_block_res_sample, ).sample # perform guidance if self.do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) latents = callback_outputs.pop("latents", latents) prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds) # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if 
callback is not None and i % callback_steps == 0: callback(i, t, latents) # 9. Post processing if output_type == "latent": video = latents else: video_tensor = self.decode_latents(latents) video = tensor2vid(video_tensor, self.image_processor, output_type=output_type) # 10. Offload all models self.maybe_free_model_hooks() if not return_dict: return (video,) return AnimateDiffPipelineOutput(frames=video)
diffusers/examples/community/pipeline_animatediff_controlnet.py/0
{ "file_path": "diffusers/examples/community/pipeline_animatediff_controlnet.py", "repo_id": "diffusers", "token_count": 24782 }
209
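The record above is the community `pipeline_animatediff_controlnet.py` script. For orientation, here is a minimal, hedged sketch of how a community pipeline like this is typically loaded through `custom_pipeline` and called; the checkpoint IDs, the `motion_adapter`/`controlnet` constructor arguments, and the output indexing are assumptions drawn from the code above and the usual diffusers community-pipeline workflow, not from this record itself.

```python
# Hedged usage sketch for the AnimateDiff + ControlNet community pipeline above.
# All checkpoint IDs are illustrative placeholders; swap in any compatible
# SD 1.5 base model, motion adapter, and ControlNet.
import torch
from diffusers import ControlNetModel, DiffusionPipeline, MotionAdapter
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",                   # placeholder SD 1.5 base model
    motion_adapter=adapter,                             # assumed constructor argument
    controlnet=controlnet,                              # assumed constructor argument
    custom_pipeline="pipeline_animatediff_controlnet",  # loads the community file above
    torch_dtype=torch.float16,
).to("cuda")

# `conditioning_frames` must be a list with exactly `num_frames` entries
# (see the length check in `check_inputs` above). A real run would pass one
# pose/depth/edge map per frame; a single repeated image keeps the sketch short.
num_frames = 16
pose_map = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
conditioning_frames = [pose_map] * num_frames

output = pipe(
    prompt="an astronaut dancing on the moon, best quality",
    negative_prompt="low quality, blurry",
    num_frames=num_frames,
    conditioning_frames=conditioning_frames,
    num_inference_steps=25,
    guidance_scale=7.5,
    generator=torch.manual_seed(0),
)
frames = output.frames[0]  # PIL frames for the first prompt; indexing may differ across diffusers versions
export_to_gif(frames, "animation.gif")
```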
import math from typing import Dict, Optional import torch import torchvision.transforms.functional as FF from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer from diffusers import StableDiffusionPipeline from diffusers.models import AutoencoderKL, UNet2DConditionModel from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker from diffusers.schedulers import KarrasDiffusionSchedulers from diffusers.utils import USE_PEFT_BACKEND try: from compel import Compel except ImportError: Compel = None KCOMM = "ADDCOMM" KBRK = "BREAK" class RegionalPromptingStableDiffusionPipeline(StableDiffusionPipeline): r""" Args for Regional Prompting Pipeline: rp_args:dict Required rp_args["mode"]: cols, rows, prompt, prompt-ex for cols, rows mode rp_args["div"]: ex) 1;1;1(Divide into 3 regions) for prompt, prompt-ex mode rp_args["th"]: ex) 0.5,0.5,0.6 (threshold for prompt mode) Optional rp_args["save_mask"]: True/False (save masks in prompt mode) Pipeline for text-to-image generation using Stable Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details. feature_extractor ([`CLIPImageProcessor`]): Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
""" def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPFeatureExtractor, requires_safety_checker: bool = True, ): super().__init__( vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker, ) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, ) @torch.no_grad() def __call__( self, prompt: str, height: int = 512, width: int = 512, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: str = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[torch.Generator] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, rp_args: Dict[str, str] = None, ): active = KBRK in prompt[0] if isinstance(prompt, list) else KBRK in prompt if negative_prompt is None: negative_prompt = "" if isinstance(prompt, str) else [""] * len(prompt) device = self._execution_device regions = 0 self.power = int(rp_args["power"]) if "power" in rp_args else 1 prompts = prompt if isinstance(prompt, list) else [prompt] n_prompts = negative_prompt if isinstance(prompt, str) else [negative_prompt] self.batch = batch = num_images_per_prompt * len(prompts) all_prompts_cn, all_prompts_p = promptsmaker(prompts, num_images_per_prompt) all_n_prompts_cn, _ = promptsmaker(n_prompts, num_images_per_prompt) equal = len(all_prompts_cn) == len(all_n_prompts_cn) if Compel: compel = Compel(tokenizer=self.tokenizer, text_encoder=self.text_encoder) def getcompelembs(prps): embl = [] for prp in prps: embl.append(compel.build_conditioning_tensor(prp)) return torch.cat(embl) conds = getcompelembs(all_prompts_cn) unconds = getcompelembs(all_n_prompts_cn) embs = getcompelembs(prompts) n_embs = getcompelembs(n_prompts) prompt = negative_prompt = None else: conds = self.encode_prompt(prompts, device, 1, True)[0] unconds = ( self.encode_prompt(n_prompts, device, 1, True)[0] if equal else self.encode_prompt(all_n_prompts_cn, device, 1, True)[0] ) embs = n_embs = None if not active: pcallback = None mode = None else: if any(x in rp_args["mode"].upper() for x in ["COL", "ROW"]): mode = "COL" if "COL" in rp_args["mode"].upper() else "ROW" ocells, icells, regions = make_cells(rp_args["div"]) elif "PRO" in rp_args["mode"].upper(): regions = len(all_prompts_p[0]) mode = "PROMPT" reset_attnmaps(self) self.ex = "EX" in rp_args["mode"].upper() self.target_tokens = target_tokens = tokendealer(self, all_prompts_p) thresholds = [float(x) for x in rp_args["th"].split(",")] orig_hw = (height, width) revers = True def pcallback(s_self, step: int, timestep: int, latents: torch.Tensor, selfs=None): if "PRO" in mode: # in Prompt mode, make masks from sum of attension maps self.step = step if len(self.attnmaps_sizes) > 3: self.history[step] = self.attnmaps.copy() for hw in self.attnmaps_sizes: allmasks = [] basemasks = [None] * batch for tt, th in zip(target_tokens, thresholds): for b in range(batch): key = f"{tt}-{b}" _, mask, _ = makepmask(self, self.attnmaps[key], hw[0], hw[1], th, step) mask = mask.unsqueeze(0).unsqueeze(-1) if self.ex: allmasks[b::batch] = [x - mask for x in allmasks[b::batch]] allmasks[b::batch] = [torch.where(x > 0, 1, 0) for x in allmasks[b::batch]] allmasks.append(mask) basemasks[b] = 
mask if basemasks[b] is None else basemasks[b] + mask basemasks = [1 - mask for mask in basemasks] basemasks = [torch.where(x > 0, 1, 0) for x in basemasks] allmasks = basemasks + allmasks self.attnmasks[hw] = torch.cat(allmasks) self.maskready = True return latents def hook_forward(module): # diffusers==0.23.2 def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, temb: Optional[torch.Tensor] = None, scale: float = 1.0, ) -> torch.Tensor: attn = module xshape = hidden_states.shape self.hw = (h, w) = split_dims(xshape[1], *orig_hw) if revers: nx, px = hidden_states.chunk(2) else: px, nx = hidden_states.chunk(2) if equal: hidden_states = torch.cat( [px for i in range(regions)] + [nx for i in range(regions)], 0, ) encoder_hidden_states = torch.cat([conds] + [unconds]) else: hidden_states = torch.cat([px for i in range(regions)] + [nx], 0) encoder_hidden_states = torch.cat([conds] + [unconds]) residual = hidden_states args = () if USE_PEFT_BACKEND else (scale,) if attn.spatial_norm is not None: hidden_states = attn.spatial_norm(hidden_states, temb) input_ndim = hidden_states.ndim if input_ndim == 4: batch_size, channel, height, width = hidden_states.shape hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) batch_size, sequence_length, _ = ( hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape ) if attention_mask is not None: attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1]) if attn.group_norm is not None: hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) args = () if USE_PEFT_BACKEND else (scale,) query = attn.to_q(hidden_states, *args) if encoder_hidden_states is None: encoder_hidden_states = hidden_states elif attn.norm_cross: encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) key = attn.to_k(encoder_hidden_states, *args) value = attn.to_v(encoder_hidden_states, *args) inner_dim = key.shape[-1] head_dim = inner_dim // attn.heads query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) # the output of sdp = (batch, num_heads, seq_len, head_dim) # TODO: add support for attn.scale when we move to Torch 2.1 hidden_states = scaled_dot_product_attention( self, query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False, getattn="PRO" in mode, ) hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim) hidden_states = hidden_states.to(query.dtype) # linear proj hidden_states = attn.to_out[0](hidden_states, *args) # dropout hidden_states = attn.to_out[1](hidden_states) if input_ndim == 4: hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) if attn.residual_connection: hidden_states = hidden_states + residual hidden_states = hidden_states / attn.rescale_output_factor #### Regional Prompting Col/Row mode if any(x in mode for x in ["COL", "ROW"]): reshaped = hidden_states.reshape(hidden_states.size()[0], h, w, hidden_states.size()[2]) center = reshaped.shape[0] // 2 px = reshaped[0:center] if equal else reshaped[0:-batch] nx = reshaped[center:] if equal else reshaped[-batch:] outs = [px, nx] if equal else [px] for out 
in outs: c = 0 for i, ocell in enumerate(ocells): for icell in icells[i]: if "ROW" in mode: out[ 0:batch, int(h * ocell[0]) : int(h * ocell[1]), int(w * icell[0]) : int(w * icell[1]), :, ] = out[ c * batch : (c + 1) * batch, int(h * ocell[0]) : int(h * ocell[1]), int(w * icell[0]) : int(w * icell[1]), :, ] else: out[ 0:batch, int(h * icell[0]) : int(h * icell[1]), int(w * ocell[0]) : int(w * ocell[1]), :, ] = out[ c * batch : (c + 1) * batch, int(h * icell[0]) : int(h * icell[1]), int(w * ocell[0]) : int(w * ocell[1]), :, ] c += 1 px, nx = (px[0:batch], nx[0:batch]) if equal else (px[0:batch], nx) hidden_states = torch.cat([nx, px], 0) if revers else torch.cat([px, nx], 0) hidden_states = hidden_states.reshape(xshape) #### Regional Prompting Prompt mode elif "PRO" in mode: px, nx = ( torch.chunk(hidden_states) if equal else hidden_states[0:-batch], hidden_states[-batch:], ) if (h, w) in self.attnmasks and self.maskready: def mask(input): out = torch.multiply(input, self.attnmasks[(h, w)]) for b in range(batch): for r in range(1, regions): out[b] = out[b] + out[r * batch + b] return out px, nx = (mask(px), mask(nx)) if equal else (mask(px), nx) px, nx = (px[0:batch], nx[0:batch]) if equal else (px[0:batch], nx) hidden_states = torch.cat([nx, px], 0) if revers else torch.cat([px, nx], 0) return hidden_states return forward def hook_forwards(root_module: torch.nn.Module): for name, module in root_module.named_modules(): if "attn2" in name and module.__class__.__name__ == "Attention": module.forward = hook_forward(module) hook_forwards(self.unet) output = StableDiffusionPipeline(**self.components)( prompt=prompt, prompt_embeds=embs, negative_prompt=negative_prompt, negative_prompt_embeds=n_embs, height=height, width=width, num_inference_steps=num_inference_steps, guidance_scale=guidance_scale, num_images_per_prompt=num_images_per_prompt, eta=eta, generator=generator, latents=latents, output_type=output_type, return_dict=return_dict, callback_on_step_end=pcallback, ) if "save_mask" in rp_args: save_mask = rp_args["save_mask"] else: save_mask = False if mode == "PROMPT" and save_mask: saveattnmaps( self, output, height, width, thresholds, num_inference_steps // 2, regions, ) return output ### Make prompt list for each regions def promptsmaker(prompts, batch): out_p = [] plen = len(prompts) for prompt in prompts: add = "" if KCOMM in prompt: add, prompt = prompt.split(KCOMM) add = add + " " prompts = prompt.split(KBRK) out_p.append([add + p for p in prompts]) out = [None] * batch * len(out_p[0]) * len(out_p) for p, prs in enumerate(out_p): # inputs prompts for r, pr in enumerate(prs): # prompts for regions start = (p + r * plen) * batch out[start : start + batch] = [pr] * batch # P1R1B1,P1R1B2...,P1R2B1,P1R2B2...,P2R1B1... 
return out, out_p ### make regions from ratios ### ";" makes outercells, "," makes inner cells def make_cells(ratios): if ";" not in ratios and "," in ratios: ratios = ratios.replace(",", ";") ratios = ratios.split(";") ratios = [inratios.split(",") for inratios in ratios] icells = [] ocells = [] def startend(cells, array): current_start = 0 array = [float(x) for x in array] for value in array: end = current_start + (value / sum(array)) cells.append([current_start, end]) current_start = end startend(ocells, [r[0] for r in ratios]) for inratios in ratios: if 2 > len(inratios): icells.append([[0, 1]]) else: add = [] startend(add, inratios[1:]) icells.append(add) return ocells, icells, sum(len(cell) for cell in icells) def make_emblist(self, prompts): with torch.no_grad(): tokens = self.tokenizer( prompts, max_length=self.tokenizer.model_max_length, padding=True, truncation=True, return_tensors="pt", ).input_ids.to(self.device) embs = self.text_encoder(tokens, output_hidden_states=True).last_hidden_state.to(self.device, dtype=self.dtype) return embs def split_dims(xs, height, width): xs = xs def repeat_div(x, y): while y > 0: x = math.ceil(x / 2) y = y - 1 return x scale = math.ceil(math.log2(math.sqrt(height * width / xs))) dsh = repeat_div(height, scale) dsw = repeat_div(width, scale) return dsh, dsw ##### for prompt mode def get_attn_maps(self, attn): height, width = self.hw target_tokens = self.target_tokens if (height, width) not in self.attnmaps_sizes: self.attnmaps_sizes.append((height, width)) for b in range(self.batch): for t in target_tokens: power = self.power add = attn[b, :, :, t[0] : t[0] + len(t)] ** (power) * (self.attnmaps_sizes.index((height, width)) + 1) add = torch.sum(add, dim=2) key = f"{t}-{b}" if key not in self.attnmaps: self.attnmaps[key] = add else: if self.attnmaps[key].shape[1] != add.shape[1]: add = add.view(8, height, width) add = FF.resize(add, self.attnmaps_sizes[0], antialias=None) add = add.reshape_as(self.attnmaps[key]) self.attnmaps[key] = self.attnmaps[key] + add def reset_attnmaps(self): # init parameters in every batch self.step = 0 self.attnmaps = {} # maked from attention maps self.attnmaps_sizes = [] # height,width set of u-net blocks self.attnmasks = {} # maked from attnmaps for regions self.maskready = False self.history = {} def saveattnmaps(self, output, h, w, th, step, regions): masks = [] for i, mask in enumerate(self.history[step].values()): img, _, mask = makepmask(self, mask, h, w, th[i % len(th)], step) if self.ex: masks = [x - mask for x in masks] masks.append(mask) if len(masks) == regions - 1: output.images.extend([FF.to_pil_image(mask) for mask in masks]) masks = [] else: output.images.append(img) def makepmask( self, mask, h, w, th, step ): # make masks from attention cache return [for preview, for attention, for Latent] th = th - step * 0.005 if 0.05 >= th: th = 0.05 mask = torch.mean(mask, dim=0) mask = mask / mask.max().item() mask = torch.where(mask > th, 1, 0) mask = mask.float() mask = mask.view(1, *self.attnmaps_sizes[0]) img = FF.to_pil_image(mask) img = img.resize((w, h)) mask = FF.resize(mask, (h, w), interpolation=FF.InterpolationMode.NEAREST, antialias=None) lmask = mask mask = mask.reshape(h * w) mask = torch.where(mask > 0.1, 1, 0) return img, mask, lmask def tokendealer(self, all_prompts): for prompts in all_prompts: targets = [p.split(",")[-1] for p in prompts[1:]] tt = [] for target in targets: ptokens = ( self.tokenizer( prompts, max_length=self.tokenizer.model_max_length, padding=True, truncation=True, 
return_tensors="pt", ).input_ids )[0] ttokens = ( self.tokenizer( target, max_length=self.tokenizer.model_max_length, padding=True, truncation=True, return_tensors="pt", ).input_ids )[0] tlist = [] for t in range(ttokens.shape[0] - 2): for p in range(ptokens.shape[0]): if ttokens[t + 1] == ptokens[p]: tlist.append(p) if tlist != []: tt.append(tlist) return tt def scaled_dot_product_attention( self, query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False, scale=None, getattn=False, ) -> torch.Tensor: # Efficient implementation equivalent to the following: L, S = query.size(-2), key.size(-2) scale_factor = 1 / math.sqrt(query.size(-1)) if scale is None else scale attn_bias = torch.zeros(L, S, dtype=query.dtype, device=self.device) if is_causal: assert attn_mask is None temp_mask = torch.ones(L, S, dtype=torch.bool).tril(diagonal=0) attn_bias.masked_fill_(temp_mask.logical_not(), float("-inf")) attn_bias.to(query.dtype) if attn_mask is not None: if attn_mask.dtype == torch.bool: attn_mask.masked_fill_(attn_mask.logical_not(), float("-inf")) else: attn_bias += attn_mask attn_weight = query @ key.transpose(-2, -1) * scale_factor attn_weight += attn_bias attn_weight = torch.softmax(attn_weight, dim=-1) if getattn: get_attn_maps(self, attn_weight) attn_weight = torch.dropout(attn_weight, dropout_p, train=True) return attn_weight @ value
diffusers/examples/community/regional_prompting_stable_diffusion.py/0
{ "file_path": "diffusers/examples/community/regional_prompting_stable_diffusion.py", "repo_id": "diffusers", "token_count": 13635 }
210
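The record above is the community `regional_prompting_stable_diffusion.py` script. As a rough, hedged sketch of how it is meant to be driven (based only on the class docstring, the `BREAK`/`ADDCOMM` keywords, and the `__call__` signature above), a two-column prompt could look like this; the checkpoint ID is a placeholder and the exact regional layout is an assumption.

```python
# Hedged usage sketch for the regional prompting community pipeline above.
# The rp_args keys ("mode", "div", "th", "power", "save_mask") and the BREAK /
# ADDCOMM prompt keywords come from the class docstring; the checkpoint is a placeholder.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",                        # placeholder SD 1.5 checkpoint
    custom_pipeline="regional_prompting_stable_diffusion",   # loads the community file above
    torch_dtype=torch.float16,
).to("cuda")

# "BREAK" splits the prompt into one sub-prompt per region; "ADDCOMM" (not used
# here) would prepend a shared phrase to every region. With mode="cols" and
# div="1;1" the image is split into two equal-width columns.
prompt = "a sunny tropical beach BREAK a snowy mountain ridge"
negative_prompt = "low quality, blurry"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=torch.manual_seed(0),
    rp_args={"mode": "cols", "div": "1;1"},
).images[0]
image.save("regional_prompting.png")
```

For the "prompt" mode described in the docstring, `rp_args` would instead carry a comma-separated threshold list, for example `{"mode": "prompt", "th": "0.5,0.5"}`, with one threshold per region.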
# Inspired by: https://github.com/Mikubill/sd-webui-controlnet/discussions/1236 and https://github.com/Mikubill/sd-webui-controlnet/discussions/1280 import inspect from typing import Any, Callable, Dict, List, Optional, Tuple, Union import numpy as np import PIL.Image import torch from packaging import version from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer from diffusers import AutoencoderKL, DiffusionPipeline, UNet2DConditionModel from diffusers.configuration_utils import FrozenDict, deprecate from diffusers.image_processor import VaeImageProcessor from diffusers.loaders import FromSingleFileMixin, IPAdapterMixin, LoraLoaderMixin, TextualInversionLoaderMixin from diffusers.models.attention import BasicTransformerBlock from diffusers.models.lora import adjust_lora_scale_text_encoder from diffusers.models.unets.unet_2d_blocks import CrossAttnDownBlock2D, CrossAttnUpBlock2D, DownBlock2D, UpBlock2D from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import rescale_noise_cfg from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker from diffusers.schedulers import KarrasDiffusionSchedulers from diffusers.utils import ( PIL_INTERPOLATION, USE_PEFT_BACKEND, logging, scale_lora_layers, unscale_lora_layers, ) from diffusers.utils.torch_utils import randn_tensor logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers import UniPCMultistepScheduler >>> from diffusers.utils import load_image >>> input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png") >>> pipe = StableDiffusionReferencePipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", safety_checker=None, torch_dtype=torch.float16 ).to('cuda:0') >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) >>> result_img = pipe(ref_image=input_image, prompt="1girl", num_inference_steps=20, reference_attn=True, reference_adain=True).images[0] >>> result_img.show() ``` """ def torch_dfs(model: torch.nn.Module): r""" Performs a depth-first search on the given PyTorch model and returns a list of all its child modules. Args: model (torch.nn.Module): The PyTorch model to perform the depth-first search on. Returns: list: A list of all child modules of the given model. """ result = [model] for child in model.children(): result += torch_dfs(child) return result class StableDiffusionReferencePipeline( DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin, IPAdapterMixin, FromSingleFileMixin ): r""" " Pipeline for Stable Diffusion Reference. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.LoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. feature_extractor ([`CLIPImageProcessor`]): Model that extracts features from generated images to be used as inputs for the `safety_checker`. """ _optional_components = ["safety_checker", "feature_extractor"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, requires_safety_checker: bool = True, ): super().__init__() if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " "to update the config accordingly as leaving `steps_offset` might led to incorrect results" " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" " file" ) deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["steps_offset"] = 1 scheduler._internal_dict = FrozenDict(new_config) if hasattr(scheduler.config, "skip_prk_steps") and scheduler.config.skip_prk_steps is False: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} has not set the configuration" " `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make" " sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to" " incorrect results in future versions. 
If you have downloaded this checkpoint from the Hugging Face" " Hub, it would be very nice if you could open a Pull request for the" " `scheduler/scheduler_config.json` file" ) deprecate( "skip_prk_steps not set", "1.0.0", deprecation_message, standard_warn=False, ) new_config = dict(scheduler.config) new_config["skip_prk_steps"] = True scheduler._internal_dict = FrozenDict(new_config) if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." ) is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( version.parse(unet.config._diffusers_version).base_version ) < version.parse("0.9.0.dev0") is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: deprecation_message = ( "The configuration file of the unet has set the default `sample_size` to smaller than" " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" " in the config might lead to incorrect results in future versions. If you have downloaded this" " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" " the `unet/config.json` file" ) deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(unet.config) new_config["sample_size"] = 64 unet._internal_dict = FrozenDict(new_config) # Check shapes, assume num_channels_latents == 4, num_channels_mask == 1, num_channels_masked == 4 if unet.config.in_channels != 4: logger.warning( f"You have loaded a UNet with {unet.config.in_channels} input channels, whereas by default," f" {self.__class__} assumes that `pipeline.unet` has 4 input channels: 4 for `num_channels_latents`," ". If you did not intend to modify" " this behavior, please check whether you have loaded the right checkpoint." 
) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) self.register_to_config(requires_safety_checker=requires_safety_checker) def _default_height_width( self, height: Optional[int], width: Optional[int], image: Union[PIL.Image.Image, torch.Tensor, List[PIL.Image.Image]], ) -> Tuple[int, int]: r""" Calculate the default height and width for the given image. Args: height (int or None): The desired height of the image. If None, the height will be determined based on the input image. width (int or None): The desired width of the image. If None, the width will be determined based on the input image. image (PIL.Image.Image or torch.Tensor or list[PIL.Image.Image]): The input image or a list of images. Returns: Tuple[int, int]: A tuple containing the calculated height and width. """ # NOTE: It is possible that a list of images have different # dimensions for each image, so just checking the first image # is not _exactly_ correct, but it is simple. while isinstance(image, list): image = image[0] if height is None: if isinstance(image, PIL.Image.Image): height = image.height elif isinstance(image, torch.Tensor): height = image.shape[2] height = (height // 8) * 8 # round down to nearest multiple of 8 if width is None: if isinstance(image, PIL.Image.Image): width = image.width elif isinstance(image, torch.Tensor): width = image.shape[3] width = (width // 8) * 8 # round down to nearest multiple of 8 return height, width # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs def check_inputs( self, prompt: Optional[Union[str, List[str]]], height: int, width: int, callback_steps: Optional[int], negative_prompt: Optional[str] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, ip_adapter_image: Optional[torch.Tensor] = None, ip_adapter_image_embeds: Optional[torch.Tensor] = None, callback_on_step_end_tensor_inputs: Optional[List[str]] = None, ) -> None: """ Check the validity of the input arguments for the diffusion model. Args: prompt (Optional[Union[str, List[str]]]): The prompt text or list of prompt texts. height (int): The height of the input image. width (int): The width of the input image. callback_steps (Optional[int]): The number of steps to perform the callback on. negative_prompt (Optional[str]): The negative prompt text. prompt_embeds (Optional[torch.Tensor]): The prompt embeddings. negative_prompt_embeds (Optional[torch.Tensor]): The negative prompt embeddings. ip_adapter_image (Optional[torch.Tensor]): The input adapter image. ip_adapter_image_embeds (Optional[torch.Tensor]): The input adapter image embeddings. callback_on_step_end_tensor_inputs (Optional[List[str]]): The list of tensor inputs to perform the callback on. Raises: ValueError: If `height` or `width` is not divisible by 8. ValueError: If `callback_steps` is not a positive integer. ValueError: If `callback_on_step_end_tensor_inputs` contains invalid tensor inputs. ValueError: If both `prompt` and `prompt_embeds` are provided. ValueError: If neither `prompt` nor `prompt_embeds` are provided. ValueError: If `prompt` is not of type `str` or `list`. ValueError: If both `negative_prompt` and `negative_prompt_embeds` are provided. 
ValueError: If both `prompt_embeds` and `negative_prompt_embeds` are provided and have different shapes. ValueError: If both `ip_adapter_image` and `ip_adapter_image_embeds` are provided. Returns: None """ if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) if ip_adapter_image is not None and ip_adapter_image_embeds is not None: raise ValueError( "Provide either `ip_adapter_image` or `ip_adapter_image_embeds`. Cannot leave both `ip_adapter_image` and `ip_adapter_image_embeds` defined." ) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt def _encode_prompt( self, prompt: Union[str, List[str]], device: torch.device, num_images_per_prompt: int, do_classifier_free_guidance: bool, negative_prompt: Optional[Union[str, List[str]]] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, **kwargs, ) -> torch.Tensor: r""" Encodes the prompt into embeddings. Args: prompt (Union[str, List[str]]): The prompt text or a list of prompt texts. device (torch.device): The device to use for encoding. num_images_per_prompt (int): The number of images per prompt. do_classifier_free_guidance (bool): Whether to use classifier-free guidance. negative_prompt (Optional[Union[str, List[str]]], optional): The negative prompt text or a list of negative prompt texts. Defaults to None. prompt_embeds (Optional[torch.Tensor], optional): The prompt embeddings. Defaults to None. negative_prompt_embeds (Optional[torch.Tensor], optional): The negative prompt embeddings. Defaults to None. lora_scale (Optional[float], optional): The LoRA scale. Defaults to None. 
**kwargs: Additional keyword arguments. Returns: torch.Tensor: The encoded prompt embeddings. """ deprecation_message = "`_encode_prompt()` is deprecated and it will be removed in a future version. Use `encode_prompt()` instead. Also, be aware that the output format changed from a concatenated tensor to a tuple." deprecate("_encode_prompt()", "1.0.0", deprecation_message, standard_warn=False) prompt_embeds_tuple = self.encode_prompt( prompt=prompt, device=device, num_images_per_prompt=num_images_per_prompt, do_classifier_free_guidance=do_classifier_free_guidance, negative_prompt=negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=lora_scale, **kwargs, ) # concatenate for backwards comp prompt_embeds = torch.cat([prompt_embeds_tuple[1], prompt_embeds_tuple[0]]) return prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_prompt def encode_prompt( self, prompt: Optional[str], device: torch.device, num_images_per_prompt: int, do_classifier_free_guidance: bool, negative_prompt: Optional[str] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, clip_skip: Optional[int] = None, ) -> torch.Tensor: r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. 
""" # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, LoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) else: scale_lora_layers(self.text_encoder, lora_scale) if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, self.tokenizer) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = self.tokenizer.batch_decode( untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] ) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None if clip_skip is None: prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask) prompt_embeds = prompt_embeds[0] else: prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, output_hidden_states=True ) # Access the `hidden_states` first, that contains a tuple of # all the hidden states from the encoder layers. Then index into # the tuple to access the hidden states from the desired layer. prompt_embeds = prompt_embeds[-1][-(clip_skip + 1)] # We also need to apply the final LayerNorm here to not mess with the # representations. The `last_hidden_states` that we typically use for # obtaining the final prompt representations passes through the LayerNorm # layer. prompt_embeds = self.text_encoder.text_model.final_layer_norm(prompt_embeds) if self.text_encoder is not None: prompt_embeds_dtype = self.text_encoder.dtype elif self.unet is not None: prompt_embeds_dtype = self.unet.dtype else: prompt_embeds_dtype = prompt_embeds.dtype prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance and negative_prompt_embeds is None: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." 
) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) max_length = prompt_embeds.shape[1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) if isinstance(self, LoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder, lora_scale) return prompt_embeds, negative_prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents( self, batch_size: int, num_channels_latents: int, height: int, width: int, dtype: torch.dtype, device: torch.device, generator: Union[torch.Generator, List[torch.Generator]], latents: Optional[torch.Tensor] = None, ) -> torch.Tensor: r""" Prepare the latent vectors for diffusion. Args: batch_size (int): The number of samples in the batch. num_channels_latents (int): The number of channels in the latent vectors. height (int): The height of the latent vectors. width (int): The width of the latent vectors. dtype (torch.dtype): The data type of the latent vectors. device (torch.device): The device to place the latent vectors on. generator (Union[torch.Generator, List[torch.Generator]]): The generator(s) to use for random number generation. latents (Optional[torch.Tensor]): The pre-existing latent vectors. If None, new latent vectors will be generated. Returns: torch.Tensor: The prepared latent vectors. """ shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs( self, generator: Union[torch.Generator, List[torch.Generator]], eta: float ) -> Dict[str, Any]: r""" Prepare extra keyword arguments for the scheduler step. Args: generator (Union[torch.Generator, List[torch.Generator]]): The generator used for sampling. eta (float): The value of eta (η) used with the DDIMScheduler. Should be between 0 and 1. Returns: Dict[str, Any]: A dictionary containing the extra keyword arguments for the scheduler step. """ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def prepare_image( self, image: Union[torch.Tensor, PIL.Image.Image, List[Union[torch.Tensor, PIL.Image.Image]]], width: int, height: int, batch_size: int, num_images_per_prompt: int, device: torch.device, dtype: torch.dtype, do_classifier_free_guidance: bool = False, guess_mode: bool = False, ) -> torch.Tensor: r""" Prepares the input image for processing. Args: image (torch.Tensor or PIL.Image.Image or list): The input image(s). width (int): The desired width of the image. height (int): The desired height of the image. batch_size (int): The batch size for processing. num_images_per_prompt (int): The number of images per prompt. device (torch.device): The device to use for processing. dtype (torch.dtype): The data type of the image. do_classifier_free_guidance (bool, optional): Whether to perform classifier-free guidance. Defaults to False. guess_mode (bool, optional): Whether to use guess mode. Defaults to False. Returns: torch.Tensor: The prepared image for processing. 
""" if not isinstance(image, torch.Tensor): if isinstance(image, PIL.Image.Image): image = [image] if isinstance(image[0], PIL.Image.Image): images = [] for image_ in image: image_ = image_.convert("RGB") image_ = image_.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]) image_ = np.array(image_) image_ = image_[None, :] images.append(image_) image = images image = np.concatenate(image, axis=0) image = np.array(image).astype(np.float32) / 255.0 image = (image - 0.5) / 0.5 image = image.transpose(0, 3, 1, 2) image = torch.from_numpy(image) elif isinstance(image[0], torch.Tensor): image = torch.cat(image, dim=0) image_batch_size = image.shape[0] if image_batch_size == 1: repeat_by = batch_size else: # image batch size is the same as prompt batch size repeat_by = num_images_per_prompt image = image.repeat_interleave(repeat_by, dim=0) image = image.to(device=device, dtype=dtype) if do_classifier_free_guidance and not guess_mode: image = torch.cat([image] * 2) return image def prepare_ref_latents( self, refimage: torch.Tensor, batch_size: int, dtype: torch.dtype, device: torch.device, generator: Union[int, List[int]], do_classifier_free_guidance: bool, ) -> torch.Tensor: r""" Prepares reference latents for generating images. Args: refimage (torch.Tensor): The reference image. batch_size (int): The desired batch size. dtype (torch.dtype): The data type of the tensors. device (torch.device): The device to perform computations on. generator (int or list): The generator index or a list of generator indices. do_classifier_free_guidance (bool): Whether to use classifier-free guidance. Returns: torch.Tensor: The prepared reference latents. """ refimage = refimage.to(device=device, dtype=dtype) # encode the mask image into latents space so we can concatenate it to the latents if isinstance(generator, list): ref_image_latents = [ self.vae.encode(refimage[i : i + 1]).latent_dist.sample(generator=generator[i]) for i in range(batch_size) ] ref_image_latents = torch.cat(ref_image_latents, dim=0) else: ref_image_latents = self.vae.encode(refimage).latent_dist.sample(generator=generator) ref_image_latents = self.vae.config.scaling_factor * ref_image_latents # duplicate mask and ref_image_latents for each generation per prompt, using mps friendly method if ref_image_latents.shape[0] < batch_size: if not batch_size % ref_image_latents.shape[0] == 0: raise ValueError( "The passed images and the required batch size don't match. Images are supposed to be duplicated" f" to a total batch size of {batch_size}, but {ref_image_latents.shape[0]} images were passed." " Make sure the number of images that you pass is divisible by the total requested batch size." ) ref_image_latents = ref_image_latents.repeat(batch_size // ref_image_latents.shape[0], 1, 1, 1) # aligning device to prevent device errors when concating it with the latent model input ref_image_latents = ref_image_latents.to(device=device, dtype=dtype) return ref_image_latents # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker def run_safety_checker( self, image: Union[torch.Tensor, PIL.Image.Image], device: torch.device, dtype: torch.dtype ) -> Tuple[Union[torch.Tensor, PIL.Image.Image], Optional[bool]]: r""" Runs the safety checker on the given image. Args: image (Union[torch.Tensor, PIL.Image.Image]): The input image to be checked. device (torch.device): The device to run the safety checker on. dtype (torch.dtype): The data type of the input image. 
Returns: (image, has_nsfw_concept) Tuple[Union[torch.Tensor, PIL.Image.Image], Optional[bool]]: A tuple containing the processed image and a boolean indicating whether the image has a NSFW (Not Safe for Work) concept. """ if self.safety_checker is None: has_nsfw_concept = None else: if torch.is_tensor(image): feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") else: feature_extractor_input = self.image_processor.numpy_to_pil(image) safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(dtype) ) return image, has_nsfw_concept @torch.no_grad() def __call__( self, prompt: Union[str, List[str]] = None, ref_image: Union[torch.Tensor, PIL.Image.Image] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, guidance_rescale: float = 0.0, attention_auto_machine_weight: float = 1.0, gn_auto_machine_weight: float = 1.0, style_fidelity: float = 0.5, reference_attn: bool = True, reference_adain: bool = True, ): r""" Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. ref_image (`torch.Tensor`, `PIL.Image.Image`): The Reference Control input condition. Reference Control uses this input condition to generate guidance to Unet. If the type is specified as `torch.Tensor`, it is passed to Reference Control as is. `PIL.Image.Image` can also be accepted as an image. height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The height in pixels of the generated image. width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 7.5): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. 
eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to [`schedulers.DDIMScheduler`], will be ignored for others. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function will be called. If not specified, the callback will be called at every step. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). guidance_rescale (`float`, *optional*, defaults to 0.0): Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). `guidance_scale` is defined as `φ` in equation 16 of [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). Guidance rescale factor should fix overexposure when using zero terminal SNR. attention_auto_machine_weight (`float`): Weight of using reference query for self attention's context. If attention_auto_machine_weight=1.0, use reference query for all self attention's context. gn_auto_machine_weight (`float`): Weight of using reference adain. If gn_auto_machine_weight=2.0, use all reference adain plugins. style_fidelity (`float`): Style fidelity of ref_uncond_xt. If style_fidelity=1.0, the reference control is weighted more heavily; if style_fidelity=0.0, the prompt is weighted more heavily; intermediate values balance the two. reference_attn (`bool`): Whether to use reference query for self attention's context. reference_adain (`bool`): Whether to use reference adain. Examples: Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the `safety_checker`. """ assert reference_attn or reference_adain, "`reference_attn` or `reference_adain` must be True." # 0. Default height and width to unet height, width = self._default_height_width(height, width, ref_image) # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds ) # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. Encode input prompt text_encoder_lora_scale = ( cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None ) prompt_embeds = self._encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=text_encoder_lora_scale, ) # 4. Preprocess reference image ref_image = self.prepare_image( image=ref_image, width=width, height=height, batch_size=batch_size * num_images_per_prompt, num_images_per_prompt=num_images_per_prompt, device=device, dtype=prompt_embeds.dtype, ) # 5. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # 6. Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, prompt_embeds.dtype, device, generator, latents, ) # 7. Prepare reference latent variables ref_image_latents = self.prepare_ref_latents( ref_image, batch_size * num_images_per_prompt, prompt_embeds.dtype, device, generator, do_classifier_free_guidance, ) # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 9. Modify self attention and group norm MODE = "write" uc_mask = ( torch.Tensor([1] * batch_size * num_images_per_prompt + [0] * batch_size * num_images_per_prompt) .type_as(ref_image_latents) .bool() ) def hacked_basic_transformer_inner_forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, timestep: Optional[torch.LongTensor] = None, cross_attention_kwargs: Dict[str, Any] = None, class_labels: Optional[torch.LongTensor] = None, ): if self.use_ada_layer_norm: norm_hidden_states = self.norm1(hidden_states, timestep) elif self.use_ada_layer_norm_zero: norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1( hidden_states, timestep, class_labels, hidden_dtype=hidden_states.dtype ) else: norm_hidden_states = self.norm1(hidden_states) # 1. 
Self-Attention cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} if self.only_cross_attention: attn_output = self.attn1( norm_hidden_states, encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, attention_mask=attention_mask, **cross_attention_kwargs, ) else: if MODE == "write": self.bank.append(norm_hidden_states.detach().clone()) attn_output = self.attn1( norm_hidden_states, encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, attention_mask=attention_mask, **cross_attention_kwargs, ) if MODE == "read": if attention_auto_machine_weight > self.attn_weight: attn_output_uc = self.attn1( norm_hidden_states, encoder_hidden_states=torch.cat([norm_hidden_states] + self.bank, dim=1), # attention_mask=attention_mask, **cross_attention_kwargs, ) attn_output_c = attn_output_uc.clone() if do_classifier_free_guidance and style_fidelity > 0: attn_output_c[uc_mask] = self.attn1( norm_hidden_states[uc_mask], encoder_hidden_states=norm_hidden_states[uc_mask], **cross_attention_kwargs, ) attn_output = style_fidelity * attn_output_c + (1.0 - style_fidelity) * attn_output_uc self.bank.clear() else: attn_output = self.attn1( norm_hidden_states, encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, attention_mask=attention_mask, **cross_attention_kwargs, ) if self.use_ada_layer_norm_zero: attn_output = gate_msa.unsqueeze(1) * attn_output hidden_states = attn_output + hidden_states if self.attn2 is not None: norm_hidden_states = ( self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states) ) # 2. Cross-Attention attn_output = self.attn2( norm_hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=encoder_attention_mask, **cross_attention_kwargs, ) hidden_states = attn_output + hidden_states # 3. 
Feed-forward norm_hidden_states = self.norm3(hidden_states) if self.use_ada_layer_norm_zero: norm_hidden_states = norm_hidden_states * (1 + scale_mlp[:, None]) + shift_mlp[:, None] ff_output = self.ff(norm_hidden_states) if self.use_ada_layer_norm_zero: ff_output = gate_mlp.unsqueeze(1) * ff_output hidden_states = ff_output + hidden_states return hidden_states def hacked_mid_forward(self, *args, **kwargs): eps = 1e-6 x = self.original_forward(*args, **kwargs) if MODE == "write": if gn_auto_machine_weight >= self.gn_weight: var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0) self.mean_bank.append(mean) self.var_bank.append(var) if MODE == "read": if len(self.mean_bank) > 0 and len(self.var_bank) > 0: var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0) std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 mean_acc = sum(self.mean_bank) / float(len(self.mean_bank)) var_acc = sum(self.var_bank) / float(len(self.var_bank)) std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 x_uc = (((x - mean) / std) * std_acc) + mean_acc x_c = x_uc.clone() if do_classifier_free_guidance and style_fidelity > 0: x_c[uc_mask] = x[uc_mask] x = style_fidelity * x_c + (1.0 - style_fidelity) * x_uc self.mean_bank = [] self.var_bank = [] return x def hack_CrossAttnDownBlock2D_forward( self, hidden_states: torch.Tensor, temb: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, cross_attention_kwargs: Optional[Dict[str, Any]] = None, encoder_attention_mask: Optional[torch.Tensor] = None, ): eps = 1e-6 # TODO(Patrick, William) - attention mask is not used output_states = () for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)): hidden_states = resnet(hidden_states, temb) hidden_states = attn( hidden_states, encoder_hidden_states=encoder_hidden_states, cross_attention_kwargs=cross_attention_kwargs, attention_mask=attention_mask, encoder_attention_mask=encoder_attention_mask, return_dict=False, )[0] if MODE == "write": if gn_auto_machine_weight >= self.gn_weight: var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) self.mean_bank.append([mean]) self.var_bank.append([var]) if MODE == "read": if len(self.mean_bank) > 0 and len(self.var_bank) > 0: var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i])) var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i])) std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc hidden_states_c = hidden_states_uc.clone() if do_classifier_free_guidance and style_fidelity > 0: hidden_states_c[uc_mask] = hidden_states[uc_mask] hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc output_states = output_states + (hidden_states,) if MODE == "read": self.mean_bank = [] self.var_bank = [] if self.downsamplers is not None: for downsampler in self.downsamplers: hidden_states = downsampler(hidden_states) output_states = output_states + (hidden_states,) return hidden_states, output_states def hacked_DownBlock2D_forward( self, hidden_states: torch.Tensor, temb: Optional[torch.Tensor] = None, **kwargs: Any, ) -> Tuple[torch.Tensor, ...]: eps = 1e-6 output_states = () for i, resnet in enumerate(self.resnets): hidden_states = 
resnet(hidden_states, temb) if MODE == "write": if gn_auto_machine_weight >= self.gn_weight: var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) self.mean_bank.append([mean]) self.var_bank.append([var]) if MODE == "read": if len(self.mean_bank) > 0 and len(self.var_bank) > 0: var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i])) var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i])) std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc hidden_states_c = hidden_states_uc.clone() if do_classifier_free_guidance and style_fidelity > 0: hidden_states_c[uc_mask] = hidden_states[uc_mask] hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc output_states = output_states + (hidden_states,) if MODE == "read": self.mean_bank = [] self.var_bank = [] if self.downsamplers is not None: for downsampler in self.downsamplers: hidden_states = downsampler(hidden_states) output_states = output_states + (hidden_states,) return hidden_states, output_states def hacked_CrossAttnUpBlock2D_forward( self, hidden_states: torch.Tensor, res_hidden_states_tuple: Tuple[torch.Tensor, ...], temb: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, cross_attention_kwargs: Optional[Dict[str, Any]] = None, upsample_size: Optional[int] = None, attention_mask: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: eps = 1e-6 # TODO(Patrick, William) - attention mask is not used for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)): # pop res hidden states res_hidden_states = res_hidden_states_tuple[-1] res_hidden_states_tuple = res_hidden_states_tuple[:-1] hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) hidden_states = resnet(hidden_states, temb) hidden_states = attn( hidden_states, encoder_hidden_states=encoder_hidden_states, cross_attention_kwargs=cross_attention_kwargs, attention_mask=attention_mask, encoder_attention_mask=encoder_attention_mask, return_dict=False, )[0] if MODE == "write": if gn_auto_machine_weight >= self.gn_weight: var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) self.mean_bank.append([mean]) self.var_bank.append([var]) if MODE == "read": if len(self.mean_bank) > 0 and len(self.var_bank) > 0: var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i])) var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i])) std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc hidden_states_c = hidden_states_uc.clone() if do_classifier_free_guidance and style_fidelity > 0: hidden_states_c[uc_mask] = hidden_states[uc_mask] hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc if MODE == "read": self.mean_bank = [] self.var_bank = [] if self.upsamplers is not None: for upsampler in self.upsamplers: hidden_states = upsampler(hidden_states, upsample_size) return hidden_states def hacked_UpBlock2D_forward( self, hidden_states: torch.Tensor, res_hidden_states_tuple: 
Tuple[torch.Tensor, ...], temb: Optional[torch.Tensor] = None, upsample_size: Optional[int] = None, **kwargs: Any, ) -> torch.Tensor: eps = 1e-6 for i, resnet in enumerate(self.resnets): # pop res hidden states res_hidden_states = res_hidden_states_tuple[-1] res_hidden_states_tuple = res_hidden_states_tuple[:-1] hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) hidden_states = resnet(hidden_states, temb) if MODE == "write": if gn_auto_machine_weight >= self.gn_weight: var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) self.mean_bank.append([mean]) self.var_bank.append([var]) if MODE == "read": if len(self.mean_bank) > 0 and len(self.var_bank) > 0: var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0) std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5 mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i])) var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i])) std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5 hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc hidden_states_c = hidden_states_uc.clone() if do_classifier_free_guidance and style_fidelity > 0: hidden_states_c[uc_mask] = hidden_states[uc_mask] hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc if MODE == "read": self.mean_bank = [] self.var_bank = [] if self.upsamplers is not None: for upsampler in self.upsamplers: hidden_states = upsampler(hidden_states, upsample_size) return hidden_states if reference_attn: attn_modules = [module for module in torch_dfs(self.unet) if isinstance(module, BasicTransformerBlock)] attn_modules = sorted(attn_modules, key=lambda x: -x.norm1.normalized_shape[0]) for i, module in enumerate(attn_modules): module._original_inner_forward = module.forward module.forward = hacked_basic_transformer_inner_forward.__get__(module, BasicTransformerBlock) module.bank = [] module.attn_weight = float(i) / float(len(attn_modules)) if reference_adain: gn_modules = [self.unet.mid_block] self.unet.mid_block.gn_weight = 0 down_blocks = self.unet.down_blocks for w, module in enumerate(down_blocks): module.gn_weight = 1.0 - float(w) / float(len(down_blocks)) gn_modules.append(module) up_blocks = self.unet.up_blocks for w, module in enumerate(up_blocks): module.gn_weight = float(w) / float(len(up_blocks)) gn_modules.append(module) for i, module in enumerate(gn_modules): if getattr(module, "original_forward", None) is None: module.original_forward = module.forward if i == 0: # mid_block module.forward = hacked_mid_forward.__get__(module, torch.nn.Module) elif isinstance(module, CrossAttnDownBlock2D): module.forward = hack_CrossAttnDownBlock2D_forward.__get__(module, CrossAttnDownBlock2D) elif isinstance(module, DownBlock2D): module.forward = hacked_DownBlock2D_forward.__get__(module, DownBlock2D) elif isinstance(module, CrossAttnUpBlock2D): module.forward = hacked_CrossAttnUpBlock2D_forward.__get__(module, CrossAttnUpBlock2D) elif isinstance(module, UpBlock2D): module.forward = hacked_UpBlock2D_forward.__get__(module, UpBlock2D) module.mean_bank = [] module.var_bank = [] module.gn_weight *= 2 # 10. 
Denoising loop num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # ref only part noise = randn_tensor( ref_image_latents.shape, generator=generator, device=device, dtype=ref_image_latents.dtype ) ref_xt = self.scheduler.add_noise( ref_image_latents, noise, t.reshape( 1, ), ) ref_xt = torch.cat([ref_xt] * 2) if do_classifier_free_guidance else ref_xt ref_xt = self.scheduler.scale_model_input(ref_xt, t) MODE = "write" self.unet( ref_xt, t, encoder_hidden_states=prompt_embeds, cross_attention_kwargs=cross_attention_kwargs, return_dict=False, ) # predict the noise residual MODE = "read" noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, cross_attention_kwargs=cross_attention_kwargs, return_dict=False, )[0] # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) if do_classifier_free_guidance and guidance_rescale > 0.0: # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0] # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if not output_type == "latent": image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) else: image = latents has_nsfw_concept = None if has_nsfw_concept is None: do_denormalize = [True] * image.shape[0] else: do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) # Offload last model to CPU if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: self.final_offload_hook.offload() if not return_dict: return (image, has_nsfw_concept) return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
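# ---------------------------------------------------------------------------
# Usage sketch (not part of the pipeline implementation above): a minimal,
# hedged example of loading this community pipeline via the `custom_pipeline`
# argument and running reference-only control. The base checkpoint name and
# the reference image URL are illustrative assumptions; any Stable Diffusion
# v1.x checkpoint and RGB image should behave similarly.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    from diffusers import DiffusionPipeline
    from diffusers.utils import load_image

    pipe = DiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint
        custom_pipeline="stable_diffusion_reference",
        torch_dtype=torch.float16,
    ).to("cuda")

    # Any RGB image can serve as the reference; this URL is an assumption.
    ref_image = load_image(
        "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
    )

    result = pipe(
        prompt="a photo of an astronaut riding a horse",
        ref_image=ref_image,
        num_inference_steps=30,
        reference_attn=True,
        reference_adain=True,
        style_fidelity=0.5,
    ).images[0]
    result.save("reference_only_result.png")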
diffusers/examples/community/stable_diffusion_reference.py/0
{ "file_path": "diffusers/examples/community/stable_diffusion_reference.py", "repo_id": "diffusers", "token_count": 34391 }
211
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import os import sys import tempfile import safetensors sys.path.append("..") from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402 logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger() stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) class TextToImageLCM(ExamplesTestsAccelerate): def test_text_to_image_lcm_lora_sdxl(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/consistency_distillation/train_lcm_distill_lora_sdxl.py --pretrained_teacher_model hf-internal-testing/tiny-stable-diffusion-xl-pipe --dataset_name hf-internal-testing/dummy_image_text_data --resolution 64 --lora_rank 4 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 2 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} """.split() run_command(self._launch_args + test_args) # save_pretrained smoke test self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))) # make sure the state_dict has the correct naming in the parameters. lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")) is_lora = all("lora" in k for k in lora_state_dict.keys()) self.assertTrue(is_lora) def test_text_to_image_lcm_lora_sdxl_checkpointing(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/consistency_distillation/train_lcm_distill_lora_sdxl.py --pretrained_teacher_model hf-internal-testing/tiny-stable-diffusion-xl-pipe --dataset_name hf-internal-testing/dummy_image_text_data --resolution 64 --lora_rank 4 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 7 --checkpointing_steps 2 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} """.split() run_command(self._launch_args + test_args) self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4", "checkpoint-6"}, ) test_args = f""" examples/consistency_distillation/train_lcm_distill_lora_sdxl.py --pretrained_teacher_model hf-internal-testing/tiny-stable-diffusion-xl-pipe --dataset_name hf-internal-testing/dummy_image_text_data --resolution 64 --lora_rank 4 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 9 --checkpointing_steps 2 --resume_from_checkpoint latest --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} """.split() run_command(self._launch_args + test_args) self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4", "checkpoint-6", "checkpoint-8"}, )
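# Note: these example tests are typically run with pytest from the repository
# root (an assumption about the local setup), e.g.:
#   pytest examples/consistency_distillation/test_lcm_lora.py
# They launch training through the helper in `ExamplesTestsAccelerate`, so
# `accelerate` and the example requirements must be installed beforehand.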
diffusers/examples/consistency_distillation/test_lcm_lora.py/0
{ "file_path": "diffusers/examples/consistency_distillation/test_lcm_lora.py", "repo_id": "diffusers", "token_count": 2105 }
212
# Inference Examples **The inference examples folder is deprecated and will be removed in a future version**. **Officially supported inference examples can be found in the [Pipelines folder](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines)**. - For `Image-to-Image text-guided generation with Stable Diffusion`, please have a look at the official [Pipeline examples](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines#examples) - For `In-painting using Stable Diffusion`, please have a look at the official [Pipeline examples](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines#examples) - For `Tweak prompts reusing seeds and latents`, please have a look at the official [Pipeline examples](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines#examples)
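For orientation, the snippet below sketches how text-guided image-to-image generation looks with the officially supported pipeline; the checkpoint name and input image URL are placeholders, so refer to the linked pipeline examples for the authoritative, up-to-date version:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Load the officially supported img2img pipeline (checkpoint name is illustrative).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Any RGB image works as the starting point; this URL is a placeholder example.
init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((768, 512))

image = pipe(
    prompt="A fantasy landscape, trending on artstation",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
).images[0]
image.save("fantasy_landscape.png")
```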
diffusers/examples/inference/README.md/0
{ "file_path": "diffusers/examples/inference/README.md", "repo_id": "diffusers", "token_count": 252 }
213
import d4rl # noqa import gym import tqdm from diffusers.experimental import ValueGuidedRLPipeline config = { "n_samples": 64, "horizon": 32, "num_inference_steps": 20, "n_guide_steps": 2, # can set to 0 for faster sampling, does not use value network "scale_grad_by_std": True, "scale": 0.1, "eta": 0.0, "t_grad_cutoff": 2, "device": "cpu", } if __name__ == "__main__": env_name = "hopper-medium-v2" env = gym.make(env_name) pipeline = ValueGuidedRLPipeline.from_pretrained( "bglick13/hopper-medium-v2-value-function-hor32", env=env, ) env.seed(0) obs = env.reset() total_reward = 0 total_score = 0 T = 1000 rollout = [obs.copy()] try: for t in tqdm.tqdm(range(T)): # call the policy denorm_actions = pipeline(obs, planning_horizon=32) # execute action in environment next_observation, reward, terminal, _ = env.step(denorm_actions) score = env.get_normalized_score(total_reward) # update return total_reward += reward total_score += score print( f"Step: {t}, Reward: {reward}, Total Reward: {total_reward}, Score: {score}, Total Score:" f" {total_score}" ) # save observations for rendering rollout.append(next_observation.copy()) obs = next_observation except KeyboardInterrupt: pass print(f"Total reward: {total_reward}")
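# Note: running this example requires `gym` and `d4rl` to be installed and
# downloads the "bglick13/hopper-medium-v2-value-function-hor32" checkpoint
# from the Hugging Face Hub. The `config` dict above is largely illustrative:
# only `planning_horizon` is forwarded to the pipeline call here, so adjust the
# call itself if you want to experiment with the other guidance settings.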
diffusers/examples/reinforcement_learning/run_diffuser_locomotion.py/0
{ "file_path": "diffusers/examples/reinforcement_learning/run_diffuser_locomotion.py", "repo_id": "diffusers", "token_count": 724 }
214
#!/usr/bin/env python # coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and import argparse import logging import math import os import random from pathlib import Path import numpy as np import PIL import torch import torch.nn.functional as F import torch.utils.checkpoint import transformers from accelerate import Accelerator from accelerate.logging import get_logger from accelerate.utils import ProjectConfiguration, set_seed from huggingface_hub import create_repo, upload_folder from multi_token_clip import MultiTokenCLIPTokenizer # TODO: remove and import from diffusers.utils when the new version of diffusers is released from packaging import version from PIL import Image from torch.utils.data import Dataset from torchvision import transforms from tqdm.auto import tqdm from transformers import CLIPTextModel import diffusers from diffusers import ( AutoencoderKL, DDPMScheduler, DiffusionPipeline, DPMSolverMultistepScheduler, StableDiffusionPipeline, UNet2DConditionModel, ) from diffusers.optimization import get_scheduler from diffusers.utils import check_min_version, is_wandb_available from diffusers.utils.import_utils import is_xformers_available if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): PIL_INTERPOLATION = { "linear": PIL.Image.Resampling.BILINEAR, "bilinear": PIL.Image.Resampling.BILINEAR, "bicubic": PIL.Image.Resampling.BICUBIC, "lanczos": PIL.Image.Resampling.LANCZOS, "nearest": PIL.Image.Resampling.NEAREST, } else: PIL_INTERPOLATION = { "linear": PIL.Image.LINEAR, "bilinear": PIL.Image.BILINEAR, "bicubic": PIL.Image.BICUBIC, "lanczos": PIL.Image.LANCZOS, "nearest": PIL.Image.NEAREST, } # ------------------------------------------------------------------------------ # Will error if the minimal version of diffusers is not installed. Remove at your own risks. 
check_min_version("0.14.0.dev0") logger = get_logger(__name__) def add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=1, initializer_token=None): """ Add tokens to the tokenizer and set the initial value of token embeddings """ tokenizer.add_placeholder_tokens(placeholder_token, num_vec_per_token=num_vec_per_token) text_encoder.resize_token_embeddings(len(tokenizer)) token_embeds = text_encoder.get_input_embeddings().weight.data placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) if initializer_token: token_ids = tokenizer.encode(initializer_token, add_special_tokens=False) for i, placeholder_token_id in enumerate(placeholder_token_ids): token_embeds[placeholder_token_id] = token_embeds[token_ids[i * len(token_ids) // num_vec_per_token]] else: for i, placeholder_token_id in enumerate(placeholder_token_ids): token_embeds[placeholder_token_id] = torch.randn_like(token_embeds[placeholder_token_id]) return placeholder_token def save_progress(tokenizer, text_encoder, accelerator, save_path): for placeholder_token in tokenizer.token_map: placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_ids] if len(placeholder_token_ids) == 1: learned_embeds = learned_embeds[None] learned_embeds_dict = {placeholder_token: learned_embeds.detach().cpu()} torch.save(learned_embeds_dict, save_path) def load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict): for placeholder_token in learned_embeds_dict: placeholder_embeds = learned_embeds_dict[placeholder_token] num_vec_per_token = placeholder_embeds.shape[0] placeholder_embeds = placeholder_embeds.to(dtype=text_encoder.dtype) add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=num_vec_per_token) placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) token_embeds = text_encoder.get_input_embeddings().weight.data for i, placeholder_token_id in enumerate(placeholder_token_ids): token_embeds[placeholder_token_id] = placeholder_embeds[i] def load_multitoken_tokenizer_from_automatic(tokenizer, text_encoder, automatic_dict, placeholder_token): """ Automatic1111's tokens have format {'string_to_token': {'*': 265}, 'string_to_param': {'*': tensor([[ 0.0833, 0.0030, 0.0057, ..., -0.0264, -0.0616, -0.0529], [ 0.0058, -0.0190, -0.0584, ..., -0.0025, -0.0945, -0.0490], [ 0.0916, 0.0025, 0.0365, ..., -0.0685, -0.0124, 0.0728], [ 0.0812, -0.0199, -0.0100, ..., -0.0581, -0.0780, 0.0254]], requires_grad=True)}, 'name': 'FloralMarble-400', 'step': 399, 'sd_checkpoint': '4bdfc29c', 'sd_checkpoint_name': 'SD2.1-768'} """ learned_embeds_dict = {} learned_embeds_dict[placeholder_token] = automatic_dict["string_to_param"]["*"] load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict) def get_mask(tokenizer, accelerator): # Get the mask of the weights that won't change mask = torch.ones(len(tokenizer)).to(accelerator.device, dtype=torch.bool) for placeholder_token in tokenizer.token_map: placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) for i in range(len(placeholder_token_ids)): mask = mask & (torch.arange(len(tokenizer)) != placeholder_token_ids[i]).to(accelerator.device) return mask def parse_args(): parser = argparse.ArgumentParser(description="Simple example of a training script.") parser.add_argument( "--progressive_tokens_max_steps", type=int, default=2000, help="The number 
of steps until all tokens will be used.", ) parser.add_argument( "--progressive_tokens", action="store_true", help="Progressively train the tokens. For example, first train for 1 token, then 2 tokens and so on.", ) parser.add_argument("--vector_shuffle", action="store_true", help="Shuffling tokens durint training") parser.add_argument( "--num_vec_per_token", type=int, default=1, help=( "The number of vectors used to represent the placeholder token. The higher the number, the better the" " result at the cost of editability. This can be fixed by prompt editing." ), ) parser.add_argument( "--save_steps", type=int, default=500, help="Save learned_embeds.bin every X updates steps.", ) parser.add_argument( "--only_save_embeds", action="store_true", default=False, help="Save only the embeddings for the new concept.", ) parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--tokenizer_name", type=str, default=None, help="Pretrained tokenizer name or path if not the same as model_name", ) parser.add_argument( "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." ) parser.add_argument( "--placeholder_token", type=str, default=None, required=True, help="A token to use as a placeholder for the concept.", ) parser.add_argument( "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." ) parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") parser.add_argument( "--output_dir", type=str, default="text-inversion-model", help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") parser.add_argument( "--resolution", type=int, default=512, help=( "The resolution for input images, all the images in the train/validation dataset will be resized to this" " resolution" ), ) parser.add_argument( "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." ) parser.add_argument( "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." ) parser.add_argument("--num_train_epochs", type=int, default=100) parser.add_argument( "--max_train_steps", type=int, default=5000, help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--gradient_checkpointing", action="store_true", help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", ) parser.add_argument( "--learning_rate", type=float, default=1e-4, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument( "--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument( "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument( "--dataloader_num_workers", type=int, default=0, help=( "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." ), ) parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") parser.add_argument( "--hub_model_id", type=str, default=None, help="The name of the repository to keep in sync with the local `output_dir`.", ) parser.add_argument( "--logging_dir", type=str, default="logs", help=( "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." ), ) parser.add_argument( "--mixed_precision", type=str, default="no", choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose" "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." "and an Nvidia Ampere GPU." ), ) parser.add_argument( "--allow_tf32", action="store_true", help=( "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" ), ) parser.add_argument( "--report_to", type=str, default="tensorboard", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' ), ) parser.add_argument( "--validation_prompt", type=str, default=None, help="A prompt that is used during validation to verify that the model is learning.", ) parser.add_argument( "--num_validation_images", type=int, default=4, help="Number of images that should be generated during validation with `validation_prompt`.", ) parser.add_argument( "--validation_epochs", type=int, default=50, help=( "Run validation every X epochs. 
Validation consists of running the prompt" " `args.validation_prompt` multiple times: `args.num_validation_images`" " and logging the images." ), ) parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") parser.add_argument( "--checkpointing_steps", type=int, default=500, help=( "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" " training using `--resume_from_checkpoint`." ), ) parser.add_argument( "--checkpoints_total_limit", type=int, default=None, help=( "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" " for more docs" ), ) parser.add_argument( "--resume_from_checkpoint", type=str, default=None, help=( "Whether training should be resumed from a previous checkpoint. Use a path saved by" ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' ), ) parser.add_argument( "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." ) args = parser.parse_args() env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) if env_local_rank != -1 and env_local_rank != args.local_rank: args.local_rank = env_local_rank if args.train_data_dir is None: raise ValueError("You must specify a train data directory.") return args imagenet_templates_small = [ "a photo of a {}", "a rendering of a {}", "a cropped photo of the {}", "the photo of a {}", "a photo of a clean {}", "a photo of a dirty {}", "a dark photo of the {}", "a photo of my {}", "a photo of the cool {}", "a close-up photo of a {}", "a bright photo of the {}", "a cropped photo of a {}", "a photo of the {}", "a good photo of the {}", "a photo of one {}", "a close-up photo of the {}", "a rendition of the {}", "a photo of the clean {}", "a rendition of a {}", "a photo of a nice {}", "a good photo of a {}", "a photo of the nice {}", "a photo of the small {}", "a photo of the weird {}", "a photo of the large {}", "a photo of a cool {}", "a photo of a small {}", ] imagenet_style_templates_small = [ "a painting in the style of {}", "a rendering in the style of {}", "a cropped painting in the style of {}", "the painting in the style of {}", "a clean painting in the style of {}", "a dirty painting in the style of {}", "a dark painting in the style of {}", "a picture in the style of {}", "a cool painting in the style of {}", "a close-up painting in the style of {}", "a bright painting in the style of {}", "a cropped painting in the style of {}", "a good painting in the style of {}", "a close-up painting in the style of {}", "a rendition in the style of {}", "a nice painting in the style of {}", "a small painting in the style of {}", "a weird painting in the style of {}", "a large painting in the style of {}", ] class TextualInversionDataset(Dataset): def __init__( self, data_root, tokenizer, learnable_property="object", # [object, style] size=512, repeats=100, interpolation="bicubic", flip_p=0.5, set="train", placeholder_token="*", center_crop=False, vector_shuffle=False, progressive_tokens=False, ): self.data_root = data_root self.tokenizer = tokenizer self.learnable_property = learnable_property self.size = size self.placeholder_token = placeholder_token self.center_crop = center_crop self.flip_p = flip_p self.vector_shuffle = vector_shuffle self.progressive_tokens = 
progressive_tokens self.prop_tokens_to_load = 0 self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] self.num_images = len(self.image_paths) self._length = self.num_images if set == "train": self._length = self.num_images * repeats self.interpolation = { "linear": PIL_INTERPOLATION["linear"], "bilinear": PIL_INTERPOLATION["bilinear"], "bicubic": PIL_INTERPOLATION["bicubic"], "lanczos": PIL_INTERPOLATION["lanczos"], }[interpolation] self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) def __len__(self): return self._length def __getitem__(self, i): example = {} image = Image.open(self.image_paths[i % self.num_images]) if not image.mode == "RGB": image = image.convert("RGB") placeholder_string = self.placeholder_token text = random.choice(self.templates).format(placeholder_string) example["input_ids"] = self.tokenizer.encode( text, padding="max_length", truncation=True, max_length=self.tokenizer.model_max_length, return_tensors="pt", vector_shuffle=self.vector_shuffle, prop_tokens_to_load=self.prop_tokens_to_load if self.progressive_tokens else 1.0, )[0] # default to score-sde preprocessing img = np.array(image).astype(np.uint8) if self.center_crop: crop = min(img.shape[0], img.shape[1]) ( h, w, ) = ( img.shape[0], img.shape[1], ) img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] image = Image.fromarray(img) image = image.resize((self.size, self.size), resample=self.interpolation) image = self.flip_transform(image) image = np.array(image).astype(np.uint8) image = (image / 127.5 - 1.0).astype(np.float32) example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) return example def main(): args = parse_args() if args.report_to == "wandb" and args.hub_token is not None: raise ValueError( "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token." " Please use `huggingface-cli login` to authenticate with the Hub." ) logging_dir = os.path.join(args.output_dir, args.logging_dir) accelerator_project_config = ProjectConfiguration( total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir ) accelerator = Accelerator( gradient_accumulation_steps=args.gradient_accumulation_steps, mixed_precision=args.mixed_precision, log_with=args.report_to, project_config=accelerator_project_config, ) # Disable AMP for MPS. if torch.backends.mps.is_available(): accelerator.native_amp = False if args.report_to == "wandb": if not is_wandb_available(): raise ImportError("Make sure to install wandb if you want to use it for logging during training.") import wandb # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger.info(accelerator.state, main_process_only=False) if accelerator.is_local_main_process: transformers.utils.logging.set_verbosity_warning() diffusers.utils.logging.set_verbosity_info() else: transformers.utils.logging.set_verbosity_error() diffusers.utils.logging.set_verbosity_error() # If passed along, set the training seed now. 
if args.seed is not None: set_seed(args.seed) # Handle the repository creation if accelerator.is_main_process: if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) if args.push_to_hub: repo_id = create_repo( repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token ).repo_id # Load tokenizer if args.tokenizer_name: tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.tokenizer_name) elif args.pretrained_model_name_or_path: tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") # Load scheduler and models noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") text_encoder = CLIPTextModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision ) vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) unet = UNet2DConditionModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision ) if is_xformers_available(): try: unet.enable_xformers_memory_efficient_attention() except Exception as e: logger.warning( "Could not enable memory efficient attention. Make sure xformers is installed" f" correctly and a GPU is available: {e}" ) add_tokens(tokenizer, text_encoder, args.placeholder_token, args.num_vec_per_token, args.initializer_token) # Freeze vae and unet vae.requires_grad_(False) unet.requires_grad_(False) # Freeze all parameters except for the token embeddings in text encoder text_encoder.text_model.encoder.requires_grad_(False) text_encoder.text_model.final_layer_norm.requires_grad_(False) text_encoder.text_model.embeddings.position_embedding.requires_grad_(False) if args.gradient_checkpointing: # Keep unet in train mode if we are using gradient checkpointing to save memory. # The dropout cannot be != 0 so it doesn't matter if we are in eval or train mode. unet.train() text_encoder.gradient_checkpointing_enable() unet.enable_gradient_checkpointing() if args.enable_xformers_memory_efficient_attention: if is_xformers_available(): import xformers xformers_version = version.parse(xformers.__version__) if xformers_version == version.parse("0.0.16"): logger.warning( "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." ) unet.enable_xformers_memory_efficient_attention() else: raise ValueError("xformers is not available. 
Make sure it is installed correctly") # Enable TF32 for faster training on Ampere GPUs, # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices if args.allow_tf32: torch.backends.cuda.matmul.allow_tf32 = True if args.scale_lr: args.learning_rate = ( args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes ) # Initialize the optimizer optimizer = torch.optim.AdamW( text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) # Dataset and DataLoaders creation: train_dataset = TextualInversionDataset( data_root=args.train_data_dir, tokenizer=tokenizer, size=args.resolution, placeholder_token=args.placeholder_token, repeats=args.repeats, learnable_property=args.learnable_property, center_crop=args.center_crop, set="train", ) train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers ) # Scheduler and math around the number of training steps. overrode_max_train_steps = False num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch overrode_max_train_steps = True lr_scheduler = get_scheduler( args.lr_scheduler, optimizer=optimizer, num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, num_training_steps=args.max_train_steps * accelerator.num_processes, ) # Prepare everything with our `accelerator`. text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( text_encoder, optimizer, train_dataloader, lr_scheduler ) # For mixed precision training we cast the unet and vae weights to half-precision # as these models are only used for inference, keeping weights in full precision is not required. weight_dtype = torch.float32 if accelerator.mixed_precision == "fp16": weight_dtype = torch.float16 elif accelerator.mixed_precision == "bf16": weight_dtype = torch.bfloat16 # Move vae and unet to device and cast to weight_dtype unet.to(accelerator.device, dtype=weight_dtype) vae.to(accelerator.device, dtype=weight_dtype) # We need to recalculate our total training steps as the size of the training dataloader may have changed. num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if overrode_max_train_steps: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch # Afterwards we recalculate our number of training epochs args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if accelerator.is_main_process: accelerator.init_trackers("textual_inversion", config=vars(args)) # Train! total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") logger.info(f" Total optimization steps = {args.max_train_steps}") global_step = 0 first_epoch = 0 # Potentially load in the weights and states from a previous save if args.resume_from_checkpoint: if args.resume_from_checkpoint != "latest": path = os.path.basename(args.resume_from_checkpoint) else: # Get the most recent checkpoint dirs = os.listdir(args.output_dir) dirs = [d for d in dirs if d.startswith("checkpoint")] dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) path = dirs[-1] if len(dirs) > 0 else None if path is None: accelerator.print( f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." ) args.resume_from_checkpoint = None else: accelerator.print(f"Resuming from checkpoint {path}") accelerator.load_state(os.path.join(args.output_dir, path)) global_step = int(path.split("-")[1]) resume_global_step = global_step * args.gradient_accumulation_steps first_epoch = global_step // num_update_steps_per_epoch resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) # Only show the progress bar once on each machine. progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) progress_bar.set_description("Steps") # keep original embeddings as reference orig_embeds_params = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight.data.clone() for epoch in range(first_epoch, args.num_train_epochs): text_encoder.train() for step, batch in enumerate(train_dataloader): # Skip steps until we reach the resumed step if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: if step % args.gradient_accumulation_steps == 0: progress_bar.update(1) continue if args.progressive_tokens: train_dataset.prop_tokens_to_load = float(global_step) / args.progressive_tokens_max_steps with accelerator.accumulate(text_encoder): # Convert images to latent space latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample().detach() latents = latents * vae.config.scaling_factor # Sample noise that we'll add to the latents noise = torch.randn_like(latents) bsz = latents.shape[0] # Sample a random timestep for each image timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) timesteps = timesteps.long() # Add noise to the latents according to the noise magnitude at each timestep # (this is the forward diffusion process) noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) # Get the text embedding for conditioning encoder_hidden_states = text_encoder(batch["input_ids"])[0].to(dtype=weight_dtype) # Predict the noise residual model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample # Get the target for loss depending on the prediction type if noise_scheduler.config.prediction_type == "epsilon": target = noise elif noise_scheduler.config.prediction_type == "v_prediction": target = noise_scheduler.get_velocity(latents, noise, timesteps) else: raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() # Let's make sure we don't update any embedding weights besides the newly added token index_no_updates = get_mask(tokenizer, 
accelerator) with torch.no_grad(): accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[ index_no_updates ] = orig_embeds_params[index_no_updates] # Checks if the accelerator has performed an optimization step behind the scenes if accelerator.sync_gradients: progress_bar.update(1) global_step += 1 if global_step % args.save_steps == 0: save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin") save_progress(tokenizer, text_encoder, accelerator, save_path) if global_step % args.checkpointing_steps == 0: if accelerator.is_main_process: save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") accelerator.save_state(save_path) logger.info(f"Saved state to {save_path}") logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} progress_bar.set_postfix(**logs) accelerator.log(logs, step=global_step) if global_step >= args.max_train_steps: break if accelerator.is_main_process and args.validation_prompt is not None and epoch % args.validation_epochs == 0: logger.info( f"Running validation... \n Generating {args.num_validation_images} images with prompt:" f" {args.validation_prompt}." ) # create pipeline (note: unet and vae are loaded again in float32) pipeline = DiffusionPipeline.from_pretrained( args.pretrained_model_name_or_path, text_encoder=accelerator.unwrap_model(text_encoder), tokenizer=tokenizer, unet=unet, vae=vae, revision=args.revision, torch_dtype=weight_dtype, ) pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) pipeline = pipeline.to(accelerator.device) pipeline.set_progress_bar_config(disable=True) # run inference generator = ( None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed) ) images = [] for _ in range(args.num_validation_images): with torch.autocast("cuda"): image = pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0] images.append(image) for tracker in accelerator.trackers: if tracker.name == "tensorboard": np_images = np.stack([np.asarray(img) for img in images]) tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") if tracker.name == "wandb": tracker.log( { "validation": [ wandb.Image(image, caption=f"{i}: {args.validation_prompt}") for i, image in enumerate(images) ] } ) del pipeline torch.cuda.empty_cache() # Create the pipeline using using the trained modules and save it. accelerator.wait_for_everyone() if accelerator.is_main_process: if args.push_to_hub and args.only_save_embeds: logger.warning("Enabling full model saving because --push_to_hub=True was specified.") save_full_model = True else: save_full_model = not args.only_save_embeds if save_full_model: pipeline = StableDiffusionPipeline.from_pretrained( args.pretrained_model_name_or_path, text_encoder=accelerator.unwrap_model(text_encoder), vae=vae, unet=unet, tokenizer=tokenizer, ) pipeline.save_pretrained(args.output_dir) # Save the newly trained embeddings save_path = os.path.join(args.output_dir, "learned_embeds.bin") save_progress(tokenizer, text_encoder, accelerator, save_path) if args.push_to_hub: upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training", ignore_patterns=["step_*", "epoch_*"], ) accelerator.end_training() if __name__ == "__main__": main()
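The `save_progress` and `load_multitoken_tokenizer` helpers above define how the multi-vector embeddings are written to `learned_embeds.bin` and restored. As a rough illustration of that round trip, the hedged sketch below loads the saved embeddings back into a Stable Diffusion pipeline for inference; the module names, the `<cat-toy>` placeholder token, and the output path are assumptions, not part of the script.

```python
# Hedged inference sketch (not part of the script above). Assumptions: the helpers above are
# importable as a module named `textual_inversion`, the project's MultiTokenCLIPTokenizer
# lives in `multi_token_clip.py` next to it, training used the hypothetical placeholder
# `<cat-toy>`, and the default `--output_dir="text-inversion-model"` was kept.
import torch
from transformers import CLIPTextModel

from diffusers import StableDiffusionPipeline
from multi_token_clip import MultiTokenCLIPTokenizer
from textual_inversion import load_multitoken_tokenizer

model_id = "runwayml/stable-diffusion-v1-5"
tokenizer = MultiTokenCLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# Re-register each placeholder token and copy its learned vectors into the embedding table.
learned_embeds_dict = torch.load("text-inversion-model/learned_embeds.bin")
load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict)

pipe = StableDiffusionPipeline.from_pretrained(
    model_id, tokenizer=tokenizer, text_encoder=text_encoder
).to("cuda")
image = pipe("a photo of <cat-toy> on a beach", num_inference_steps=30).images[0]
image.save("multi_token_sample.png")
```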
diffusers/examples/research_projects/multi_token_textual_inversion/textual_inversion.py/0
{ "file_path": "diffusers/examples/research_projects/multi_token_textual_inversion/textual_inversion.py", "repo_id": "diffusers", "token_count": 16403 }
215
## Retrieval Augmented Diffusion Models (RDM)

**This research project is not actively maintained by the diffusers team. For any questions or comments, please contact Isamu Isozaki (isamu-isozaki) on GitHub.**

The aim of this project is to provide retrieval augmented diffusion models to diffusers!
diffusers/examples/research_projects/rdm/README.md/0
{ "file_path": "diffusers/examples/research_projects/rdm/README.md", "repo_id": "diffusers", "token_count": 75 }
216
# Show best practices for SDXL JAX
import time

import jax
import jax.numpy as jnp
import numpy as np
from flax.jax_utils import replicate

# Let's cache the model compilation, so that it doesn't take as long the next time around.
from jax.experimental.compilation_cache import compilation_cache as cc

from diffusers import FlaxStableDiffusionXLPipeline


cc.initialize_cache("/tmp/sdxl_cache")
NUM_DEVICES = jax.device_count()

# 1. Let's start by downloading the model and loading it into our pipeline class
# Adhering to JAX's functional approach, the model's parameters are returned separately and
# will have to be passed to the pipeline during inference
pipeline, params = FlaxStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", revision="refs/pr/95", split_head_dim=True
)

# 2. We cast all parameters to bfloat16 EXCEPT the scheduler which we leave in
# float32 to keep maximal precision
scheduler_state = params.pop("scheduler")
params = jax.tree_util.tree_map(lambda x: x.astype(jnp.bfloat16), params)
params["scheduler"] = scheduler_state

# 3. Next, we define the different inputs to the pipeline
default_prompt = "a colorful photo of a castle in the middle of a forest with trees and bushes, by Ismail Inceoglu, shadows, high contrast, dynamic shading, hdr, detailed vegetation, digital painting, digital drawing, detailed painting, a detailed digital painting, gothic art, featured on deviantart"
default_neg_prompt = "fog, grainy, purple"
default_seed = 33
default_guidance_scale = 5.0
default_num_steps = 25


# 4. In order to be able to compile the pipeline
# all inputs have to be tensors or strings
# Let's tokenize the prompt and negative prompt
def tokenize_prompt(prompt, neg_prompt):
    prompt_ids = pipeline.prepare_inputs(prompt)
    neg_prompt_ids = pipeline.prepare_inputs(neg_prompt)
    return prompt_ids, neg_prompt_ids


# 5. To make full use of JAX's parallelization capabilities
# the parameters and input tensors are duplicated across devices
# To make sure every device generates a different image, we create
# different seeds for each image. The model parameters won't change
# during inference so we do not wrap them into a function
p_params = replicate(params)


def replicate_all(prompt_ids, neg_prompt_ids, seed):
    p_prompt_ids = replicate(prompt_ids)
    p_neg_prompt_ids = replicate(neg_prompt_ids)
    rng = jax.random.PRNGKey(seed)
    rng = jax.random.split(rng, NUM_DEVICES)
    return p_prompt_ids, p_neg_prompt_ids, rng


# 6. Let's now put it all together in a generate function
def generate(
    prompt,
    negative_prompt,
    seed=default_seed,
    guidance_scale=default_guidance_scale,
    num_inference_steps=default_num_steps,
):
    prompt_ids, neg_prompt_ids = tokenize_prompt(prompt, negative_prompt)
    prompt_ids, neg_prompt_ids, rng = replicate_all(prompt_ids, neg_prompt_ids, seed)
    images = pipeline(
        prompt_ids,
        p_params,
        rng,
        num_inference_steps=num_inference_steps,
        neg_prompt_ids=neg_prompt_ids,
        guidance_scale=guidance_scale,
        jit=True,
    ).images

    # convert the images to PIL
    images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:])
    return pipeline.numpy_to_pil(np.array(images))


# 7. Remember that the first call will compile the function and hence be very slow. Let's run generate once
# so that the pipeline call is compiled
start = time.time()
print("Compiling ...")
generate(default_prompt, default_neg_prompt)
print(f"Compiled in {time.time() - start}")

# 8. Now the model forward pass will run very quickly; let's try it again
start = time.time()
prompt = "photo of a rhino dressed suit and tie sitting at a table in a bar with a bar stools, award winning photography, Elke vogelsang"
neg_prompt = "cartoon, illustration, animation. face. male, female"
images = generate(prompt, neg_prompt)
print(f"Inference in {time.time() - start}")

for i, image in enumerate(images):
    image.save(f"castle_{i}.png")
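Since only the first call to `generate` pays the XLA compilation cost (steps 7–8 above), later calls with new prompts and seeds run at full speed. The following is a small, hypothetical continuation of the script; the prompts, seeds, and file names are made up. Because `prepare_inputs` pads every prompt to the same length, the input shapes stay constant and these calls should hit the compilation cache rather than recompile.

```python
# Hypothetical follow-up (not in the original script): reuse the already-compiled
# `generate` defined above with new prompts and seeds.
import time

prompts = [
    "a watercolor painting of a lighthouse at dawn",  # made-up example prompt
    "an isometric voxel render of a tiny island village",  # made-up example prompt
]

for seed, prompt in enumerate(prompts, start=100):
    start = time.time()
    images = generate(prompt, default_neg_prompt, seed=seed)
    print(f"seed={seed}: generated {len(images)} images in {time.time() - start:.2f}s")
    for i, image in enumerate(images):
        image.save(f"sdxl_seed{seed}_img{i}.png")
```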
diffusers/examples/research_projects/sdxl_flax/sdxl_single.py/0
{ "file_path": "diffusers/examples/research_projects/sdxl_flax/sdxl_single.py", "repo_id": "diffusers", "token_count": 1341 }
217
#!/usr/bin/env python # coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import logging import math import os import random from pathlib import Path import jax import jax.numpy as jnp import numpy as np import optax import torch import torch.utils.checkpoint import transformers from datasets import load_dataset from flax import jax_utils from flax.training import train_state from flax.training.common_utils import shard from huggingface_hub import create_repo, upload_folder from torchvision import transforms from tqdm.auto import tqdm from transformers import CLIPImageProcessor, CLIPTokenizer, FlaxCLIPTextModel, set_seed from diffusers import ( FlaxAutoencoderKL, FlaxDDPMScheduler, FlaxPNDMScheduler, FlaxStableDiffusionPipeline, FlaxUNet2DConditionModel, ) from diffusers.pipelines.stable_diffusion import FlaxStableDiffusionSafetyChecker from diffusers.utils import check_min_version # Will error if the minimal version of diffusers is not installed. Remove at your own risks. check_min_version("0.28.0.dev0") logger = logging.getLogger(__name__) def parse_args(): parser = argparse.ArgumentParser(description="Simple example of a training script.") parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--variant", type=str, default=None, help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16", ) parser.add_argument( "--dataset_name", type=str, default=None, help=( "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," " or to a folder containing files that 🤗 Datasets can understand." ), ) parser.add_argument( "--dataset_config_name", type=str, default=None, help="The config of the Dataset, leave as None if there's only one config.", ) parser.add_argument( "--train_data_dir", type=str, default=None, help=( "A folder containing the training data. Folder contents must follow the structure described in" " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." ), ) parser.add_argument( "--image_column", type=str, default="image", help="The column of the dataset containing an image." 
) parser.add_argument( "--caption_column", type=str, default="text", help="The column of the dataset containing a caption or a list of captions.", ) parser.add_argument( "--max_train_samples", type=int, default=None, help=( "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." ), ) parser.add_argument( "--output_dir", type=str, default="sd-model-finetuned", help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument( "--cache_dir", type=str, default=None, help="The directory where the downloaded models and datasets will be stored.", ) parser.add_argument("--seed", type=int, default=0, help="A seed for reproducible training.") parser.add_argument( "--resolution", type=int, default=512, help=( "The resolution for input images, all the images in the train/validation dataset will be resized to this" " resolution" ), ) parser.add_argument( "--center_crop", default=False, action="store_true", help=( "Whether to center crop the input images to the resolution. If not set, the images will be randomly" " cropped. The images will be resized to the resolution first before cropping." ), ) parser.add_argument( "--random_flip", action="store_true", help="whether to randomly flip images horizontally", ) parser.add_argument( "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." ) parser.add_argument("--num_train_epochs", type=int, default=100) parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument( "--learning_rate", type=float, default=1e-4, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument( "--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") parser.add_argument( "--hub_model_id", type=str, default=None, help="The name of the repository to keep in sync with the local `output_dir`.", ) parser.add_argument( "--logging_dir", type=str, default="logs", help=( "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." ), ) parser.add_argument( "--report_to", type=str, default="tensorboard", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' ' (default), `"wandb"` and `"comet_ml"`. 
Use `"all"` to report to all integrations.' ), ) parser.add_argument( "--mixed_precision", type=str, default="no", choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose" "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." "and an Nvidia Ampere GPU." ), ) parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") parser.add_argument( "--from_pt", action="store_true", default=False, help="Flag to indicate whether to convert models from PyTorch.", ) args = parser.parse_args() env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) if env_local_rank != -1 and env_local_rank != args.local_rank: args.local_rank = env_local_rank # Sanity checks if args.dataset_name is None and args.train_data_dir is None: raise ValueError("Need either a dataset name or a training folder.") return args dataset_name_mapping = { "lambdalabs/naruto-blip-captions": ("image", "text"), } def get_params_to_save(params): return jax.device_get(jax.tree_util.tree_map(lambda x: x[0], params)) def main(): args = parse_args() if args.report_to == "wandb" and args.hub_token is not None: raise ValueError( "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token." " Please use `huggingface-cli login` to authenticate with the Hub." ) logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) # Setup logging, we only want one process per machine to log things on the screen. logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) if jax.process_index() == 0: transformers.utils.logging.set_verbosity_info() else: transformers.utils.logging.set_verbosity_error() if args.seed is not None: set_seed(args.seed) # Handle the repository creation if jax.process_index() == 0: if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) if args.push_to_hub: repo_id = create_repo( repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token ).repo_id # Get the datasets: you can either provide your own training and evaluation files (see below) # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). # In distributed training, the load_dataset function guarantees that only one local process can concurrently # download the dataset. if args.dataset_name is not None: # Downloading and loading a dataset from the hub. dataset = load_dataset( args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, data_dir=args.train_data_dir ) else: data_files = {} if args.train_data_dir is not None: data_files["train"] = os.path.join(args.train_data_dir, "**") dataset = load_dataset( "imagefolder", data_files=data_files, cache_dir=args.cache_dir, ) # See more about loading custom images at # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder # Preprocessing the datasets. # We need to tokenize inputs and targets. column_names = dataset["train"].column_names # 6. Get the column names for input/target. 
dataset_columns = dataset_name_mapping.get(args.dataset_name, None) if args.image_column is None: image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] else: image_column = args.image_column if image_column not in column_names: raise ValueError( f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}" ) if args.caption_column is None: caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1] else: caption_column = args.caption_column if caption_column not in column_names: raise ValueError( f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}" ) # Preprocessing the datasets. # We need to tokenize input captions and transform the images. def tokenize_captions(examples, is_train=True): captions = [] for caption in examples[caption_column]: if isinstance(caption, str): captions.append(caption) elif isinstance(caption, (list, np.ndarray)): # take a random caption if there are multiple captions.append(random.choice(caption) if is_train else caption[0]) else: raise ValueError( f"Caption column `{caption_column}` should contain either strings or lists of strings." ) inputs = tokenizer(captions, max_length=tokenizer.model_max_length, padding="do_not_pad", truncation=True) input_ids = inputs.input_ids return input_ids train_transforms = transforms.Compose( [ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]), ] ) def preprocess_train(examples): images = [image.convert("RGB") for image in examples[image_column]] examples["pixel_values"] = [train_transforms(image) for image in images] examples["input_ids"] = tokenize_captions(examples) return examples if args.max_train_samples is not None: dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) # Set the training transforms train_dataset = dataset["train"].with_transform(preprocess_train) def collate_fn(examples): pixel_values = torch.stack([example["pixel_values"] for example in examples]) pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() input_ids = [example["input_ids"] for example in examples] padded_tokens = tokenizer.pad( {"input_ids": input_ids}, padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt" ) batch = { "pixel_values": pixel_values, "input_ids": padded_tokens.input_ids, } batch = {k: v.numpy() for k, v in batch.items()} return batch total_train_batch_size = args.train_batch_size * jax.local_device_count() train_dataloader = torch.utils.data.DataLoader( train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=total_train_batch_size, drop_last=True ) weight_dtype = jnp.float32 if args.mixed_precision == "fp16": weight_dtype = jnp.float16 elif args.mixed_precision == "bf16": weight_dtype = jnp.bfloat16 # Load models and create wrapper for stable diffusion tokenizer = CLIPTokenizer.from_pretrained( args.pretrained_model_name_or_path, from_pt=args.from_pt, revision=args.revision, subfolder="tokenizer", ) text_encoder = FlaxCLIPTextModel.from_pretrained( args.pretrained_model_name_or_path, from_pt=args.from_pt, revision=args.revision, subfolder="text_encoder", dtype=weight_dtype, ) vae, vae_params = 
FlaxAutoencoderKL.from_pretrained( args.pretrained_model_name_or_path, from_pt=args.from_pt, revision=args.revision, subfolder="vae", dtype=weight_dtype, ) unet, unet_params = FlaxUNet2DConditionModel.from_pretrained( args.pretrained_model_name_or_path, from_pt=args.from_pt, revision=args.revision, subfolder="unet", dtype=weight_dtype, ) # Optimization if args.scale_lr: args.learning_rate = args.learning_rate * total_train_batch_size constant_scheduler = optax.constant_schedule(args.learning_rate) adamw = optax.adamw( learning_rate=constant_scheduler, b1=args.adam_beta1, b2=args.adam_beta2, eps=args.adam_epsilon, weight_decay=args.adam_weight_decay, ) optimizer = optax.chain( optax.clip_by_global_norm(args.max_grad_norm), adamw, ) state = train_state.TrainState.create(apply_fn=unet.__call__, params=unet_params, tx=optimizer) noise_scheduler = FlaxDDPMScheduler( beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000 ) noise_scheduler_state = noise_scheduler.create_state() # Initialize our training rng = jax.random.PRNGKey(args.seed) train_rngs = jax.random.split(rng, jax.local_device_count()) def train_step(state, text_encoder_params, vae_params, batch, train_rng): dropout_rng, sample_rng, new_train_rng = jax.random.split(train_rng, 3) def compute_loss(params): # Convert images to latent space vae_outputs = vae.apply( {"params": vae_params}, batch["pixel_values"], deterministic=True, method=vae.encode ) latents = vae_outputs.latent_dist.sample(sample_rng) # (NHWC) -> (NCHW) latents = jnp.transpose(latents, (0, 3, 1, 2)) latents = latents * vae.config.scaling_factor # Sample noise that we'll add to the latents noise_rng, timestep_rng = jax.random.split(sample_rng) noise = jax.random.normal(noise_rng, latents.shape) # Sample a random timestep for each image bsz = latents.shape[0] timesteps = jax.random.randint( timestep_rng, (bsz,), 0, noise_scheduler.config.num_train_timesteps, ) # Add noise to the latents according to the noise magnitude at each timestep # (this is the forward diffusion process) noisy_latents = noise_scheduler.add_noise(noise_scheduler_state, latents, noise, timesteps) # Get the text embedding for conditioning encoder_hidden_states = text_encoder( batch["input_ids"], params=text_encoder_params, train=False, )[0] # Predict the noise residual and compute loss model_pred = unet.apply( {"params": params}, noisy_latents, timesteps, encoder_hidden_states, train=True ).sample # Get the target for loss depending on the prediction type if noise_scheduler.config.prediction_type == "epsilon": target = noise elif noise_scheduler.config.prediction_type == "v_prediction": target = noise_scheduler.get_velocity(noise_scheduler_state, latents, noise, timesteps) else: raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") loss = (target - model_pred) ** 2 loss = loss.mean() return loss grad_fn = jax.value_and_grad(compute_loss) loss, grad = grad_fn(state.params) grad = jax.lax.pmean(grad, "batch") new_state = state.apply_gradients(grads=grad) metrics = {"loss": loss} metrics = jax.lax.pmean(metrics, axis_name="batch") return new_state, metrics, new_train_rng # Create parallel version of the train step p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,)) # Replicate the train state on each device state = jax_utils.replicate(state) text_encoder_params = jax_utils.replicate(text_encoder.params) vae_params = jax_utils.replicate(vae_params) # Train! 
num_update_steps_per_epoch = math.ceil(len(train_dataloader)) # Scheduler and math around the number of training steps. if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") logger.info(f" Total train batch size (w. parallel & distributed) = {total_train_batch_size}") logger.info(f" Total optimization steps = {args.max_train_steps}") global_step = 0 epochs = tqdm(range(args.num_train_epochs), desc="Epoch ... ", position=0) for epoch in epochs: # ======================== Training ================================ train_metrics = [] steps_per_epoch = len(train_dataset) // total_train_batch_size train_step_progress_bar = tqdm(total=steps_per_epoch, desc="Training...", position=1, leave=False) # train for batch in train_dataloader: batch = shard(batch) state, train_metric, train_rngs = p_train_step(state, text_encoder_params, vae_params, batch, train_rngs) train_metrics.append(train_metric) train_step_progress_bar.update(1) global_step += 1 if global_step >= args.max_train_steps: break train_metric = jax_utils.unreplicate(train_metric) train_step_progress_bar.close() epochs.write(f"Epoch... ({epoch + 1}/{args.num_train_epochs} | Loss: {train_metric['loss']})") # Create the pipeline using using the trained modules and save it. if jax.process_index() == 0: scheduler = FlaxPNDMScheduler( beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True ) safety_checker = FlaxStableDiffusionSafetyChecker.from_pretrained( "CompVis/stable-diffusion-safety-checker", from_pt=True ) pipeline = FlaxStableDiffusionPipeline( text_encoder=text_encoder, vae=vae, unet=unet, tokenizer=tokenizer, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"), ) pipeline.save_pretrained( args.output_dir, params={ "text_encoder": get_params_to_save(text_encoder_params), "vae": get_params_to_save(vae_params), "unet": get_params_to_save(state.params), "safety_checker": safety_checker.params, }, ) if args.push_to_hub: upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training", ignore_patterns=["step_*", "epoch_*"], ) if __name__ == "__main__": main()
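Once training finishes, the script saves a complete `FlaxStableDiffusionPipeline` to `args.output_dir`. The sketch below outlines one way that folder could be loaded back for sampling; the directory name and prompt are placeholders, and the replicate/shard pattern mirrors the one used during training above.

```python
# Hedged sketch (assumed usage, not part of the training script): load the pipeline that
# `pipeline.save_pretrained(args.output_dir)` wrote and sample from it on all local devices.
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard

from diffusers import FlaxStableDiffusionPipeline

# "sd-model-finetuned" is the script's default --output_dir; adjust if you changed it.
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("sd-model-finetuned")

prompt = "a photo of a naruto character surfing a wave"  # placeholder prompt
num_devices = jax.device_count()
prompt_ids = pipeline.prepare_inputs([prompt] * num_devices)

# Replicate parameters and shard inputs so every device generates one image.
params = replicate(params)
prompt_ids = shard(prompt_ids)
rng = jax.random.split(jax.random.PRNGKey(0), num_devices)

images = pipeline(prompt_ids, params, rng, num_inference_steps=50, jit=True).images
images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:])
pipeline.numpy_to_pil(np.array(images))[0].save("finetuned_sample.png")
```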
diffusers/examples/text_to_image/train_text_to_image_flax.py/0
{ "file_path": "diffusers/examples/text_to_image/train_text_to_image_flax.py", "repo_id": "diffusers", "token_count": 10030 }
218
import math import os import urllib import warnings from argparse import ArgumentParser import torch import torch.nn as nn import torch.nn.functional as F from huggingface_hub.utils import insecure_hashlib from safetensors.torch import load_file as stl from tqdm import tqdm from diffusers import AutoencoderKL, ConsistencyDecoderVAE, DiffusionPipeline, StableDiffusionPipeline, UNet2DModel from diffusers.models.autoencoders.vae import Encoder from diffusers.models.embeddings import TimestepEmbedding from diffusers.models.unets.unet_2d_blocks import ResnetDownsampleBlock2D, ResnetUpsampleBlock2D, UNetMidBlock2D args = ArgumentParser() args.add_argument("--save_pretrained", required=False, default=None, type=str) args.add_argument("--test_image", required=True, type=str) args = args.parse_args() def _extract_into_tensor(arr, timesteps, broadcast_shape): # from: https://github.com/openai/guided-diffusion/blob/22e0df8183507e13a7813f8d38d51b072ca1e67c/guided_diffusion/gaussian_diffusion.py#L895 """ res = arr[timesteps].float() dims_to_append = len(broadcast_shape) - len(res.shape) return res[(...,) + (None,) * dims_to_append] def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): # from: https://github.com/openai/guided-diffusion/blob/22e0df8183507e13a7813f8d38d51b072ca1e67c/guided_diffusion/gaussian_diffusion.py#L45 betas = [] for i in range(num_diffusion_timesteps): t1 = i / num_diffusion_timesteps t2 = (i + 1) / num_diffusion_timesteps betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) return torch.tensor(betas) def _download(url: str, root: str): os.makedirs(root, exist_ok=True) filename = os.path.basename(url) expected_sha256 = url.split("/")[-2] download_target = os.path.join(root, filename) if os.path.exists(download_target) and not os.path.isfile(download_target): raise RuntimeError(f"{download_target} exists and is not a regular file") if os.path.isfile(download_target): if insecure_hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256: return download_target else: warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file") with urllib.request.urlopen(url) as source, open(download_target, "wb") as output: with tqdm( total=int(source.info().get("Content-Length")), ncols=80, unit="iB", unit_scale=True, unit_divisor=1024, ) as loop: while True: buffer = source.read(8192) if not buffer: break output.write(buffer) loop.update(len(buffer)) if insecure_hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256: raise RuntimeError("Model has been downloaded but the SHA256 checksum does not not match") return download_target class ConsistencyDecoder: def __init__(self, device="cuda:0", download_root=os.path.expanduser("~/.cache/clip")): self.n_distilled_steps = 64 download_target = _download( "https://openaipublic.azureedge.net/diff-vae/c9cebd3132dd9c42936d803e33424145a748843c8f716c0814838bdc8a2fe7cb/decoder.pt", download_root, ) self.ckpt = torch.jit.load(download_target).to(device) self.device = device sigma_data = 0.5 betas = betas_for_alpha_bar(1024, lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2).to(device) alphas = 1.0 - betas alphas_cumprod = torch.cumprod(alphas, dim=0) self.sqrt_alphas_cumprod = torch.sqrt(alphas_cumprod) self.sqrt_one_minus_alphas_cumprod = torch.sqrt(1.0 - alphas_cumprod) sqrt_recip_alphas_cumprod = torch.sqrt(1.0 / alphas_cumprod) sigmas = torch.sqrt(1.0 / alphas_cumprod - 1) self.c_skip = sqrt_recip_alphas_cumprod * 
sigma_data**2 / (sigmas**2 + sigma_data**2) self.c_out = sigmas * sigma_data / (sigmas**2 + sigma_data**2) ** 0.5 self.c_in = sqrt_recip_alphas_cumprod / (sigmas**2 + sigma_data**2) ** 0.5 @staticmethod def round_timesteps(timesteps, total_timesteps, n_distilled_steps, truncate_start=True): with torch.no_grad(): space = torch.div(total_timesteps, n_distilled_steps, rounding_mode="floor") rounded_timesteps = (torch.div(timesteps, space, rounding_mode="floor") + 1) * space if truncate_start: rounded_timesteps[rounded_timesteps == total_timesteps] -= space else: rounded_timesteps[rounded_timesteps == total_timesteps] -= space rounded_timesteps[rounded_timesteps == 0] += space return rounded_timesteps @staticmethod def ldm_transform_latent(z, extra_scale_factor=1): channel_means = [0.38862467, 0.02253063, 0.07381133, -0.0171294] channel_stds = [0.9654121, 1.0440036, 0.76147926, 0.77022034] if len(z.shape) != 4: raise ValueError() z = z * 0.18215 channels = [z[:, i] for i in range(z.shape[1])] channels = [extra_scale_factor * (c - channel_means[i]) / channel_stds[i] for i, c in enumerate(channels)] return torch.stack(channels, dim=1) @torch.no_grad() def __call__( self, features: torch.Tensor, schedule=[1.0, 0.5], generator=None, ): features = self.ldm_transform_latent(features) ts = self.round_timesteps( torch.arange(0, 1024), 1024, self.n_distilled_steps, truncate_start=False, ) shape = ( features.size(0), 3, 8 * features.size(2), 8 * features.size(3), ) x_start = torch.zeros(shape, device=features.device, dtype=features.dtype) schedule_timesteps = [int((1024 - 1) * s) for s in schedule] for i in schedule_timesteps: t = ts[i].item() t_ = torch.tensor([t] * features.shape[0]).to(self.device) # noise = torch.randn_like(x_start) noise = torch.randn(x_start.shape, dtype=x_start.dtype, generator=generator).to(device=x_start.device) x_start = ( _extract_into_tensor(self.sqrt_alphas_cumprod, t_, x_start.shape) * x_start + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t_, x_start.shape) * noise ) c_in = _extract_into_tensor(self.c_in, t_, x_start.shape) import torch.nn.functional as F from diffusers import UNet2DModel if isinstance(self.ckpt, UNet2DModel): input = torch.concat([c_in * x_start, F.upsample_nearest(features, scale_factor=8)], dim=1) model_output = self.ckpt(input, t_).sample else: model_output = self.ckpt(c_in * x_start, t_, features=features) B, C = x_start.shape[:2] model_output, _ = torch.split(model_output, C, dim=1) pred_xstart = ( _extract_into_tensor(self.c_out, t_, x_start.shape) * model_output + _extract_into_tensor(self.c_skip, t_, x_start.shape) * x_start ).clamp(-1, 1) x_start = pred_xstart return x_start def save_image(image, name): import numpy as np from PIL import Image image = image[0].cpu().numpy() image = (image + 1.0) * 127.5 image = image.clip(0, 255).astype(np.uint8) image = Image.fromarray(image.transpose(1, 2, 0)) image.save(name) def load_image(uri, size=None, center_crop=False): import numpy as np from PIL import Image image = Image.open(uri) if center_crop: image = image.crop( ( (image.width - min(image.width, image.height)) // 2, (image.height - min(image.width, image.height)) // 2, (image.width + min(image.width, image.height)) // 2, (image.height + min(image.width, image.height)) // 2, ) ) if size is not None: image = image.resize(size) image = torch.tensor(np.array(image).transpose(2, 0, 1)).unsqueeze(0).float() image = image / 127.5 - 1.0 return image class TimestepEmbedding_(nn.Module): def __init__(self, n_time=1024, n_emb=320, n_out=1280) -> 
None: super().__init__() self.emb = nn.Embedding(n_time, n_emb) self.f_1 = nn.Linear(n_emb, n_out) self.f_2 = nn.Linear(n_out, n_out) def forward(self, x) -> torch.Tensor: x = self.emb(x) x = self.f_1(x) x = F.silu(x) return self.f_2(x) class ImageEmbedding(nn.Module): def __init__(self, in_channels=7, out_channels=320) -> None: super().__init__() self.f = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1) def forward(self, x) -> torch.Tensor: return self.f(x) class ImageUnembedding(nn.Module): def __init__(self, in_channels=320, out_channels=6) -> None: super().__init__() self.gn = nn.GroupNorm(32, in_channels) self.f = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1) def forward(self, x) -> torch.Tensor: return self.f(F.silu(self.gn(x))) class ConvResblock(nn.Module): def __init__(self, in_features=320, out_features=320) -> None: super().__init__() self.f_t = nn.Linear(1280, out_features * 2) self.gn_1 = nn.GroupNorm(32, in_features) self.f_1 = nn.Conv2d(in_features, out_features, kernel_size=3, padding=1) self.gn_2 = nn.GroupNorm(32, out_features) self.f_2 = nn.Conv2d(out_features, out_features, kernel_size=3, padding=1) skip_conv = in_features != out_features self.f_s = nn.Conv2d(in_features, out_features, kernel_size=1, padding=0) if skip_conv else nn.Identity() def forward(self, x, t): x_skip = x t = self.f_t(F.silu(t)) t = t.chunk(2, dim=1) t_1 = t[0].unsqueeze(dim=2).unsqueeze(dim=3) + 1 t_2 = t[1].unsqueeze(dim=2).unsqueeze(dim=3) gn_1 = F.silu(self.gn_1(x)) f_1 = self.f_1(gn_1) gn_2 = self.gn_2(f_1) return self.f_s(x_skip) + self.f_2(F.silu(gn_2 * t_1 + t_2)) # Also ConvResblock class Downsample(nn.Module): def __init__(self, in_channels=320) -> None: super().__init__() self.f_t = nn.Linear(1280, in_channels * 2) self.gn_1 = nn.GroupNorm(32, in_channels) self.f_1 = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1) self.gn_2 = nn.GroupNorm(32, in_channels) self.f_2 = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1) def forward(self, x, t) -> torch.Tensor: x_skip = x t = self.f_t(F.silu(t)) t_1, t_2 = t.chunk(2, dim=1) t_1 = t_1.unsqueeze(2).unsqueeze(3) + 1 t_2 = t_2.unsqueeze(2).unsqueeze(3) gn_1 = F.silu(self.gn_1(x)) avg_pool2d = F.avg_pool2d(gn_1, kernel_size=(2, 2), stride=None) f_1 = self.f_1(avg_pool2d) gn_2 = self.gn_2(f_1) f_2 = self.f_2(F.silu(t_2 + (t_1 * gn_2))) return f_2 + F.avg_pool2d(x_skip, kernel_size=(2, 2), stride=None) # Also ConvResblock class Upsample(nn.Module): def __init__(self, in_channels=1024) -> None: super().__init__() self.f_t = nn.Linear(1280, in_channels * 2) self.gn_1 = nn.GroupNorm(32, in_channels) self.f_1 = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1) self.gn_2 = nn.GroupNorm(32, in_channels) self.f_2 = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1) def forward(self, x, t) -> torch.Tensor: x_skip = x t = self.f_t(F.silu(t)) t_1, t_2 = t.chunk(2, dim=1) t_1 = t_1.unsqueeze(2).unsqueeze(3) + 1 t_2 = t_2.unsqueeze(2).unsqueeze(3) gn_1 = F.silu(self.gn_1(x)) upsample = F.upsample_nearest(gn_1, scale_factor=2) f_1 = self.f_1(upsample) gn_2 = self.gn_2(f_1) f_2 = self.f_2(F.silu(t_2 + (t_1 * gn_2))) return f_2 + F.upsample_nearest(x_skip, scale_factor=2) class ConvUNetVAE(nn.Module): def __init__(self) -> None: super().__init__() self.embed_image = ImageEmbedding() self.embed_time = TimestepEmbedding_() down_0 = nn.ModuleList( [ ConvResblock(320, 320), ConvResblock(320, 320), ConvResblock(320, 320), Downsample(320), ] ) down_1 = nn.ModuleList( [ ConvResblock(320, 
640), ConvResblock(640, 640), ConvResblock(640, 640), Downsample(640), ] ) down_2 = nn.ModuleList( [ ConvResblock(640, 1024), ConvResblock(1024, 1024), ConvResblock(1024, 1024), Downsample(1024), ] ) down_3 = nn.ModuleList( [ ConvResblock(1024, 1024), ConvResblock(1024, 1024), ConvResblock(1024, 1024), ] ) self.down = nn.ModuleList( [ down_0, down_1, down_2, down_3, ] ) self.mid = nn.ModuleList( [ ConvResblock(1024, 1024), ConvResblock(1024, 1024), ] ) up_3 = nn.ModuleList( [ ConvResblock(1024 * 2, 1024), ConvResblock(1024 * 2, 1024), ConvResblock(1024 * 2, 1024), ConvResblock(1024 * 2, 1024), Upsample(1024), ] ) up_2 = nn.ModuleList( [ ConvResblock(1024 * 2, 1024), ConvResblock(1024 * 2, 1024), ConvResblock(1024 * 2, 1024), ConvResblock(1024 + 640, 1024), Upsample(1024), ] ) up_1 = nn.ModuleList( [ ConvResblock(1024 + 640, 640), ConvResblock(640 * 2, 640), ConvResblock(640 * 2, 640), ConvResblock(320 + 640, 640), Upsample(640), ] ) up_0 = nn.ModuleList( [ ConvResblock(320 + 640, 320), ConvResblock(320 * 2, 320), ConvResblock(320 * 2, 320), ConvResblock(320 * 2, 320), ] ) self.up = nn.ModuleList( [ up_0, up_1, up_2, up_3, ] ) self.output = ImageUnembedding() def forward(self, x, t, features) -> torch.Tensor: converted = hasattr(self, "converted") and self.converted x = torch.cat([x, F.upsample_nearest(features, scale_factor=8)], dim=1) if converted: t = self.time_embedding(self.time_proj(t)) else: t = self.embed_time(t) x = self.embed_image(x) skips = [x] for i, down in enumerate(self.down): if converted and i in [0, 1, 2, 3]: x, skips_ = down(x, t) for skip in skips_: skips.append(skip) else: for block in down: x = block(x, t) skips.append(x) print(x.float().abs().sum()) if converted: x = self.mid(x, t) else: for i in range(2): x = self.mid[i](x, t) print(x.float().abs().sum()) for i, up in enumerate(self.up[::-1]): if converted and i in [0, 1, 2, 3]: skip_4 = skips.pop() skip_3 = skips.pop() skip_2 = skips.pop() skip_1 = skips.pop() skips_ = (skip_1, skip_2, skip_3, skip_4) x = up(x, skips_, t) else: for block in up: if isinstance(block, ConvResblock): x = torch.concat([x, skips.pop()], dim=1) x = block(x, t) return self.output(x) def rename_state_dict_key(k): k = k.replace("blocks.", "") for i in range(5): k = k.replace(f"down_{i}_", f"down.{i}.") k = k.replace(f"conv_{i}.", f"{i}.") k = k.replace(f"up_{i}_", f"up.{i}.") k = k.replace(f"mid_{i}", f"mid.{i}") k = k.replace("upsamp.", "4.") k = k.replace("downsamp.", "3.") k = k.replace("f_t.w", "f_t.weight").replace("f_t.b", "f_t.bias") k = k.replace("f_1.w", "f_1.weight").replace("f_1.b", "f_1.bias") k = k.replace("f_2.w", "f_2.weight").replace("f_2.b", "f_2.bias") k = k.replace("f_s.w", "f_s.weight").replace("f_s.b", "f_s.bias") k = k.replace("f.w", "f.weight").replace("f.b", "f.bias") k = k.replace("gn_1.g", "gn_1.weight").replace("gn_1.b", "gn_1.bias") k = k.replace("gn_2.g", "gn_2.weight").replace("gn_2.b", "gn_2.bias") k = k.replace("gn.g", "gn.weight").replace("gn.b", "gn.bias") return k def rename_state_dict(sd, embedding): sd = {rename_state_dict_key(k): v for k, v in sd.items()} sd["embed_time.emb.weight"] = embedding["weight"] return sd # encode with stable diffusion vae pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) pipe.vae.cuda() # construct original decoder with jitted model decoder_consistency = ConsistencyDecoder(device="cuda:0") # construct UNet code, overwrite the decoder with conv_unet_vae model = ConvUNetVAE() model.load_state_dict( rename_state_dict( 
stl("consistency_decoder.safetensors"), stl("embedding.safetensors"), ) ) model = model.cuda() decoder_consistency.ckpt = model image = load_image(args.test_image, size=(256, 256), center_crop=True) latent = pipe.vae.encode(image.half().cuda()).latent_dist.sample() # decode with gan sample_gan = pipe.vae.decode(latent).sample.detach() save_image(sample_gan, "gan.png") # decode with conv_unet_vae sample_consistency_orig = decoder_consistency(latent, generator=torch.Generator("cpu").manual_seed(0)) save_image(sample_consistency_orig, "con_orig.png") ########### conversion print("CONVERSION") print("DOWN BLOCK ONE") block_one_sd_orig = model.down[0].state_dict() block_one_sd_new = {} for i in range(3): block_one_sd_new[f"resnets.{i}.norm1.weight"] = block_one_sd_orig.pop(f"{i}.gn_1.weight") block_one_sd_new[f"resnets.{i}.norm1.bias"] = block_one_sd_orig.pop(f"{i}.gn_1.bias") block_one_sd_new[f"resnets.{i}.conv1.weight"] = block_one_sd_orig.pop(f"{i}.f_1.weight") block_one_sd_new[f"resnets.{i}.conv1.bias"] = block_one_sd_orig.pop(f"{i}.f_1.bias") block_one_sd_new[f"resnets.{i}.time_emb_proj.weight"] = block_one_sd_orig.pop(f"{i}.f_t.weight") block_one_sd_new[f"resnets.{i}.time_emb_proj.bias"] = block_one_sd_orig.pop(f"{i}.f_t.bias") block_one_sd_new[f"resnets.{i}.norm2.weight"] = block_one_sd_orig.pop(f"{i}.gn_2.weight") block_one_sd_new[f"resnets.{i}.norm2.bias"] = block_one_sd_orig.pop(f"{i}.gn_2.bias") block_one_sd_new[f"resnets.{i}.conv2.weight"] = block_one_sd_orig.pop(f"{i}.f_2.weight") block_one_sd_new[f"resnets.{i}.conv2.bias"] = block_one_sd_orig.pop(f"{i}.f_2.bias") block_one_sd_new["downsamplers.0.norm1.weight"] = block_one_sd_orig.pop("3.gn_1.weight") block_one_sd_new["downsamplers.0.norm1.bias"] = block_one_sd_orig.pop("3.gn_1.bias") block_one_sd_new["downsamplers.0.conv1.weight"] = block_one_sd_orig.pop("3.f_1.weight") block_one_sd_new["downsamplers.0.conv1.bias"] = block_one_sd_orig.pop("3.f_1.bias") block_one_sd_new["downsamplers.0.time_emb_proj.weight"] = block_one_sd_orig.pop("3.f_t.weight") block_one_sd_new["downsamplers.0.time_emb_proj.bias"] = block_one_sd_orig.pop("3.f_t.bias") block_one_sd_new["downsamplers.0.norm2.weight"] = block_one_sd_orig.pop("3.gn_2.weight") block_one_sd_new["downsamplers.0.norm2.bias"] = block_one_sd_orig.pop("3.gn_2.bias") block_one_sd_new["downsamplers.0.conv2.weight"] = block_one_sd_orig.pop("3.f_2.weight") block_one_sd_new["downsamplers.0.conv2.bias"] = block_one_sd_orig.pop("3.f_2.bias") assert len(block_one_sd_orig) == 0 block_one = ResnetDownsampleBlock2D( in_channels=320, out_channels=320, temb_channels=1280, num_layers=3, add_downsample=True, resnet_time_scale_shift="scale_shift", resnet_eps=1e-5, ) block_one.load_state_dict(block_one_sd_new) print("DOWN BLOCK TWO") block_two_sd_orig = model.down[1].state_dict() block_two_sd_new = {} for i in range(3): block_two_sd_new[f"resnets.{i}.norm1.weight"] = block_two_sd_orig.pop(f"{i}.gn_1.weight") block_two_sd_new[f"resnets.{i}.norm1.bias"] = block_two_sd_orig.pop(f"{i}.gn_1.bias") block_two_sd_new[f"resnets.{i}.conv1.weight"] = block_two_sd_orig.pop(f"{i}.f_1.weight") block_two_sd_new[f"resnets.{i}.conv1.bias"] = block_two_sd_orig.pop(f"{i}.f_1.bias") block_two_sd_new[f"resnets.{i}.time_emb_proj.weight"] = block_two_sd_orig.pop(f"{i}.f_t.weight") block_two_sd_new[f"resnets.{i}.time_emb_proj.bias"] = block_two_sd_orig.pop(f"{i}.f_t.bias") block_two_sd_new[f"resnets.{i}.norm2.weight"] = block_two_sd_orig.pop(f"{i}.gn_2.weight") block_two_sd_new[f"resnets.{i}.norm2.bias"] = 
block_two_sd_orig.pop(f"{i}.gn_2.bias") block_two_sd_new[f"resnets.{i}.conv2.weight"] = block_two_sd_orig.pop(f"{i}.f_2.weight") block_two_sd_new[f"resnets.{i}.conv2.bias"] = block_two_sd_orig.pop(f"{i}.f_2.bias") if i == 0: block_two_sd_new[f"resnets.{i}.conv_shortcut.weight"] = block_two_sd_orig.pop(f"{i}.f_s.weight") block_two_sd_new[f"resnets.{i}.conv_shortcut.bias"] = block_two_sd_orig.pop(f"{i}.f_s.bias") block_two_sd_new["downsamplers.0.norm1.weight"] = block_two_sd_orig.pop("3.gn_1.weight") block_two_sd_new["downsamplers.0.norm1.bias"] = block_two_sd_orig.pop("3.gn_1.bias") block_two_sd_new["downsamplers.0.conv1.weight"] = block_two_sd_orig.pop("3.f_1.weight") block_two_sd_new["downsamplers.0.conv1.bias"] = block_two_sd_orig.pop("3.f_1.bias") block_two_sd_new["downsamplers.0.time_emb_proj.weight"] = block_two_sd_orig.pop("3.f_t.weight") block_two_sd_new["downsamplers.0.time_emb_proj.bias"] = block_two_sd_orig.pop("3.f_t.bias") block_two_sd_new["downsamplers.0.norm2.weight"] = block_two_sd_orig.pop("3.gn_2.weight") block_two_sd_new["downsamplers.0.norm2.bias"] = block_two_sd_orig.pop("3.gn_2.bias") block_two_sd_new["downsamplers.0.conv2.weight"] = block_two_sd_orig.pop("3.f_2.weight") block_two_sd_new["downsamplers.0.conv2.bias"] = block_two_sd_orig.pop("3.f_2.bias") assert len(block_two_sd_orig) == 0 block_two = ResnetDownsampleBlock2D( in_channels=320, out_channels=640, temb_channels=1280, num_layers=3, add_downsample=True, resnet_time_scale_shift="scale_shift", resnet_eps=1e-5, ) block_two.load_state_dict(block_two_sd_new) print("DOWN BLOCK THREE") block_three_sd_orig = model.down[2].state_dict() block_three_sd_new = {} for i in range(3): block_three_sd_new[f"resnets.{i}.norm1.weight"] = block_three_sd_orig.pop(f"{i}.gn_1.weight") block_three_sd_new[f"resnets.{i}.norm1.bias"] = block_three_sd_orig.pop(f"{i}.gn_1.bias") block_three_sd_new[f"resnets.{i}.conv1.weight"] = block_three_sd_orig.pop(f"{i}.f_1.weight") block_three_sd_new[f"resnets.{i}.conv1.bias"] = block_three_sd_orig.pop(f"{i}.f_1.bias") block_three_sd_new[f"resnets.{i}.time_emb_proj.weight"] = block_three_sd_orig.pop(f"{i}.f_t.weight") block_three_sd_new[f"resnets.{i}.time_emb_proj.bias"] = block_three_sd_orig.pop(f"{i}.f_t.bias") block_three_sd_new[f"resnets.{i}.norm2.weight"] = block_three_sd_orig.pop(f"{i}.gn_2.weight") block_three_sd_new[f"resnets.{i}.norm2.bias"] = block_three_sd_orig.pop(f"{i}.gn_2.bias") block_three_sd_new[f"resnets.{i}.conv2.weight"] = block_three_sd_orig.pop(f"{i}.f_2.weight") block_three_sd_new[f"resnets.{i}.conv2.bias"] = block_three_sd_orig.pop(f"{i}.f_2.bias") if i == 0: block_three_sd_new[f"resnets.{i}.conv_shortcut.weight"] = block_three_sd_orig.pop(f"{i}.f_s.weight") block_three_sd_new[f"resnets.{i}.conv_shortcut.bias"] = block_three_sd_orig.pop(f"{i}.f_s.bias") block_three_sd_new["downsamplers.0.norm1.weight"] = block_three_sd_orig.pop("3.gn_1.weight") block_three_sd_new["downsamplers.0.norm1.bias"] = block_three_sd_orig.pop("3.gn_1.bias") block_three_sd_new["downsamplers.0.conv1.weight"] = block_three_sd_orig.pop("3.f_1.weight") block_three_sd_new["downsamplers.0.conv1.bias"] = block_three_sd_orig.pop("3.f_1.bias") block_three_sd_new["downsamplers.0.time_emb_proj.weight"] = block_three_sd_orig.pop("3.f_t.weight") block_three_sd_new["downsamplers.0.time_emb_proj.bias"] = block_three_sd_orig.pop("3.f_t.bias") block_three_sd_new["downsamplers.0.norm2.weight"] = block_three_sd_orig.pop("3.gn_2.weight") block_three_sd_new["downsamplers.0.norm2.bias"] = 
block_three_sd_orig.pop("3.gn_2.bias") block_three_sd_new["downsamplers.0.conv2.weight"] = block_three_sd_orig.pop("3.f_2.weight") block_three_sd_new["downsamplers.0.conv2.bias"] = block_three_sd_orig.pop("3.f_2.bias") assert len(block_three_sd_orig) == 0 block_three = ResnetDownsampleBlock2D( in_channels=640, out_channels=1024, temb_channels=1280, num_layers=3, add_downsample=True, resnet_time_scale_shift="scale_shift", resnet_eps=1e-5, ) block_three.load_state_dict(block_three_sd_new) print("DOWN BLOCK FOUR") block_four_sd_orig = model.down[3].state_dict() block_four_sd_new = {} for i in range(3): block_four_sd_new[f"resnets.{i}.norm1.weight"] = block_four_sd_orig.pop(f"{i}.gn_1.weight") block_four_sd_new[f"resnets.{i}.norm1.bias"] = block_four_sd_orig.pop(f"{i}.gn_1.bias") block_four_sd_new[f"resnets.{i}.conv1.weight"] = block_four_sd_orig.pop(f"{i}.f_1.weight") block_four_sd_new[f"resnets.{i}.conv1.bias"] = block_four_sd_orig.pop(f"{i}.f_1.bias") block_four_sd_new[f"resnets.{i}.time_emb_proj.weight"] = block_four_sd_orig.pop(f"{i}.f_t.weight") block_four_sd_new[f"resnets.{i}.time_emb_proj.bias"] = block_four_sd_orig.pop(f"{i}.f_t.bias") block_four_sd_new[f"resnets.{i}.norm2.weight"] = block_four_sd_orig.pop(f"{i}.gn_2.weight") block_four_sd_new[f"resnets.{i}.norm2.bias"] = block_four_sd_orig.pop(f"{i}.gn_2.bias") block_four_sd_new[f"resnets.{i}.conv2.weight"] = block_four_sd_orig.pop(f"{i}.f_2.weight") block_four_sd_new[f"resnets.{i}.conv2.bias"] = block_four_sd_orig.pop(f"{i}.f_2.bias") assert len(block_four_sd_orig) == 0 block_four = ResnetDownsampleBlock2D( in_channels=1024, out_channels=1024, temb_channels=1280, num_layers=3, add_downsample=False, resnet_time_scale_shift="scale_shift", resnet_eps=1e-5, ) block_four.load_state_dict(block_four_sd_new) print("MID BLOCK 1") mid_block_one_sd_orig = model.mid.state_dict() mid_block_one_sd_new = {} for i in range(2): mid_block_one_sd_new[f"resnets.{i}.norm1.weight"] = mid_block_one_sd_orig.pop(f"{i}.gn_1.weight") mid_block_one_sd_new[f"resnets.{i}.norm1.bias"] = mid_block_one_sd_orig.pop(f"{i}.gn_1.bias") mid_block_one_sd_new[f"resnets.{i}.conv1.weight"] = mid_block_one_sd_orig.pop(f"{i}.f_1.weight") mid_block_one_sd_new[f"resnets.{i}.conv1.bias"] = mid_block_one_sd_orig.pop(f"{i}.f_1.bias") mid_block_one_sd_new[f"resnets.{i}.time_emb_proj.weight"] = mid_block_one_sd_orig.pop(f"{i}.f_t.weight") mid_block_one_sd_new[f"resnets.{i}.time_emb_proj.bias"] = mid_block_one_sd_orig.pop(f"{i}.f_t.bias") mid_block_one_sd_new[f"resnets.{i}.norm2.weight"] = mid_block_one_sd_orig.pop(f"{i}.gn_2.weight") mid_block_one_sd_new[f"resnets.{i}.norm2.bias"] = mid_block_one_sd_orig.pop(f"{i}.gn_2.bias") mid_block_one_sd_new[f"resnets.{i}.conv2.weight"] = mid_block_one_sd_orig.pop(f"{i}.f_2.weight") mid_block_one_sd_new[f"resnets.{i}.conv2.bias"] = mid_block_one_sd_orig.pop(f"{i}.f_2.bias") assert len(mid_block_one_sd_orig) == 0 mid_block_one = UNetMidBlock2D( in_channels=1024, temb_channels=1280, num_layers=1, resnet_time_scale_shift="scale_shift", resnet_eps=1e-5, add_attention=False, ) mid_block_one.load_state_dict(mid_block_one_sd_new) print("UP BLOCK ONE") up_block_one_sd_orig = model.up[-1].state_dict() up_block_one_sd_new = {} for i in range(4): up_block_one_sd_new[f"resnets.{i}.norm1.weight"] = up_block_one_sd_orig.pop(f"{i}.gn_1.weight") up_block_one_sd_new[f"resnets.{i}.norm1.bias"] = up_block_one_sd_orig.pop(f"{i}.gn_1.bias") up_block_one_sd_new[f"resnets.{i}.conv1.weight"] = up_block_one_sd_orig.pop(f"{i}.f_1.weight") 
up_block_one_sd_new[f"resnets.{i}.conv1.bias"] = up_block_one_sd_orig.pop(f"{i}.f_1.bias") up_block_one_sd_new[f"resnets.{i}.time_emb_proj.weight"] = up_block_one_sd_orig.pop(f"{i}.f_t.weight") up_block_one_sd_new[f"resnets.{i}.time_emb_proj.bias"] = up_block_one_sd_orig.pop(f"{i}.f_t.bias") up_block_one_sd_new[f"resnets.{i}.norm2.weight"] = up_block_one_sd_orig.pop(f"{i}.gn_2.weight") up_block_one_sd_new[f"resnets.{i}.norm2.bias"] = up_block_one_sd_orig.pop(f"{i}.gn_2.bias") up_block_one_sd_new[f"resnets.{i}.conv2.weight"] = up_block_one_sd_orig.pop(f"{i}.f_2.weight") up_block_one_sd_new[f"resnets.{i}.conv2.bias"] = up_block_one_sd_orig.pop(f"{i}.f_2.bias") up_block_one_sd_new[f"resnets.{i}.conv_shortcut.weight"] = up_block_one_sd_orig.pop(f"{i}.f_s.weight") up_block_one_sd_new[f"resnets.{i}.conv_shortcut.bias"] = up_block_one_sd_orig.pop(f"{i}.f_s.bias") up_block_one_sd_new["upsamplers.0.norm1.weight"] = up_block_one_sd_orig.pop("4.gn_1.weight") up_block_one_sd_new["upsamplers.0.norm1.bias"] = up_block_one_sd_orig.pop("4.gn_1.bias") up_block_one_sd_new["upsamplers.0.conv1.weight"] = up_block_one_sd_orig.pop("4.f_1.weight") up_block_one_sd_new["upsamplers.0.conv1.bias"] = up_block_one_sd_orig.pop("4.f_1.bias") up_block_one_sd_new["upsamplers.0.time_emb_proj.weight"] = up_block_one_sd_orig.pop("4.f_t.weight") up_block_one_sd_new["upsamplers.0.time_emb_proj.bias"] = up_block_one_sd_orig.pop("4.f_t.bias") up_block_one_sd_new["upsamplers.0.norm2.weight"] = up_block_one_sd_orig.pop("4.gn_2.weight") up_block_one_sd_new["upsamplers.0.norm2.bias"] = up_block_one_sd_orig.pop("4.gn_2.bias") up_block_one_sd_new["upsamplers.0.conv2.weight"] = up_block_one_sd_orig.pop("4.f_2.weight") up_block_one_sd_new["upsamplers.0.conv2.bias"] = up_block_one_sd_orig.pop("4.f_2.bias") assert len(up_block_one_sd_orig) == 0 up_block_one = ResnetUpsampleBlock2D( in_channels=1024, prev_output_channel=1024, out_channels=1024, temb_channels=1280, num_layers=4, add_upsample=True, resnet_time_scale_shift="scale_shift", resnet_eps=1e-5, ) up_block_one.load_state_dict(up_block_one_sd_new) print("UP BLOCK TWO") up_block_two_sd_orig = model.up[-2].state_dict() up_block_two_sd_new = {} for i in range(4): up_block_two_sd_new[f"resnets.{i}.norm1.weight"] = up_block_two_sd_orig.pop(f"{i}.gn_1.weight") up_block_two_sd_new[f"resnets.{i}.norm1.bias"] = up_block_two_sd_orig.pop(f"{i}.gn_1.bias") up_block_two_sd_new[f"resnets.{i}.conv1.weight"] = up_block_two_sd_orig.pop(f"{i}.f_1.weight") up_block_two_sd_new[f"resnets.{i}.conv1.bias"] = up_block_two_sd_orig.pop(f"{i}.f_1.bias") up_block_two_sd_new[f"resnets.{i}.time_emb_proj.weight"] = up_block_two_sd_orig.pop(f"{i}.f_t.weight") up_block_two_sd_new[f"resnets.{i}.time_emb_proj.bias"] = up_block_two_sd_orig.pop(f"{i}.f_t.bias") up_block_two_sd_new[f"resnets.{i}.norm2.weight"] = up_block_two_sd_orig.pop(f"{i}.gn_2.weight") up_block_two_sd_new[f"resnets.{i}.norm2.bias"] = up_block_two_sd_orig.pop(f"{i}.gn_2.bias") up_block_two_sd_new[f"resnets.{i}.conv2.weight"] = up_block_two_sd_orig.pop(f"{i}.f_2.weight") up_block_two_sd_new[f"resnets.{i}.conv2.bias"] = up_block_two_sd_orig.pop(f"{i}.f_2.bias") up_block_two_sd_new[f"resnets.{i}.conv_shortcut.weight"] = up_block_two_sd_orig.pop(f"{i}.f_s.weight") up_block_two_sd_new[f"resnets.{i}.conv_shortcut.bias"] = up_block_two_sd_orig.pop(f"{i}.f_s.bias") up_block_two_sd_new["upsamplers.0.norm1.weight"] = up_block_two_sd_orig.pop("4.gn_1.weight") up_block_two_sd_new["upsamplers.0.norm1.bias"] = up_block_two_sd_orig.pop("4.gn_1.bias") 
up_block_two_sd_new["upsamplers.0.conv1.weight"] = up_block_two_sd_orig.pop("4.f_1.weight") up_block_two_sd_new["upsamplers.0.conv1.bias"] = up_block_two_sd_orig.pop("4.f_1.bias") up_block_two_sd_new["upsamplers.0.time_emb_proj.weight"] = up_block_two_sd_orig.pop("4.f_t.weight") up_block_two_sd_new["upsamplers.0.time_emb_proj.bias"] = up_block_two_sd_orig.pop("4.f_t.bias") up_block_two_sd_new["upsamplers.0.norm2.weight"] = up_block_two_sd_orig.pop("4.gn_2.weight") up_block_two_sd_new["upsamplers.0.norm2.bias"] = up_block_two_sd_orig.pop("4.gn_2.bias") up_block_two_sd_new["upsamplers.0.conv2.weight"] = up_block_two_sd_orig.pop("4.f_2.weight") up_block_two_sd_new["upsamplers.0.conv2.bias"] = up_block_two_sd_orig.pop("4.f_2.bias") assert len(up_block_two_sd_orig) == 0 up_block_two = ResnetUpsampleBlock2D( in_channels=640, prev_output_channel=1024, out_channels=1024, temb_channels=1280, num_layers=4, add_upsample=True, resnet_time_scale_shift="scale_shift", resnet_eps=1e-5, ) up_block_two.load_state_dict(up_block_two_sd_new) print("UP BLOCK THREE") up_block_three_sd_orig = model.up[-3].state_dict() up_block_three_sd_new = {} for i in range(4): up_block_three_sd_new[f"resnets.{i}.norm1.weight"] = up_block_three_sd_orig.pop(f"{i}.gn_1.weight") up_block_three_sd_new[f"resnets.{i}.norm1.bias"] = up_block_three_sd_orig.pop(f"{i}.gn_1.bias") up_block_three_sd_new[f"resnets.{i}.conv1.weight"] = up_block_three_sd_orig.pop(f"{i}.f_1.weight") up_block_three_sd_new[f"resnets.{i}.conv1.bias"] = up_block_three_sd_orig.pop(f"{i}.f_1.bias") up_block_three_sd_new[f"resnets.{i}.time_emb_proj.weight"] = up_block_three_sd_orig.pop(f"{i}.f_t.weight") up_block_three_sd_new[f"resnets.{i}.time_emb_proj.bias"] = up_block_three_sd_orig.pop(f"{i}.f_t.bias") up_block_three_sd_new[f"resnets.{i}.norm2.weight"] = up_block_three_sd_orig.pop(f"{i}.gn_2.weight") up_block_three_sd_new[f"resnets.{i}.norm2.bias"] = up_block_three_sd_orig.pop(f"{i}.gn_2.bias") up_block_three_sd_new[f"resnets.{i}.conv2.weight"] = up_block_three_sd_orig.pop(f"{i}.f_2.weight") up_block_three_sd_new[f"resnets.{i}.conv2.bias"] = up_block_three_sd_orig.pop(f"{i}.f_2.bias") up_block_three_sd_new[f"resnets.{i}.conv_shortcut.weight"] = up_block_three_sd_orig.pop(f"{i}.f_s.weight") up_block_three_sd_new[f"resnets.{i}.conv_shortcut.bias"] = up_block_three_sd_orig.pop(f"{i}.f_s.bias") up_block_three_sd_new["upsamplers.0.norm1.weight"] = up_block_three_sd_orig.pop("4.gn_1.weight") up_block_three_sd_new["upsamplers.0.norm1.bias"] = up_block_three_sd_orig.pop("4.gn_1.bias") up_block_three_sd_new["upsamplers.0.conv1.weight"] = up_block_three_sd_orig.pop("4.f_1.weight") up_block_three_sd_new["upsamplers.0.conv1.bias"] = up_block_three_sd_orig.pop("4.f_1.bias") up_block_three_sd_new["upsamplers.0.time_emb_proj.weight"] = up_block_three_sd_orig.pop("4.f_t.weight") up_block_three_sd_new["upsamplers.0.time_emb_proj.bias"] = up_block_three_sd_orig.pop("4.f_t.bias") up_block_three_sd_new["upsamplers.0.norm2.weight"] = up_block_three_sd_orig.pop("4.gn_2.weight") up_block_three_sd_new["upsamplers.0.norm2.bias"] = up_block_three_sd_orig.pop("4.gn_2.bias") up_block_three_sd_new["upsamplers.0.conv2.weight"] = up_block_three_sd_orig.pop("4.f_2.weight") up_block_three_sd_new["upsamplers.0.conv2.bias"] = up_block_three_sd_orig.pop("4.f_2.bias") assert len(up_block_three_sd_orig) == 0 up_block_three = ResnetUpsampleBlock2D( in_channels=320, prev_output_channel=1024, out_channels=640, temb_channels=1280, num_layers=4, add_upsample=True, resnet_time_scale_shift="scale_shift", 
resnet_eps=1e-5, ) up_block_three.load_state_dict(up_block_three_sd_new) print("UP BLOCK FOUR") up_block_four_sd_orig = model.up[-4].state_dict() up_block_four_sd_new = {} for i in range(4): up_block_four_sd_new[f"resnets.{i}.norm1.weight"] = up_block_four_sd_orig.pop(f"{i}.gn_1.weight") up_block_four_sd_new[f"resnets.{i}.norm1.bias"] = up_block_four_sd_orig.pop(f"{i}.gn_1.bias") up_block_four_sd_new[f"resnets.{i}.conv1.weight"] = up_block_four_sd_orig.pop(f"{i}.f_1.weight") up_block_four_sd_new[f"resnets.{i}.conv1.bias"] = up_block_four_sd_orig.pop(f"{i}.f_1.bias") up_block_four_sd_new[f"resnets.{i}.time_emb_proj.weight"] = up_block_four_sd_orig.pop(f"{i}.f_t.weight") up_block_four_sd_new[f"resnets.{i}.time_emb_proj.bias"] = up_block_four_sd_orig.pop(f"{i}.f_t.bias") up_block_four_sd_new[f"resnets.{i}.norm2.weight"] = up_block_four_sd_orig.pop(f"{i}.gn_2.weight") up_block_four_sd_new[f"resnets.{i}.norm2.bias"] = up_block_four_sd_orig.pop(f"{i}.gn_2.bias") up_block_four_sd_new[f"resnets.{i}.conv2.weight"] = up_block_four_sd_orig.pop(f"{i}.f_2.weight") up_block_four_sd_new[f"resnets.{i}.conv2.bias"] = up_block_four_sd_orig.pop(f"{i}.f_2.bias") up_block_four_sd_new[f"resnets.{i}.conv_shortcut.weight"] = up_block_four_sd_orig.pop(f"{i}.f_s.weight") up_block_four_sd_new[f"resnets.{i}.conv_shortcut.bias"] = up_block_four_sd_orig.pop(f"{i}.f_s.bias") assert len(up_block_four_sd_orig) == 0 up_block_four = ResnetUpsampleBlock2D( in_channels=320, prev_output_channel=640, out_channels=320, temb_channels=1280, num_layers=4, add_upsample=False, resnet_time_scale_shift="scale_shift", resnet_eps=1e-5, ) up_block_four.load_state_dict(up_block_four_sd_new) print("initial projection (conv_in)") conv_in_sd_orig = model.embed_image.state_dict() conv_in_sd_new = {} conv_in_sd_new["weight"] = conv_in_sd_orig.pop("f.weight") conv_in_sd_new["bias"] = conv_in_sd_orig.pop("f.bias") assert len(conv_in_sd_orig) == 0 block_out_channels = [320, 640, 1024, 1024] in_channels = 7 conv_in_kernel = 3 conv_in_padding = (conv_in_kernel - 1) // 2 conv_in = nn.Conv2d(in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding) conv_in.load_state_dict(conv_in_sd_new) print("out projection (conv_out) (conv_norm_out)") out_channels = 6 norm_num_groups = 32 norm_eps = 1e-5 act_fn = "silu" conv_out_kernel = 3 conv_out_padding = (conv_out_kernel - 1) // 2 conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps) # uses torch.functional in orig # conv_act = get_activation(act_fn) conv_out = nn.Conv2d(block_out_channels[0], out_channels, kernel_size=conv_out_kernel, padding=conv_out_padding) conv_norm_out.load_state_dict(model.output.gn.state_dict()) conv_out.load_state_dict(model.output.f.state_dict()) print("timestep projection (time_proj) (time_embedding)") f1_sd = model.embed_time.f_1.state_dict() f2_sd = model.embed_time.f_2.state_dict() time_embedding_sd = { "linear_1.weight": f1_sd.pop("weight"), "linear_1.bias": f1_sd.pop("bias"), "linear_2.weight": f2_sd.pop("weight"), "linear_2.bias": f2_sd.pop("bias"), } assert len(f1_sd) == 0 assert len(f2_sd) == 0 time_embedding_type = "learned" num_train_timesteps = 1024 time_embedding_dim = 1280 time_proj = nn.Embedding(num_train_timesteps, block_out_channels[0]) timestep_input_dim = block_out_channels[0] time_embedding = TimestepEmbedding(timestep_input_dim, time_embedding_dim) time_proj.load_state_dict(model.embed_time.emb.state_dict()) time_embedding.load_state_dict(time_embedding_sd) print("CONVERT") 
time_embedding.to("cuda") time_proj.to("cuda") conv_in.to("cuda") block_one.to("cuda") block_two.to("cuda") block_three.to("cuda") block_four.to("cuda") mid_block_one.to("cuda") up_block_one.to("cuda") up_block_two.to("cuda") up_block_three.to("cuda") up_block_four.to("cuda") conv_norm_out.to("cuda") conv_out.to("cuda") model.time_proj = time_proj model.time_embedding = time_embedding model.embed_image = conv_in model.down[0] = block_one model.down[1] = block_two model.down[2] = block_three model.down[3] = block_four model.mid = mid_block_one model.up[-1] = up_block_one model.up[-2] = up_block_two model.up[-3] = up_block_three model.up[-4] = up_block_four model.output.gn = conv_norm_out model.output.f = conv_out model.converted = True sample_consistency_new = decoder_consistency(latent, generator=torch.Generator("cpu").manual_seed(0)) save_image(sample_consistency_new, "con_new.png") assert (sample_consistency_orig == sample_consistency_new).all() print("making unet") unet = UNet2DModel( in_channels=in_channels, out_channels=out_channels, down_block_types=( "ResnetDownsampleBlock2D", "ResnetDownsampleBlock2D", "ResnetDownsampleBlock2D", "ResnetDownsampleBlock2D", ), up_block_types=( "ResnetUpsampleBlock2D", "ResnetUpsampleBlock2D", "ResnetUpsampleBlock2D", "ResnetUpsampleBlock2D", ), block_out_channels=block_out_channels, layers_per_block=3, norm_num_groups=norm_num_groups, norm_eps=norm_eps, resnet_time_scale_shift="scale_shift", time_embedding_type="learned", num_train_timesteps=num_train_timesteps, add_attention=False, ) unet_state_dict = {} def add_state_dict(prefix, mod): for k, v in mod.state_dict().items(): unet_state_dict[f"{prefix}.{k}"] = v add_state_dict("conv_in", conv_in) add_state_dict("time_proj", time_proj) add_state_dict("time_embedding", time_embedding) add_state_dict("down_blocks.0", block_one) add_state_dict("down_blocks.1", block_two) add_state_dict("down_blocks.2", block_three) add_state_dict("down_blocks.3", block_four) add_state_dict("mid_block", mid_block_one) add_state_dict("up_blocks.0", up_block_one) add_state_dict("up_blocks.1", up_block_two) add_state_dict("up_blocks.2", up_block_three) add_state_dict("up_blocks.3", up_block_four) add_state_dict("conv_norm_out", conv_norm_out) add_state_dict("conv_out", conv_out) unet.load_state_dict(unet_state_dict) print("running with diffusers unet") unet.to("cuda") decoder_consistency.ckpt = unet sample_consistency_new_2 = decoder_consistency(latent, generator=torch.Generator("cpu").manual_seed(0)) save_image(sample_consistency_new_2, "con_new_2.png") assert (sample_consistency_orig == sample_consistency_new_2).all() print("running with diffusers model") Encoder.old_constructor = Encoder.__init__ def new_constructor(self, **kwargs): self.old_constructor(**kwargs) self.constructor_arguments = kwargs Encoder.__init__ = new_constructor vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae") consistency_vae = ConsistencyDecoderVAE( encoder_args=vae.encoder.constructor_arguments, decoder_args=unet.config, scaling_factor=vae.config.scaling_factor, block_out_channels=vae.config.block_out_channels, latent_channels=vae.config.latent_channels, ) consistency_vae.encoder.load_state_dict(vae.encoder.state_dict()) consistency_vae.quant_conv.load_state_dict(vae.quant_conv.state_dict()) consistency_vae.decoder_unet.load_state_dict(unet.state_dict()) consistency_vae.to(dtype=torch.float16, device="cuda") sample_consistency_new_3 = consistency_vae.decode( 0.18215 * latent, 
generator=torch.Generator("cpu").manual_seed(0) ).sample print("max difference") print((sample_consistency_orig - sample_consistency_new_3).abs().max()) print("total difference") print((sample_consistency_orig - sample_consistency_new_3).abs().sum()) # assert (sample_consistency_orig == sample_consistency_new_3).all() print("running with diffusers pipeline") pipe = DiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", vae=consistency_vae, torch_dtype=torch.float16 ) pipe.to("cuda") pipe("horse", generator=torch.Generator("cpu").manual_seed(0)).images[0].save("horse.png") if args.save_pretrained is not None: consistency_vae.save_pretrained(args.save_pretrained)
diffusers/scripts/convert_consistency_decoder.py/0
{ "file_path": "diffusers/scripts/convert_consistency_decoder.py", "repo_id": "diffusers", "token_count": 21911 }
219
# coding=utf-8
# Copyright 2024, Haofan Wang, Qixun Wang, All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Conversion script for LoRA safetensors checkpoints."""

import argparse

import torch
from safetensors.torch import load_file

from diffusers import StableDiffusionPipeline


def convert(base_model_path, checkpoint_path, LORA_PREFIX_UNET, LORA_PREFIX_TEXT_ENCODER, alpha):
    # load the base model
    pipeline = StableDiffusionPipeline.from_pretrained(base_model_path, torch_dtype=torch.float32)

    # load the LoRA weights from the .safetensors checkpoint
    state_dict = load_file(checkpoint_path)

    visited = []

    # directly update the weights of the diffusers model in place
    for key in state_dict:
        # keys typically look like
        # "lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight"

        # the merging ratio is passed in as `alpha`, so skip the stored ".alpha" entries
        # as well as keys that were already merged as part of an up/down pair
        if ".alpha" in key or key in visited:
            continue

        if "text" in key:
            layer_infos = key.split(".")[0].split(LORA_PREFIX_TEXT_ENCODER + "_")[-1].split("_")
            curr_layer = pipeline.text_encoder
        else:
            layer_infos = key.split(".")[0].split(LORA_PREFIX_UNET + "_")[-1].split("_")
            curr_layer = pipeline.unet

        # find the target layer; module names may themselves contain underscores,
        # so a missing attribute is retried with the next path segment appended
        temp_name = layer_infos.pop(0)
        while len(layer_infos) > -1:
            try:
                curr_layer = curr_layer.__getattr__(temp_name)
                if len(layer_infos) > 0:
                    temp_name = layer_infos.pop(0)
                elif len(layer_infos) == 0:
                    break
            except Exception:
                if len(temp_name) > 0:
                    temp_name += "_" + layer_infos.pop(0)
                else:
                    temp_name = layer_infos.pop(0)

        pair_keys = []
        if "lora_down" in key:
            pair_keys.append(key.replace("lora_down", "lora_up"))
            pair_keys.append(key)
        else:
            pair_keys.append(key)
            pair_keys.append(key.replace("lora_up", "lora_down"))

        # merge the weights: W <- W + alpha * (up @ down); 4D conv-style LoRA weights
        # are squeezed to 2D for the matrix multiplication and unsqueezed back afterwards
        if len(state_dict[pair_keys[0]].shape) == 4:
            weight_up = state_dict[pair_keys[0]].squeeze(3).squeeze(2).to(torch.float32)
            weight_down = state_dict[pair_keys[1]].squeeze(3).squeeze(2).to(torch.float32)
            curr_layer.weight.data += alpha * torch.mm(weight_up, weight_down).unsqueeze(2).unsqueeze(3)
        else:
            weight_up = state_dict[pair_keys[0]].to(torch.float32)
            weight_down = state_dict[pair_keys[1]].to(torch.float32)
            curr_layer.weight.data += alpha * torch.mm(weight_up, weight_down)

        # mark both halves of the pair as processed
        for item in pair_keys:
            visited.append(item)

    return pipeline


if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    parser.add_argument(
        "--base_model_path", default=None, type=str, required=True, help="Path to the base model in diffusers format."
    )
    parser.add_argument(
        "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert."
    )
    parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.")
    parser.add_argument(
        "--lora_prefix_unet", default="lora_unet", type=str, help="The prefix of the UNet weights in the safetensors file."
    )
    parser.add_argument(
        "--lora_prefix_text_encoder",
        default="lora_te",
        type=str,
        help="The prefix of the text encoder weights in the safetensors file.",
    )
    parser.add_argument("--alpha", default=0.75, type=float, help="The merging ratio in W = W0 + alpha * deltaW.")
    parser.add_argument(
        "--to_safetensors", action="store_true", help="Whether to store the pipeline in safetensors format or not."
    )
    parser.add_argument("--device", type=str, help="Device to use (e.g. cpu, cuda:0, cuda:1, etc.)")

    args = parser.parse_args()

    base_model_path = args.base_model_path
    checkpoint_path = args.checkpoint_path
    dump_path = args.dump_path
    lora_prefix_unet = args.lora_prefix_unet
    lora_prefix_text_encoder = args.lora_prefix_text_encoder
    alpha = args.alpha

    pipe = convert(base_model_path, checkpoint_path, lora_prefix_unet, lora_prefix_text_encoder, alpha)

    pipe = pipe.to(args.device)
    pipe.save_pretrained(args.dump_path, safe_serialization=args.to_safetensors)
diffusers/scripts/convert_lora_safetensor_to_diffusers.py/0
{ "file_path": "diffusers/scripts/convert_lora_safetensor_to_diffusers.py", "repo_id": "diffusers", "token_count": 2130 }
220
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import os import shutil from pathlib import Path import onnx import torch from packaging import version from torch.onnx import export from diffusers import OnnxRuntimeModel, OnnxStableDiffusionPipeline, StableDiffusionPipeline is_torch_less_than_1_11 = version.parse(version.parse(torch.__version__).base_version) < version.parse("1.11") def onnx_export( model, model_args: tuple, output_path: Path, ordered_input_names, output_names, dynamic_axes, opset, use_external_data_format=False, ): output_path.parent.mkdir(parents=True, exist_ok=True) # PyTorch deprecated the `enable_onnx_checker` and `use_external_data_format` arguments in v1.11, # so we check the torch version for backwards compatibility if is_torch_less_than_1_11: export( model, model_args, f=output_path.as_posix(), input_names=ordered_input_names, output_names=output_names, dynamic_axes=dynamic_axes, do_constant_folding=True, use_external_data_format=use_external_data_format, enable_onnx_checker=True, opset_version=opset, ) else: export( model, model_args, f=output_path.as_posix(), input_names=ordered_input_names, output_names=output_names, dynamic_axes=dynamic_axes, do_constant_folding=True, opset_version=opset, ) @torch.no_grad() def convert_models(model_path: str, output_path: str, opset: int, fp16: bool = False): dtype = torch.float16 if fp16 else torch.float32 if fp16 and torch.cuda.is_available(): device = "cuda" elif fp16 and not torch.cuda.is_available(): raise ValueError("`float16` model export is only supported on GPUs with CUDA") else: device = "cpu" pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device) output_path = Path(output_path) # TEXT ENCODER num_tokens = pipeline.text_encoder.config.max_position_embeddings text_hidden_size = pipeline.text_encoder.config.hidden_size text_input = pipeline.tokenizer( "A sample prompt", padding="max_length", max_length=pipeline.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) onnx_export( pipeline.text_encoder, # casting to torch.int32 until the CLIP fix is released: https://github.com/huggingface/transformers/pull/18515/files model_args=(text_input.input_ids.to(device=device, dtype=torch.int32)), output_path=output_path / "text_encoder" / "model.onnx", ordered_input_names=["input_ids"], output_names=["last_hidden_state", "pooler_output"], dynamic_axes={ "input_ids": {0: "batch", 1: "sequence"}, }, opset=opset, ) del pipeline.text_encoder # UNET unet_in_channels = pipeline.unet.config.in_channels unet_sample_size = pipeline.unet.config.sample_size unet_path = output_path / "unet" / "model.onnx" onnx_export( pipeline.unet, model_args=( torch.randn(2, unet_in_channels, unet_sample_size, unet_sample_size).to(device=device, dtype=dtype), torch.randn(2).to(device=device, dtype=dtype), torch.randn(2, num_tokens, text_hidden_size).to(device=device, dtype=dtype), False, ), output_path=unet_path, ordered_input_names=["sample", 
"timestep", "encoder_hidden_states", "return_dict"], output_names=["out_sample"], # has to be different from "sample" for correct tracing dynamic_axes={ "sample": {0: "batch", 1: "channels", 2: "height", 3: "width"}, "timestep": {0: "batch"}, "encoder_hidden_states": {0: "batch", 1: "sequence"}, }, opset=opset, use_external_data_format=True, # UNet is > 2GB, so the weights need to be split ) unet_model_path = str(unet_path.absolute().as_posix()) unet_dir = os.path.dirname(unet_model_path) unet = onnx.load(unet_model_path) # clean up existing tensor files shutil.rmtree(unet_dir) os.mkdir(unet_dir) # collate external tensor files into one onnx.save_model( unet, unet_model_path, save_as_external_data=True, all_tensors_to_one_file=True, location="weights.pb", convert_attribute=False, ) del pipeline.unet # VAE ENCODER vae_encoder = pipeline.vae vae_in_channels = vae_encoder.config.in_channels vae_sample_size = vae_encoder.config.sample_size # need to get the raw tensor output (sample) from the encoder vae_encoder.forward = lambda sample, return_dict: vae_encoder.encode(sample, return_dict)[0].sample() onnx_export( vae_encoder, model_args=( torch.randn(1, vae_in_channels, vae_sample_size, vae_sample_size).to(device=device, dtype=dtype), False, ), output_path=output_path / "vae_encoder" / "model.onnx", ordered_input_names=["sample", "return_dict"], output_names=["latent_sample"], dynamic_axes={ "sample": {0: "batch", 1: "channels", 2: "height", 3: "width"}, }, opset=opset, ) # VAE DECODER vae_decoder = pipeline.vae vae_latent_channels = vae_decoder.config.latent_channels vae_out_channels = vae_decoder.config.out_channels # forward only through the decoder part vae_decoder.forward = vae_encoder.decode onnx_export( vae_decoder, model_args=( torch.randn(1, vae_latent_channels, unet_sample_size, unet_sample_size).to(device=device, dtype=dtype), False, ), output_path=output_path / "vae_decoder" / "model.onnx", ordered_input_names=["latent_sample", "return_dict"], output_names=["sample"], dynamic_axes={ "latent_sample": {0: "batch", 1: "channels", 2: "height", 3: "width"}, }, opset=opset, ) del pipeline.vae # SAFETY CHECKER if pipeline.safety_checker is not None: safety_checker = pipeline.safety_checker clip_num_channels = safety_checker.config.vision_config.num_channels clip_image_size = safety_checker.config.vision_config.image_size safety_checker.forward = safety_checker.forward_onnx onnx_export( pipeline.safety_checker, model_args=( torch.randn( 1, clip_num_channels, clip_image_size, clip_image_size, ).to(device=device, dtype=dtype), torch.randn(1, vae_sample_size, vae_sample_size, vae_out_channels).to(device=device, dtype=dtype), ), output_path=output_path / "safety_checker" / "model.onnx", ordered_input_names=["clip_input", "images"], output_names=["out_images", "has_nsfw_concepts"], dynamic_axes={ "clip_input": {0: "batch", 1: "channels", 2: "height", 3: "width"}, "images": {0: "batch", 1: "height", 2: "width", 3: "channels"}, }, opset=opset, ) del pipeline.safety_checker safety_checker = OnnxRuntimeModel.from_pretrained(output_path / "safety_checker") feature_extractor = pipeline.feature_extractor else: safety_checker = None feature_extractor = None onnx_pipeline = OnnxStableDiffusionPipeline( vae_encoder=OnnxRuntimeModel.from_pretrained(output_path / "vae_encoder"), vae_decoder=OnnxRuntimeModel.from_pretrained(output_path / "vae_decoder"), text_encoder=OnnxRuntimeModel.from_pretrained(output_path / "text_encoder"), tokenizer=pipeline.tokenizer, unet=OnnxRuntimeModel.from_pretrained(output_path 
/ "unet"), scheduler=pipeline.scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, requires_safety_checker=safety_checker is not None, ) onnx_pipeline.save_pretrained(output_path) print("ONNX pipeline saved to", output_path) del pipeline del onnx_pipeline _ = OnnxStableDiffusionPipeline.from_pretrained(output_path, provider="CPUExecutionProvider") print("ONNX pipeline is loadable") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--model_path", type=str, required=True, help="Path to the `diffusers` checkpoint to convert (either a local directory or on the Hub).", ) parser.add_argument("--output_path", type=str, required=True, help="Path to the output model.") parser.add_argument( "--opset", default=14, type=int, help="The version of the ONNX operator set to use.", ) parser.add_argument("--fp16", action="store_true", default=False, help="Export the models in `float16` mode") args = parser.parse_args() convert_models(args.model_path, args.output_path, args.opset, args.fp16)
diffusers/scripts/convert_stable_diffusion_checkpoint_to_onnx.py/0
{ "file_path": "diffusers/scripts/convert_stable_diffusion_checkpoint_to_onnx.py", "repo_id": "diffusers", "token_count": 4384 }
221
__version__ = "0.28.0.dev0" from typing import TYPE_CHECKING from .utils import ( DIFFUSERS_SLOW_IMPORT, OptionalDependencyNotAvailable, _LazyModule, is_flax_available, is_k_diffusion_available, is_librosa_available, is_note_seq_available, is_onnx_available, is_scipy_available, is_torch_available, is_torchsde_available, is_transformers_available, ) # Lazy Import based on # https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py # When adding a new object to this init, please add it to `_import_structure`. The `_import_structure` is a dictionary submodule to list of object names, # and is used to defer the actual importing for when the objects are requested. # This way `import diffusers` provides the names in the namespace without actually importing anything (and especially none of the backends). _import_structure = { "configuration_utils": ["ConfigMixin"], "loaders": ["FromOriginalModelMixin"], "models": [], "pipelines": [], "schedulers": [], "utils": [ "OptionalDependencyNotAvailable", "is_flax_available", "is_inflect_available", "is_invisible_watermark_available", "is_k_diffusion_available", "is_k_diffusion_version", "is_librosa_available", "is_note_seq_available", "is_onnx_available", "is_scipy_available", "is_torch_available", "is_torchsde_available", "is_transformers_available", "is_transformers_version", "is_unidecode_available", "logging", ], } try: if not is_onnx_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_onnx_objects # noqa F403 _import_structure["utils.dummy_onnx_objects"] = [ name for name in dir(dummy_onnx_objects) if not name.startswith("_") ] else: _import_structure["pipelines"].extend(["OnnxRuntimeModel"]) try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_pt_objects # noqa F403 _import_structure["utils.dummy_pt_objects"] = [name for name in dir(dummy_pt_objects) if not name.startswith("_")] else: _import_structure["models"].extend( [ "AsymmetricAutoencoderKL", "AutoencoderKL", "AutoencoderKLTemporalDecoder", "AutoencoderTiny", "ConsistencyDecoderVAE", "ControlNetModel", "ControlNetXSAdapter", "I2VGenXLUNet", "Kandinsky3UNet", "ModelMixin", "MotionAdapter", "MultiAdapter", "PriorTransformer", "StableCascadeUNet", "T2IAdapter", "T5FilmDecoder", "Transformer2DModel", "UNet1DModel", "UNet2DConditionModel", "UNet2DModel", "UNet3DConditionModel", "UNetControlNetXSModel", "UNetMotionModel", "UNetSpatioTemporalConditionModel", "UVit2DModel", "VQModel", ] ) _import_structure["optimization"] = [ "get_constant_schedule", "get_constant_schedule_with_warmup", "get_cosine_schedule_with_warmup", "get_cosine_with_hard_restarts_schedule_with_warmup", "get_linear_schedule_with_warmup", "get_polynomial_decay_schedule_with_warmup", "get_scheduler", ] _import_structure["pipelines"].extend( [ "AudioPipelineOutput", "AutoPipelineForImage2Image", "AutoPipelineForInpainting", "AutoPipelineForText2Image", "ConsistencyModelPipeline", "DanceDiffusionPipeline", "DDIMPipeline", "DDPMPipeline", "DiffusionPipeline", "DiTPipeline", "ImagePipelineOutput", "KarrasVePipeline", "LDMPipeline", "LDMSuperResolutionPipeline", "PNDMPipeline", "RePaintPipeline", "ScoreSdeVePipeline", "StableDiffusionMixin", ] ) _import_structure["schedulers"].extend( [ "AmusedScheduler", "CMStochasticIterativeScheduler", "DDIMInverseScheduler", "DDIMParallelScheduler", "DDIMScheduler", "DDPMParallelScheduler", "DDPMScheduler", 
"DDPMWuerstchenScheduler", "DEISMultistepScheduler", "DPMSolverMultistepInverseScheduler", "DPMSolverMultistepScheduler", "DPMSolverSinglestepScheduler", "EDMDPMSolverMultistepScheduler", "EDMEulerScheduler", "EulerAncestralDiscreteScheduler", "EulerDiscreteScheduler", "HeunDiscreteScheduler", "IPNDMScheduler", "KarrasVeScheduler", "KDPM2AncestralDiscreteScheduler", "KDPM2DiscreteScheduler", "LCMScheduler", "PNDMScheduler", "RePaintScheduler", "SASolverScheduler", "SchedulerMixin", "ScoreSdeVeScheduler", "TCDScheduler", "UnCLIPScheduler", "UniPCMultistepScheduler", "VQDiffusionScheduler", ] ) _import_structure["training_utils"] = ["EMAModel"] try: if not (is_torch_available() and is_scipy_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_torch_and_scipy_objects # noqa F403 _import_structure["utils.dummy_torch_and_scipy_objects"] = [ name for name in dir(dummy_torch_and_scipy_objects) if not name.startswith("_") ] else: _import_structure["schedulers"].extend(["LMSDiscreteScheduler"]) try: if not (is_torch_available() and is_torchsde_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_torch_and_torchsde_objects # noqa F403 _import_structure["utils.dummy_torch_and_torchsde_objects"] = [ name for name in dir(dummy_torch_and_torchsde_objects) if not name.startswith("_") ] else: _import_structure["schedulers"].extend(["DPMSolverSDEScheduler"]) try: if not (is_torch_available() and is_transformers_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_torch_and_transformers_objects # noqa F403 _import_structure["utils.dummy_torch_and_transformers_objects"] = [ name for name in dir(dummy_torch_and_transformers_objects) if not name.startswith("_") ] else: _import_structure["pipelines"].extend( [ "AltDiffusionImg2ImgPipeline", "AltDiffusionPipeline", "AmusedImg2ImgPipeline", "AmusedInpaintPipeline", "AmusedPipeline", "AnimateDiffPipeline", "AnimateDiffSDXLPipeline", "AnimateDiffVideoToVideoPipeline", "AudioLDM2Pipeline", "AudioLDM2ProjectionModel", "AudioLDM2UNet2DConditionModel", "AudioLDMPipeline", "BlipDiffusionControlNetPipeline", "BlipDiffusionPipeline", "CLIPImageProjection", "CycleDiffusionPipeline", "I2VGenXLPipeline", "IFImg2ImgPipeline", "IFImg2ImgSuperResolutionPipeline", "IFInpaintingPipeline", "IFInpaintingSuperResolutionPipeline", "IFPipeline", "IFSuperResolutionPipeline", "ImageTextPipelineOutput", "Kandinsky3Img2ImgPipeline", "Kandinsky3Pipeline", "KandinskyCombinedPipeline", "KandinskyImg2ImgCombinedPipeline", "KandinskyImg2ImgPipeline", "KandinskyInpaintCombinedPipeline", "KandinskyInpaintPipeline", "KandinskyPipeline", "KandinskyPriorPipeline", "KandinskyV22CombinedPipeline", "KandinskyV22ControlnetImg2ImgPipeline", "KandinskyV22ControlnetPipeline", "KandinskyV22Img2ImgCombinedPipeline", "KandinskyV22Img2ImgPipeline", "KandinskyV22InpaintCombinedPipeline", "KandinskyV22InpaintPipeline", "KandinskyV22Pipeline", "KandinskyV22PriorEmb2EmbPipeline", "KandinskyV22PriorPipeline", "LatentConsistencyModelImg2ImgPipeline", "LatentConsistencyModelPipeline", "LDMTextToImagePipeline", "LEditsPPPipelineStableDiffusion", "LEditsPPPipelineStableDiffusionXL", "MusicLDMPipeline", "PaintByExamplePipeline", "PIAPipeline", "PixArtAlphaPipeline", "PixArtSigmaPipeline", "SemanticStableDiffusionPipeline", "ShapEImg2ImgPipeline", "ShapEPipeline", "StableCascadeCombinedPipeline", "StableCascadeDecoderPipeline", 
"StableCascadePriorPipeline", "StableDiffusionAdapterPipeline", "StableDiffusionAttendAndExcitePipeline", "StableDiffusionControlNetImg2ImgPipeline", "StableDiffusionControlNetInpaintPipeline", "StableDiffusionControlNetPipeline", "StableDiffusionControlNetXSPipeline", "StableDiffusionDepth2ImgPipeline", "StableDiffusionDiffEditPipeline", "StableDiffusionGLIGENPipeline", "StableDiffusionGLIGENTextImagePipeline", "StableDiffusionImageVariationPipeline", "StableDiffusionImg2ImgPipeline", "StableDiffusionInpaintPipeline", "StableDiffusionInpaintPipelineLegacy", "StableDiffusionInstructPix2PixPipeline", "StableDiffusionLatentUpscalePipeline", "StableDiffusionLDM3DPipeline", "StableDiffusionModelEditingPipeline", "StableDiffusionPanoramaPipeline", "StableDiffusionParadigmsPipeline", "StableDiffusionPipeline", "StableDiffusionPipelineSafe", "StableDiffusionPix2PixZeroPipeline", "StableDiffusionSAGPipeline", "StableDiffusionUpscalePipeline", "StableDiffusionXLAdapterPipeline", "StableDiffusionXLControlNetImg2ImgPipeline", "StableDiffusionXLControlNetInpaintPipeline", "StableDiffusionXLControlNetPipeline", "StableDiffusionXLControlNetXSPipeline", "StableDiffusionXLImg2ImgPipeline", "StableDiffusionXLInpaintPipeline", "StableDiffusionXLInstructPix2PixPipeline", "StableDiffusionXLPipeline", "StableUnCLIPImg2ImgPipeline", "StableUnCLIPPipeline", "StableVideoDiffusionPipeline", "TextToVideoSDPipeline", "TextToVideoZeroPipeline", "TextToVideoZeroSDXLPipeline", "UnCLIPImageVariationPipeline", "UnCLIPPipeline", "UniDiffuserModel", "UniDiffuserPipeline", "UniDiffuserTextDecoder", "VersatileDiffusionDualGuidedPipeline", "VersatileDiffusionImageVariationPipeline", "VersatileDiffusionPipeline", "VersatileDiffusionTextToImagePipeline", "VideoToVideoSDPipeline", "VQDiffusionPipeline", "WuerstchenCombinedPipeline", "WuerstchenDecoderPipeline", "WuerstchenPriorPipeline", ] ) try: if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_torch_and_transformers_and_k_diffusion_objects # noqa F403 _import_structure["utils.dummy_torch_and_transformers_and_k_diffusion_objects"] = [ name for name in dir(dummy_torch_and_transformers_and_k_diffusion_objects) if not name.startswith("_") ] else: _import_structure["pipelines"].extend(["StableDiffusionKDiffusionPipeline", "StableDiffusionXLKDiffusionPipeline"]) try: if not (is_torch_available() and is_transformers_available() and is_onnx_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_torch_and_transformers_and_onnx_objects # noqa F403 _import_structure["utils.dummy_torch_and_transformers_and_onnx_objects"] = [ name for name in dir(dummy_torch_and_transformers_and_onnx_objects) if not name.startswith("_") ] else: _import_structure["pipelines"].extend( [ "OnnxStableDiffusionImg2ImgPipeline", "OnnxStableDiffusionInpaintPipeline", "OnnxStableDiffusionInpaintPipelineLegacy", "OnnxStableDiffusionPipeline", "OnnxStableDiffusionUpscalePipeline", "StableDiffusionOnnxPipeline", ] ) try: if not (is_torch_available() and is_librosa_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_torch_and_librosa_objects # noqa F403 _import_structure["utils.dummy_torch_and_librosa_objects"] = [ name for name in dir(dummy_torch_and_librosa_objects) if not name.startswith("_") ] else: 
_import_structure["pipelines"].extend(["AudioDiffusionPipeline", "Mel"]) try: if not (is_transformers_available() and is_torch_available() and is_note_seq_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_transformers_and_torch_and_note_seq_objects # noqa F403 _import_structure["utils.dummy_transformers_and_torch_and_note_seq_objects"] = [ name for name in dir(dummy_transformers_and_torch_and_note_seq_objects) if not name.startswith("_") ] else: _import_structure["pipelines"].extend(["SpectrogramDiffusionPipeline"]) try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_flax_objects # noqa F403 _import_structure["utils.dummy_flax_objects"] = [ name for name in dir(dummy_flax_objects) if not name.startswith("_") ] else: _import_structure["models.controlnet_flax"] = ["FlaxControlNetModel"] _import_structure["models.modeling_flax_utils"] = ["FlaxModelMixin"] _import_structure["models.unets.unet_2d_condition_flax"] = ["FlaxUNet2DConditionModel"] _import_structure["models.vae_flax"] = ["FlaxAutoencoderKL"] _import_structure["pipelines"].extend(["FlaxDiffusionPipeline"]) _import_structure["schedulers"].extend( [ "FlaxDDIMScheduler", "FlaxDDPMScheduler", "FlaxDPMSolverMultistepScheduler", "FlaxEulerDiscreteScheduler", "FlaxKarrasVeScheduler", "FlaxLMSDiscreteScheduler", "FlaxPNDMScheduler", "FlaxSchedulerMixin", "FlaxScoreSdeVeScheduler", ] ) try: if not (is_flax_available() and is_transformers_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_flax_and_transformers_objects # noqa F403 _import_structure["utils.dummy_flax_and_transformers_objects"] = [ name for name in dir(dummy_flax_and_transformers_objects) if not name.startswith("_") ] else: _import_structure["pipelines"].extend( [ "FlaxStableDiffusionControlNetPipeline", "FlaxStableDiffusionImg2ImgPipeline", "FlaxStableDiffusionInpaintPipeline", "FlaxStableDiffusionPipeline", "FlaxStableDiffusionXLPipeline", ] ) try: if not (is_note_seq_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_note_seq_objects # noqa F403 _import_structure["utils.dummy_note_seq_objects"] = [ name for name in dir(dummy_note_seq_objects) if not name.startswith("_") ] else: _import_structure["pipelines"].extend(["MidiProcessor"]) if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT: from .configuration_utils import ConfigMixin try: if not is_onnx_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_onnx_objects import * # noqa F403 else: from .pipelines import OnnxRuntimeModel try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_pt_objects import * # noqa F403 else: from .models import ( AsymmetricAutoencoderKL, AutoencoderKL, AutoencoderKLTemporalDecoder, AutoencoderTiny, ConsistencyDecoderVAE, ControlNetModel, ControlNetXSAdapter, I2VGenXLUNet, Kandinsky3UNet, ModelMixin, MotionAdapter, MultiAdapter, PriorTransformer, T2IAdapter, T5FilmDecoder, Transformer2DModel, UNet1DModel, UNet2DConditionModel, UNet2DModel, UNet3DConditionModel, UNetControlNetXSModel, UNetMotionModel, UNetSpatioTemporalConditionModel, UVit2DModel, VQModel, ) from .optimization import ( get_constant_schedule, get_constant_schedule_with_warmup, get_cosine_schedule_with_warmup, 
get_cosine_with_hard_restarts_schedule_with_warmup, get_linear_schedule_with_warmup, get_polynomial_decay_schedule_with_warmup, get_scheduler, ) from .pipelines import ( AudioPipelineOutput, AutoPipelineForImage2Image, AutoPipelineForInpainting, AutoPipelineForText2Image, BlipDiffusionControlNetPipeline, BlipDiffusionPipeline, CLIPImageProjection, ConsistencyModelPipeline, DanceDiffusionPipeline, DDIMPipeline, DDPMPipeline, DiffusionPipeline, DiTPipeline, ImagePipelineOutput, KarrasVePipeline, LDMPipeline, LDMSuperResolutionPipeline, PNDMPipeline, RePaintPipeline, ScoreSdeVePipeline, StableDiffusionMixin, ) from .schedulers import ( AmusedScheduler, CMStochasticIterativeScheduler, DDIMInverseScheduler, DDIMParallelScheduler, DDIMScheduler, DDPMParallelScheduler, DDPMScheduler, DDPMWuerstchenScheduler, DEISMultistepScheduler, DPMSolverMultistepInverseScheduler, DPMSolverMultistepScheduler, DPMSolverSinglestepScheduler, EDMDPMSolverMultistepScheduler, EDMEulerScheduler, EulerAncestralDiscreteScheduler, EulerDiscreteScheduler, HeunDiscreteScheduler, IPNDMScheduler, KarrasVeScheduler, KDPM2AncestralDiscreteScheduler, KDPM2DiscreteScheduler, LCMScheduler, PNDMScheduler, RePaintScheduler, SASolverScheduler, SchedulerMixin, ScoreSdeVeScheduler, TCDScheduler, UnCLIPScheduler, UniPCMultistepScheduler, VQDiffusionScheduler, ) from .training_utils import EMAModel try: if not (is_torch_available() and is_scipy_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_torch_and_scipy_objects import * # noqa F403 else: from .schedulers import LMSDiscreteScheduler try: if not (is_torch_available() and is_torchsde_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_torch_and_torchsde_objects import * # noqa F403 else: from .schedulers import DPMSolverSDEScheduler try: if not (is_torch_available() and is_transformers_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_torch_and_transformers_objects import * # noqa F403 else: from .pipelines import ( AltDiffusionImg2ImgPipeline, AltDiffusionPipeline, AmusedImg2ImgPipeline, AmusedInpaintPipeline, AmusedPipeline, AnimateDiffPipeline, AnimateDiffSDXLPipeline, AnimateDiffVideoToVideoPipeline, AudioLDM2Pipeline, AudioLDM2ProjectionModel, AudioLDM2UNet2DConditionModel, AudioLDMPipeline, CLIPImageProjection, CycleDiffusionPipeline, I2VGenXLPipeline, IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, IFPipeline, IFSuperResolutionPipeline, ImageTextPipelineOutput, Kandinsky3Img2ImgPipeline, Kandinsky3Pipeline, KandinskyCombinedPipeline, KandinskyImg2ImgCombinedPipeline, KandinskyImg2ImgPipeline, KandinskyInpaintCombinedPipeline, KandinskyInpaintPipeline, KandinskyPipeline, KandinskyPriorPipeline, KandinskyV22CombinedPipeline, KandinskyV22ControlnetImg2ImgPipeline, KandinskyV22ControlnetPipeline, KandinskyV22Img2ImgCombinedPipeline, KandinskyV22Img2ImgPipeline, KandinskyV22InpaintCombinedPipeline, KandinskyV22InpaintPipeline, KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline, KandinskyV22PriorPipeline, LatentConsistencyModelImg2ImgPipeline, LatentConsistencyModelPipeline, LDMTextToImagePipeline, LEditsPPPipelineStableDiffusion, LEditsPPPipelineStableDiffusionXL, MusicLDMPipeline, PaintByExamplePipeline, PIAPipeline, PixArtAlphaPipeline, PixArtSigmaPipeline, SemanticStableDiffusionPipeline, ShapEImg2ImgPipeline, ShapEPipeline, 
StableCascadeCombinedPipeline, StableCascadeDecoderPipeline, StableCascadePriorPipeline, StableDiffusionAdapterPipeline, StableDiffusionAttendAndExcitePipeline, StableDiffusionControlNetImg2ImgPipeline, StableDiffusionControlNetInpaintPipeline, StableDiffusionControlNetPipeline, StableDiffusionControlNetXSPipeline, StableDiffusionDepth2ImgPipeline, StableDiffusionDiffEditPipeline, StableDiffusionGLIGENPipeline, StableDiffusionGLIGENTextImagePipeline, StableDiffusionImageVariationPipeline, StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline, StableDiffusionInpaintPipelineLegacy, StableDiffusionInstructPix2PixPipeline, StableDiffusionLatentUpscalePipeline, StableDiffusionLDM3DPipeline, StableDiffusionModelEditingPipeline, StableDiffusionPanoramaPipeline, StableDiffusionParadigmsPipeline, StableDiffusionPipeline, StableDiffusionPipelineSafe, StableDiffusionPix2PixZeroPipeline, StableDiffusionSAGPipeline, StableDiffusionUpscalePipeline, StableDiffusionXLAdapterPipeline, StableDiffusionXLControlNetImg2ImgPipeline, StableDiffusionXLControlNetInpaintPipeline, StableDiffusionXLControlNetPipeline, StableDiffusionXLControlNetXSPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, StableDiffusionXLInstructPix2PixPipeline, StableDiffusionXLPipeline, StableUnCLIPImg2ImgPipeline, StableUnCLIPPipeline, StableVideoDiffusionPipeline, TextToVideoSDPipeline, TextToVideoZeroPipeline, TextToVideoZeroSDXLPipeline, UnCLIPImageVariationPipeline, UnCLIPPipeline, UniDiffuserModel, UniDiffuserPipeline, UniDiffuserTextDecoder, VersatileDiffusionDualGuidedPipeline, VersatileDiffusionImageVariationPipeline, VersatileDiffusionPipeline, VersatileDiffusionTextToImagePipeline, VideoToVideoSDPipeline, VQDiffusionPipeline, WuerstchenCombinedPipeline, WuerstchenDecoderPipeline, WuerstchenPriorPipeline, ) try: if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403 else: from .pipelines import StableDiffusionKDiffusionPipeline, StableDiffusionXLKDiffusionPipeline try: if not (is_torch_available() and is_transformers_available() and is_onnx_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403 else: from .pipelines import ( OnnxStableDiffusionImg2ImgPipeline, OnnxStableDiffusionInpaintPipeline, OnnxStableDiffusionInpaintPipelineLegacy, OnnxStableDiffusionPipeline, OnnxStableDiffusionUpscalePipeline, StableDiffusionOnnxPipeline, ) try: if not (is_torch_available() and is_librosa_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_torch_and_librosa_objects import * # noqa F403 else: from .pipelines import AudioDiffusionPipeline, Mel try: if not (is_transformers_available() and is_torch_available() and is_note_seq_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403 else: from .pipelines import SpectrogramDiffusionPipeline try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_flax_objects import * # noqa F403 else: from .models.controlnet_flax import FlaxControlNetModel from .models.modeling_flax_utils import FlaxModelMixin from 
.models.unets.unet_2d_condition_flax import FlaxUNet2DConditionModel from .models.vae_flax import FlaxAutoencoderKL from .pipelines import FlaxDiffusionPipeline from .schedulers import ( FlaxDDIMScheduler, FlaxDDPMScheduler, FlaxDPMSolverMultistepScheduler, FlaxEulerDiscreteScheduler, FlaxKarrasVeScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, FlaxSchedulerMixin, FlaxScoreSdeVeScheduler, ) try: if not (is_flax_available() and is_transformers_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_flax_and_transformers_objects import * # noqa F403 else: from .pipelines import ( FlaxStableDiffusionControlNetPipeline, FlaxStableDiffusionImg2ImgPipeline, FlaxStableDiffusionInpaintPipeline, FlaxStableDiffusionPipeline, FlaxStableDiffusionXLPipeline, ) try: if not (is_note_seq_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_note_seq_objects import * # noqa F403 else: from .pipelines import MidiProcessor else: import sys sys.modules[__name__] = _LazyModule( __name__, globals()["__file__"], _import_structure, module_spec=__spec__, extra_objects={"__version__": __version__}, )
diffusers/src/diffusers/__init__.py/0
{ "file_path": "diffusers/src/diffusers/__init__.py", "repo_id": "diffusers", "token_count": 14374 }
222
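The `__init__.py` above registers every public class in `_import_structure` behind optional-dependency guards and only materializes modules lazily through `_LazyModule`. Below is a minimal, self-contained sketch of that guard pattern; `OptionalDependencyNotAvailable` and `is_torch_available` are redefined locally for illustration rather than imported from diffusers' internal utils.

```python
# Minimal sketch of the optional-dependency guard pattern used in diffusers/__init__.py.
# The exception class and availability check are local stand-ins for the real helpers
# in diffusers.utils; only the control flow is meant to match.
import importlib.util


class OptionalDependencyNotAvailable(ImportError):
    """Raised when a soft dependency needed by a group of objects is missing."""


def is_torch_available() -> bool:
    # Stand-in check: is the package importable at all?
    return importlib.util.find_spec("torch") is not None


_import_structure = {"models": [], "pipelines": []}

try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    # Point the lazy importer at dummy objects that raise a helpful error on use.
    _import_structure["utils.dummy_pt_objects"] = ["UNet2DConditionModel"]
else:
    # The dependency is present, so the real class is exposed (lazily) instead.
    _import_structure["models"].append("UNet2DConditionModel")

print(_import_structure)
```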
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from huggingface_hub.utils import validate_hf_hub_args from .single_file_utils import ( create_diffusers_controlnet_model_from_ldm, fetch_ldm_config_and_checkpoint, ) class FromOriginalControlNetMixin: """ Load pretrained ControlNet weights saved in the `.ckpt` or `.safetensors` format into a [`ControlNetModel`]. """ @classmethod @validate_hf_hub_args def from_single_file(cls, pretrained_model_link_or_path, **kwargs): r""" Instantiate a [`ControlNetModel`] from pretrained ControlNet weights saved in the original `.ckpt` or `.safetensors` format. The pipeline is set in evaluation mode (`model.eval()`) by default. Parameters: pretrained_model_link_or_path (`str` or `os.PathLike`, *optional*): Can be either: - A link to the `.ckpt` file (for example `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt"`) on the Hub. - A path to a *file* containing all pipeline weights. config_file (`str`, *optional*): Filepath to the configuration YAML file associated with the model. If not provided it will default to: https://raw.githubusercontent.com/lllyasviel/ControlNet/main/models/cldm_v15.yaml torch_dtype (`str` or `torch.dtype`, *optional*): Override the default `torch.dtype` and load the model with another dtype. If `"auto"` is passed, the dtype is automatically derived from the model's weights. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. resume_download: Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v1 of Diffusers. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. image_size (`int`, *optional*, defaults to 512): The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (`bool`, *optional*, defaults to `None`): Whether the attention computation should always be upcasted. 
kwargs (remaining dictionary of keyword arguments, *optional*): Can be used to overwrite load and saveable variables (for example the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipelines `__init__` method. See example below for more information. Examples: ```py from diffusers import StableDiffusionControlNetPipeline, ControlNetModel url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path model = ControlNetModel.from_single_file(url) url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ``` """ original_config_file = kwargs.pop("original_config_file", None) config_file = kwargs.pop("config_file", None) resume_download = kwargs.pop("resume_download", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) token = kwargs.pop("token", None) cache_dir = kwargs.pop("cache_dir", None) local_files_only = kwargs.pop("local_files_only", None) revision = kwargs.pop("revision", None) torch_dtype = kwargs.pop("torch_dtype", None) class_name = cls.__name__ if (config_file is not None) and (original_config_file is not None): raise ValueError( "You cannot pass both `config_file` and `original_config_file` to `from_single_file`. Please use only one of these arguments." ) original_config_file = config_file or original_config_file original_config, checkpoint = fetch_ldm_config_and_checkpoint( pretrained_model_link_or_path=pretrained_model_link_or_path, class_name=class_name, original_config_file=original_config_file, resume_download=resume_download, force_download=force_download, proxies=proxies, token=token, revision=revision, local_files_only=local_files_only, cache_dir=cache_dir, ) upcast_attention = kwargs.pop("upcast_attention", False) image_size = kwargs.pop("image_size", None) component = create_diffusers_controlnet_model_from_ldm( class_name, original_config, checkpoint, upcast_attention=upcast_attention, image_size=image_size, torch_dtype=torch_dtype, ) controlnet = component["controlnet"] if torch_dtype is not None: controlnet = controlnet.to(torch_dtype) return controlnet
diffusers/src/diffusers/loaders/controlnet.py/0
{ "file_path": "diffusers/src/diffusers/loaders/controlnet.py", "repo_id": "diffusers", "token_count": 2877 }
223
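Note that the docstring example in `FromOriginalControlNetMixin.from_single_file` binds the loaded model to `model` but then passes an undefined `controlnet` to the pipeline. The sketch below shows a corrected usage; the checkpoint URL mirrors the one in the docstring and can also be a local path, and weights are downloaded at call time.

```python
# Usage sketch for from_single_file: load a ControlNet from an original .pth
# checkpoint, then hand it to a ControlNet pipeline.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth"
controlnet = ControlNetModel.from_single_file(url, torch_dtype=torch.float16)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```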
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Any, Dict, Optional import torch import torch.nn.functional as F from torch import nn from ..utils import deprecate, logging from ..utils.torch_utils import maybe_allow_in_graph from .activations import GEGLU, GELU, ApproximateGELU from .attention_processor import Attention from .embeddings import SinusoidalPositionalEmbedding from .normalization import AdaLayerNorm, AdaLayerNormContinuous, AdaLayerNormZero, RMSNorm logger = logging.get_logger(__name__) def _chunked_feed_forward(ff: nn.Module, hidden_states: torch.Tensor, chunk_dim: int, chunk_size: int): # "feed_forward_chunk_size" can be used to save memory if hidden_states.shape[chunk_dim] % chunk_size != 0: raise ValueError( f"`hidden_states` dimension to be chunked: {hidden_states.shape[chunk_dim]} has to be divisible by chunk size: {chunk_size}. Make sure to set an appropriate `chunk_size` when calling `unet.enable_forward_chunking`." ) num_chunks = hidden_states.shape[chunk_dim] // chunk_size ff_output = torch.cat( [ff(hid_slice) for hid_slice in hidden_states.chunk(num_chunks, dim=chunk_dim)], dim=chunk_dim, ) return ff_output @maybe_allow_in_graph class GatedSelfAttentionDense(nn.Module): r""" A gated self-attention dense layer that combines visual features and object features. Parameters: query_dim (`int`): The number of channels in the query. context_dim (`int`): The number of channels in the context. n_heads (`int`): The number of heads to use for attention. d_head (`int`): The number of channels in each head. """ def __init__(self, query_dim: int, context_dim: int, n_heads: int, d_head: int): super().__init__() # we need a linear projection since we need cat visual feature and obj feature self.linear = nn.Linear(context_dim, query_dim) self.attn = Attention(query_dim=query_dim, heads=n_heads, dim_head=d_head) self.ff = FeedForward(query_dim, activation_fn="geglu") self.norm1 = nn.LayerNorm(query_dim) self.norm2 = nn.LayerNorm(query_dim) self.register_parameter("alpha_attn", nn.Parameter(torch.tensor(0.0))) self.register_parameter("alpha_dense", nn.Parameter(torch.tensor(0.0))) self.enabled = True def forward(self, x: torch.Tensor, objs: torch.Tensor) -> torch.Tensor: if not self.enabled: return x n_visual = x.shape[1] objs = self.linear(objs) x = x + self.alpha_attn.tanh() * self.attn(self.norm1(torch.cat([x, objs], dim=1)))[:, :n_visual, :] x = x + self.alpha_dense.tanh() * self.ff(self.norm2(x)) return x @maybe_allow_in_graph class BasicTransformerBlock(nn.Module): r""" A basic Transformer block. Parameters: dim (`int`): The number of channels in the input and output. num_attention_heads (`int`): The number of heads to use for multi-head attention. attention_head_dim (`int`): The number of channels in each head. dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. cross_attention_dim (`int`, *optional*): The size of the encoder_hidden_states vector for cross attention. 
activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. num_embeds_ada_norm (: obj: `int`, *optional*): The number of diffusion steps used during training. See `Transformer2DModel`. attention_bias (: obj: `bool`, *optional*, defaults to `False`): Configure if the attentions should contain a bias parameter. only_cross_attention (`bool`, *optional*): Whether to use only cross-attention layers. In this case two cross attention layers are used. double_self_attention (`bool`, *optional*): Whether to use two self-attention layers. In this case no cross attention layers are used. upcast_attention (`bool`, *optional*): Whether to upcast the attention computation to float32. This is useful for mixed precision training. norm_elementwise_affine (`bool`, *optional*, defaults to `True`): Whether to use learnable elementwise affine parameters for normalization. norm_type (`str`, *optional*, defaults to `"layer_norm"`): The normalization layer to use. Can be `"layer_norm"`, `"ada_norm"` or `"ada_norm_zero"`. final_dropout (`bool` *optional*, defaults to False): Whether to apply a final dropout after the last feed-forward layer. attention_type (`str`, *optional*, defaults to `"default"`): The type of attention to use. Can be `"default"` or `"gated"` or `"gated-text-image"`. positional_embeddings (`str`, *optional*, defaults to `None`): The type of positional embeddings to apply to. num_positional_embeddings (`int`, *optional*, defaults to `None`): The maximum number of positional embeddings to apply. """ def __init__( self, dim: int, num_attention_heads: int, attention_head_dim: int, dropout=0.0, cross_attention_dim: Optional[int] = None, activation_fn: str = "geglu", num_embeds_ada_norm: Optional[int] = None, attention_bias: bool = False, only_cross_attention: bool = False, double_self_attention: bool = False, upcast_attention: bool = False, norm_elementwise_affine: bool = True, norm_type: str = "layer_norm", # 'layer_norm', 'ada_norm', 'ada_norm_zero', 'ada_norm_single', 'ada_norm_continuous', 'layer_norm_i2vgen' norm_eps: float = 1e-5, final_dropout: bool = False, attention_type: str = "default", positional_embeddings: Optional[str] = None, num_positional_embeddings: Optional[int] = None, ada_norm_continous_conditioning_embedding_dim: Optional[int] = None, ada_norm_bias: Optional[int] = None, ff_inner_dim: Optional[int] = None, ff_bias: bool = True, attention_out_bias: bool = True, ): super().__init__() self.only_cross_attention = only_cross_attention # We keep these boolean flags for backward-compatibility. self.use_ada_layer_norm_zero = (num_embeds_ada_norm is not None) and norm_type == "ada_norm_zero" self.use_ada_layer_norm = (num_embeds_ada_norm is not None) and norm_type == "ada_norm" self.use_ada_layer_norm_single = norm_type == "ada_norm_single" self.use_layer_norm = norm_type == "layer_norm" self.use_ada_layer_norm_continuous = norm_type == "ada_norm_continuous" if norm_type in ("ada_norm", "ada_norm_zero") and num_embeds_ada_norm is None: raise ValueError( f"`norm_type` is set to {norm_type}, but `num_embeds_ada_norm` is not defined. Please make sure to" f" define `num_embeds_ada_norm` if setting `norm_type` to {norm_type}." ) self.norm_type = norm_type self.num_embeds_ada_norm = num_embeds_ada_norm if positional_embeddings and (num_positional_embeddings is None): raise ValueError( "If `positional_embedding` type is defined, `num_positition_embeddings` must also be defined." 
) if positional_embeddings == "sinusoidal": self.pos_embed = SinusoidalPositionalEmbedding(dim, max_seq_length=num_positional_embeddings) else: self.pos_embed = None # Define 3 blocks. Each block has its own normalization layer. # 1. Self-Attn if norm_type == "ada_norm": self.norm1 = AdaLayerNorm(dim, num_embeds_ada_norm) elif norm_type == "ada_norm_zero": self.norm1 = AdaLayerNormZero(dim, num_embeds_ada_norm) elif norm_type == "ada_norm_continuous": self.norm1 = AdaLayerNormContinuous( dim, ada_norm_continous_conditioning_embedding_dim, norm_elementwise_affine, norm_eps, ada_norm_bias, "rms_norm", ) else: self.norm1 = nn.LayerNorm(dim, elementwise_affine=norm_elementwise_affine, eps=norm_eps) self.attn1 = Attention( query_dim=dim, heads=num_attention_heads, dim_head=attention_head_dim, dropout=dropout, bias=attention_bias, cross_attention_dim=cross_attention_dim if only_cross_attention else None, upcast_attention=upcast_attention, out_bias=attention_out_bias, ) # 2. Cross-Attn if cross_attention_dim is not None or double_self_attention: # We currently only use AdaLayerNormZero for self attention where there will only be one attention block. # I.e. the number of returned modulation chunks from AdaLayerZero would not make sense if returned during # the second cross attention block. if norm_type == "ada_norm": self.norm2 = AdaLayerNorm(dim, num_embeds_ada_norm) elif norm_type == "ada_norm_continuous": self.norm2 = AdaLayerNormContinuous( dim, ada_norm_continous_conditioning_embedding_dim, norm_elementwise_affine, norm_eps, ada_norm_bias, "rms_norm", ) else: self.norm2 = nn.LayerNorm(dim, norm_eps, norm_elementwise_affine) self.attn2 = Attention( query_dim=dim, cross_attention_dim=cross_attention_dim if not double_self_attention else None, heads=num_attention_heads, dim_head=attention_head_dim, dropout=dropout, bias=attention_bias, upcast_attention=upcast_attention, out_bias=attention_out_bias, ) # is self-attn if encoder_hidden_states is none else: self.norm2 = None self.attn2 = None # 3. Feed-forward if norm_type == "ada_norm_continuous": self.norm3 = AdaLayerNormContinuous( dim, ada_norm_continous_conditioning_embedding_dim, norm_elementwise_affine, norm_eps, ada_norm_bias, "layer_norm", ) elif norm_type in ["ada_norm_zero", "ada_norm", "layer_norm", "ada_norm_continuous"]: self.norm3 = nn.LayerNorm(dim, norm_eps, norm_elementwise_affine) elif norm_type == "layer_norm_i2vgen": self.norm3 = None self.ff = FeedForward( dim, dropout=dropout, activation_fn=activation_fn, final_dropout=final_dropout, inner_dim=ff_inner_dim, bias=ff_bias, ) # 4. Fuser if attention_type == "gated" or attention_type == "gated-text-image": self.fuser = GatedSelfAttentionDense(dim, cross_attention_dim, num_attention_heads, attention_head_dim) # 5. Scale-shift for PixArt-Alpha. 
if norm_type == "ada_norm_single": self.scale_shift_table = nn.Parameter(torch.randn(6, dim) / dim**0.5) # let chunk size default to None self._chunk_size = None self._chunk_dim = 0 def set_chunk_feed_forward(self, chunk_size: Optional[int], dim: int = 0): # Sets chunk feed-forward self._chunk_size = chunk_size self._chunk_dim = dim def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, timestep: Optional[torch.LongTensor] = None, cross_attention_kwargs: Dict[str, Any] = None, class_labels: Optional[torch.LongTensor] = None, added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None, ) -> torch.Tensor: if cross_attention_kwargs is not None: if cross_attention_kwargs.get("scale", None) is not None: logger.warning("Passing `scale` to `cross_attention_kwargs` is deprecated. `scale` will be ignored.") # Notice that normalization is always applied before the real computation in the following blocks. # 0. Self-Attention batch_size = hidden_states.shape[0] if self.norm_type == "ada_norm": norm_hidden_states = self.norm1(hidden_states, timestep) elif self.norm_type == "ada_norm_zero": norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1( hidden_states, timestep, class_labels, hidden_dtype=hidden_states.dtype ) elif self.norm_type in ["layer_norm", "layer_norm_i2vgen"]: norm_hidden_states = self.norm1(hidden_states) elif self.norm_type == "ada_norm_continuous": norm_hidden_states = self.norm1(hidden_states, added_cond_kwargs["pooled_text_emb"]) elif self.norm_type == "ada_norm_single": shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = ( self.scale_shift_table[None] + timestep.reshape(batch_size, 6, -1) ).chunk(6, dim=1) norm_hidden_states = self.norm1(hidden_states) norm_hidden_states = norm_hidden_states * (1 + scale_msa) + shift_msa norm_hidden_states = norm_hidden_states.squeeze(1) else: raise ValueError("Incorrect norm used") if self.pos_embed is not None: norm_hidden_states = self.pos_embed(norm_hidden_states) # 1. Prepare GLIGEN inputs cross_attention_kwargs = cross_attention_kwargs.copy() if cross_attention_kwargs is not None else {} gligen_kwargs = cross_attention_kwargs.pop("gligen", None) attn_output = self.attn1( norm_hidden_states, encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, attention_mask=attention_mask, **cross_attention_kwargs, ) if self.norm_type == "ada_norm_zero": attn_output = gate_msa.unsqueeze(1) * attn_output elif self.norm_type == "ada_norm_single": attn_output = gate_msa * attn_output hidden_states = attn_output + hidden_states if hidden_states.ndim == 4: hidden_states = hidden_states.squeeze(1) # 1.2 GLIGEN Control if gligen_kwargs is not None: hidden_states = self.fuser(hidden_states, gligen_kwargs["objs"]) # 3. 
Cross-Attention if self.attn2 is not None: if self.norm_type == "ada_norm": norm_hidden_states = self.norm2(hidden_states, timestep) elif self.norm_type in ["ada_norm_zero", "layer_norm", "layer_norm_i2vgen"]: norm_hidden_states = self.norm2(hidden_states) elif self.norm_type == "ada_norm_single": # For PixArt norm2 isn't applied here: # https://github.com/PixArt-alpha/PixArt-alpha/blob/0f55e922376d8b797edd44d25d0e7464b260dcab/diffusion/model/nets/PixArtMS.py#L70C1-L76C103 norm_hidden_states = hidden_states elif self.norm_type == "ada_norm_continuous": norm_hidden_states = self.norm2(hidden_states, added_cond_kwargs["pooled_text_emb"]) else: raise ValueError("Incorrect norm") if self.pos_embed is not None and self.norm_type != "ada_norm_single": norm_hidden_states = self.pos_embed(norm_hidden_states) attn_output = self.attn2( norm_hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=encoder_attention_mask, **cross_attention_kwargs, ) hidden_states = attn_output + hidden_states # 4. Feed-forward # i2vgen doesn't have this norm 🤷‍♂️ if self.norm_type == "ada_norm_continuous": norm_hidden_states = self.norm3(hidden_states, added_cond_kwargs["pooled_text_emb"]) elif not self.norm_type == "ada_norm_single": norm_hidden_states = self.norm3(hidden_states) if self.norm_type == "ada_norm_zero": norm_hidden_states = norm_hidden_states * (1 + scale_mlp[:, None]) + shift_mlp[:, None] if self.norm_type == "ada_norm_single": norm_hidden_states = self.norm2(hidden_states) norm_hidden_states = norm_hidden_states * (1 + scale_mlp) + shift_mlp if self._chunk_size is not None: # "feed_forward_chunk_size" can be used to save memory ff_output = _chunked_feed_forward(self.ff, norm_hidden_states, self._chunk_dim, self._chunk_size) else: ff_output = self.ff(norm_hidden_states) if self.norm_type == "ada_norm_zero": ff_output = gate_mlp.unsqueeze(1) * ff_output elif self.norm_type == "ada_norm_single": ff_output = gate_mlp * ff_output hidden_states = ff_output + hidden_states if hidden_states.ndim == 4: hidden_states = hidden_states.squeeze(1) return hidden_states @maybe_allow_in_graph class TemporalBasicTransformerBlock(nn.Module): r""" A basic Transformer block for video like data. Parameters: dim (`int`): The number of channels in the input and output. time_mix_inner_dim (`int`): The number of channels for temporal attention. num_attention_heads (`int`): The number of heads to use for multi-head attention. attention_head_dim (`int`): The number of channels in each head. cross_attention_dim (`int`, *optional*): The size of the encoder_hidden_states vector for cross attention. """ def __init__( self, dim: int, time_mix_inner_dim: int, num_attention_heads: int, attention_head_dim: int, cross_attention_dim: Optional[int] = None, ): super().__init__() self.is_res = dim == time_mix_inner_dim self.norm_in = nn.LayerNorm(dim) # Define 3 blocks. Each block has its own normalization layer. # 1. Self-Attn self.ff_in = FeedForward( dim, dim_out=time_mix_inner_dim, activation_fn="geglu", ) self.norm1 = nn.LayerNorm(time_mix_inner_dim) self.attn1 = Attention( query_dim=time_mix_inner_dim, heads=num_attention_heads, dim_head=attention_head_dim, cross_attention_dim=None, ) # 2. Cross-Attn if cross_attention_dim is not None: # We currently only use AdaLayerNormZero for self attention where there will only be one attention block. # I.e. the number of returned modulation chunks from AdaLayerZero would not make sense if returned during # the second cross attention block. 
self.norm2 = nn.LayerNorm(time_mix_inner_dim) self.attn2 = Attention( query_dim=time_mix_inner_dim, cross_attention_dim=cross_attention_dim, heads=num_attention_heads, dim_head=attention_head_dim, ) # is self-attn if encoder_hidden_states is none else: self.norm2 = None self.attn2 = None # 3. Feed-forward self.norm3 = nn.LayerNorm(time_mix_inner_dim) self.ff = FeedForward(time_mix_inner_dim, activation_fn="geglu") # let chunk size default to None self._chunk_size = None self._chunk_dim = None def set_chunk_feed_forward(self, chunk_size: Optional[int], **kwargs): # Sets chunk feed-forward self._chunk_size = chunk_size # chunk dim should be hardcoded to 1 to have better speed vs. memory trade-off self._chunk_dim = 1 def forward( self, hidden_states: torch.Tensor, num_frames: int, encoder_hidden_states: Optional[torch.Tensor] = None, ) -> torch.Tensor: # Notice that normalization is always applied before the real computation in the following blocks. # 0. Self-Attention batch_size = hidden_states.shape[0] batch_frames, seq_length, channels = hidden_states.shape batch_size = batch_frames // num_frames hidden_states = hidden_states[None, :].reshape(batch_size, num_frames, seq_length, channels) hidden_states = hidden_states.permute(0, 2, 1, 3) hidden_states = hidden_states.reshape(batch_size * seq_length, num_frames, channels) residual = hidden_states hidden_states = self.norm_in(hidden_states) if self._chunk_size is not None: hidden_states = _chunked_feed_forward(self.ff_in, hidden_states, self._chunk_dim, self._chunk_size) else: hidden_states = self.ff_in(hidden_states) if self.is_res: hidden_states = hidden_states + residual norm_hidden_states = self.norm1(hidden_states) attn_output = self.attn1(norm_hidden_states, encoder_hidden_states=None) hidden_states = attn_output + hidden_states # 3. Cross-Attention if self.attn2 is not None: norm_hidden_states = self.norm2(hidden_states) attn_output = self.attn2(norm_hidden_states, encoder_hidden_states=encoder_hidden_states) hidden_states = attn_output + hidden_states # 4. 
Feed-forward norm_hidden_states = self.norm3(hidden_states) if self._chunk_size is not None: ff_output = _chunked_feed_forward(self.ff, norm_hidden_states, self._chunk_dim, self._chunk_size) else: ff_output = self.ff(norm_hidden_states) if self.is_res: hidden_states = ff_output + hidden_states else: hidden_states = ff_output hidden_states = hidden_states[None, :].reshape(batch_size, seq_length, num_frames, channels) hidden_states = hidden_states.permute(0, 2, 1, 3) hidden_states = hidden_states.reshape(batch_size * num_frames, seq_length, channels) return hidden_states class SkipFFTransformerBlock(nn.Module): def __init__( self, dim: int, num_attention_heads: int, attention_head_dim: int, kv_input_dim: int, kv_input_dim_proj_use_bias: bool, dropout=0.0, cross_attention_dim: Optional[int] = None, attention_bias: bool = False, attention_out_bias: bool = True, ): super().__init__() if kv_input_dim != dim: self.kv_mapper = nn.Linear(kv_input_dim, dim, kv_input_dim_proj_use_bias) else: self.kv_mapper = None self.norm1 = RMSNorm(dim, 1e-06) self.attn1 = Attention( query_dim=dim, heads=num_attention_heads, dim_head=attention_head_dim, dropout=dropout, bias=attention_bias, cross_attention_dim=cross_attention_dim, out_bias=attention_out_bias, ) self.norm2 = RMSNorm(dim, 1e-06) self.attn2 = Attention( query_dim=dim, cross_attention_dim=cross_attention_dim, heads=num_attention_heads, dim_head=attention_head_dim, dropout=dropout, bias=attention_bias, out_bias=attention_out_bias, ) def forward(self, hidden_states, encoder_hidden_states, cross_attention_kwargs): cross_attention_kwargs = cross_attention_kwargs.copy() if cross_attention_kwargs is not None else {} if self.kv_mapper is not None: encoder_hidden_states = self.kv_mapper(F.silu(encoder_hidden_states)) norm_hidden_states = self.norm1(hidden_states) attn_output = self.attn1( norm_hidden_states, encoder_hidden_states=encoder_hidden_states, **cross_attention_kwargs, ) hidden_states = attn_output + hidden_states norm_hidden_states = self.norm2(hidden_states) attn_output = self.attn2( norm_hidden_states, encoder_hidden_states=encoder_hidden_states, **cross_attention_kwargs, ) hidden_states = attn_output + hidden_states return hidden_states class FeedForward(nn.Module): r""" A feed-forward layer. Parameters: dim (`int`): The number of channels in the input. dim_out (`int`, *optional*): The number of channels in the output. If not given, defaults to `dim`. mult (`int`, *optional*, defaults to 4): The multiplier to use for the hidden dimension. dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. final_dropout (`bool` *optional*, defaults to False): Apply a final dropout. bias (`bool`, defaults to True): Whether to use a bias in the linear layer. 
""" def __init__( self, dim: int, dim_out: Optional[int] = None, mult: int = 4, dropout: float = 0.0, activation_fn: str = "geglu", final_dropout: bool = False, inner_dim=None, bias: bool = True, ): super().__init__() if inner_dim is None: inner_dim = int(dim * mult) dim_out = dim_out if dim_out is not None else dim if activation_fn == "gelu": act_fn = GELU(dim, inner_dim, bias=bias) if activation_fn == "gelu-approximate": act_fn = GELU(dim, inner_dim, approximate="tanh", bias=bias) elif activation_fn == "geglu": act_fn = GEGLU(dim, inner_dim, bias=bias) elif activation_fn == "geglu-approximate": act_fn = ApproximateGELU(dim, inner_dim, bias=bias) self.net = nn.ModuleList([]) # project in self.net.append(act_fn) # project dropout self.net.append(nn.Dropout(dropout)) # project out self.net.append(nn.Linear(inner_dim, dim_out, bias=bias)) # FF as used in Vision Transformer, MLP-Mixer, etc. have a final dropout if final_dropout: self.net.append(nn.Dropout(dropout)) def forward(self, hidden_states: torch.Tensor, *args, **kwargs) -> torch.Tensor: if len(args) > 0 or kwargs.get("scale", None) is not None: deprecation_message = "The `scale` argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. `scale` should directly be passed while calling the underlying pipeline component i.e., via `cross_attention_kwargs`." deprecate("scale", "1.0.0", deprecation_message) for module in self.net: hidden_states = module(hidden_states) return hidden_states
diffusers/src/diffusers/models/attention.py/0
{ "file_path": "diffusers/src/diffusers/models/attention.py", "repo_id": "diffusers", "token_count": 12510 }
224
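The `_chunked_feed_forward` helper at the top of `attention.py` trades peak memory for extra kernel launches by splitting the chosen dimension before running `FeedForward`. A small sketch follows; it imports the private helper purely for illustration, and the chunked result matches the unchunked one because the MLP acts pointwise along the sequence dimension.

```python
# Sketch: chunked vs. unchunked feed-forward produce the same output.
import torch
from diffusers.models.attention import FeedForward, _chunked_feed_forward

ff = FeedForward(dim=64, activation_fn="geglu")
hidden_states = torch.randn(2, 8, 64)  # (batch, seq_len, dim); seq_len divisible by chunk_size

full = ff(hidden_states)
chunked = _chunked_feed_forward(ff, hidden_states, chunk_dim=1, chunk_size=4)

print(torch.allclose(full, chunked, atol=1e-6))  # True up to floating-point noise
```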
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math import flax.linen as nn import jax.numpy as jnp def get_sinusoidal_embeddings( timesteps: jnp.ndarray, embedding_dim: int, freq_shift: float = 1, min_timescale: float = 1, max_timescale: float = 1.0e4, flip_sin_to_cos: bool = False, scale: float = 1.0, ) -> jnp.ndarray: """Returns the positional encoding (same as Tensor2Tensor). Args: timesteps: a 1-D Tensor of N indices, one per batch element. These may be fractional. embedding_dim: The number of output channels. min_timescale: The smallest time unit (should probably be 0.0). max_timescale: The largest time unit. Returns: a Tensor of timing signals [N, num_channels] """ assert timesteps.ndim == 1, "Timesteps should be a 1d-array" assert embedding_dim % 2 == 0, f"Embedding dimension {embedding_dim} should be even" num_timescales = float(embedding_dim // 2) log_timescale_increment = math.log(max_timescale / min_timescale) / (num_timescales - freq_shift) inv_timescales = min_timescale * jnp.exp(jnp.arange(num_timescales, dtype=jnp.float32) * -log_timescale_increment) emb = jnp.expand_dims(timesteps, 1) * jnp.expand_dims(inv_timescales, 0) # scale embeddings scaled_time = scale * emb if flip_sin_to_cos: signal = jnp.concatenate([jnp.cos(scaled_time), jnp.sin(scaled_time)], axis=1) else: signal = jnp.concatenate([jnp.sin(scaled_time), jnp.cos(scaled_time)], axis=1) signal = jnp.reshape(signal, [jnp.shape(timesteps)[0], embedding_dim]) return signal class FlaxTimestepEmbedding(nn.Module): r""" Time step Embedding Module. Learns embeddings for input time steps. Args: time_embed_dim (`int`, *optional*, defaults to `32`): Time step embedding dimension dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): Parameters `dtype` """ time_embed_dim: int = 32 dtype: jnp.dtype = jnp.float32 @nn.compact def __call__(self, temb): temb = nn.Dense(self.time_embed_dim, dtype=self.dtype, name="linear_1")(temb) temb = nn.silu(temb) temb = nn.Dense(self.time_embed_dim, dtype=self.dtype, name="linear_2")(temb) return temb class FlaxTimesteps(nn.Module): r""" Wrapper Module for sinusoidal Time step Embeddings as described in https://arxiv.org/abs/2006.11239 Args: dim (`int`, *optional*, defaults to `32`): Time step embedding dimension """ dim: int = 32 flip_sin_to_cos: bool = False freq_shift: float = 1 @nn.compact def __call__(self, timesteps): return get_sinusoidal_embeddings( timesteps, embedding_dim=self.dim, flip_sin_to_cos=self.flip_sin_to_cos, freq_shift=self.freq_shift )
diffusers/src/diffusers/models/embeddings_flax.py/0
{ "file_path": "diffusers/src/diffusers/models/embeddings_flax.py", "repo_id": "diffusers", "token_count": 1401 }
225
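`get_sinusoidal_embeddings` is a plain function, so it can be exercised directly once JAX and Flax are installed. A short sketch, respecting the asserts above (1-D timesteps, even embedding dimension):

```python
# Sketch: sinusoidal timestep embeddings for three timesteps.
import jax.numpy as jnp
from diffusers.models.embeddings_flax import get_sinusoidal_embeddings

timesteps = jnp.array([0.0, 10.0, 500.0])
emb = get_sinusoidal_embeddings(timesteps, embedding_dim=32, flip_sin_to_cos=True)
print(emb.shape)  # (3, 32)
```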
from dataclasses import dataclass from typing import Dict, Optional, Union import torch import torch.nn.functional as F from torch import nn from ...configuration_utils import ConfigMixin, register_to_config from ...loaders import PeftAdapterMixin, UNet2DConditionLoadersMixin from ...utils import BaseOutput from ..attention import BasicTransformerBlock from ..attention_processor import ( ADDED_KV_ATTENTION_PROCESSORS, CROSS_ATTENTION_PROCESSORS, AttentionProcessor, AttnAddedKVProcessor, AttnProcessor, ) from ..embeddings import TimestepEmbedding, Timesteps from ..modeling_utils import ModelMixin @dataclass class PriorTransformerOutput(BaseOutput): """ The output of [`PriorTransformer`]. Args: predicted_image_embedding (`torch.Tensor` of shape `(batch_size, embedding_dim)`): The predicted CLIP image embedding conditioned on the CLIP text embedding input. """ predicted_image_embedding: torch.Tensor class PriorTransformer(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin): """ A Prior Transformer model. Parameters: num_attention_heads (`int`, *optional*, defaults to 32): The number of heads to use for multi-head attention. attention_head_dim (`int`, *optional*, defaults to 64): The number of channels in each head. num_layers (`int`, *optional*, defaults to 20): The number of layers of Transformer blocks to use. embedding_dim (`int`, *optional*, defaults to 768): The dimension of the model input `hidden_states` num_embeddings (`int`, *optional*, defaults to 77): The number of embeddings of the model input `hidden_states` additional_embeddings (`int`, *optional*, defaults to 4): The number of additional tokens appended to the projected `hidden_states`. The actual length of the used `hidden_states` is `num_embeddings + additional_embeddings`. dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. time_embed_act_fn (`str`, *optional*, defaults to 'silu'): The activation function to use to create timestep embeddings. norm_in_type (`str`, *optional*, defaults to None): The normalization layer to apply on hidden states before passing to Transformer blocks. Set it to `None` if normalization is not needed. embedding_proj_norm_type (`str`, *optional*, defaults to None): The normalization layer to apply on the input `proj_embedding`. Set it to `None` if normalization is not needed. encoder_hid_proj_type (`str`, *optional*, defaults to `linear`): The projection layer to apply on the input `encoder_hidden_states`. Set it to `None` if `encoder_hidden_states` is `None`. added_emb_type (`str`, *optional*, defaults to `prd`): Additional embeddings to condition the model. Choose from `prd` or `None`. if choose `prd`, it will prepend a token indicating the (quantized) dot product between the text embedding and image embedding as proposed in the unclip paper https://arxiv.org/abs/2204.06125 If it is `None`, no additional embeddings will be prepended. time_embed_dim (`int, *optional*, defaults to None): The dimension of timestep embeddings. If None, will be set to `num_attention_heads * attention_head_dim` embedding_proj_dim (`int`, *optional*, default to None): The dimension of `proj_embedding`. If None, will be set to `embedding_dim`. clip_embed_dim (`int`, *optional*, default to None): The dimension of the output. If None, will be set to `embedding_dim`. 
""" @register_to_config def __init__( self, num_attention_heads: int = 32, attention_head_dim: int = 64, num_layers: int = 20, embedding_dim: int = 768, num_embeddings=77, additional_embeddings=4, dropout: float = 0.0, time_embed_act_fn: str = "silu", norm_in_type: Optional[str] = None, # layer embedding_proj_norm_type: Optional[str] = None, # layer encoder_hid_proj_type: Optional[str] = "linear", # linear added_emb_type: Optional[str] = "prd", # prd time_embed_dim: Optional[int] = None, embedding_proj_dim: Optional[int] = None, clip_embed_dim: Optional[int] = None, ): super().__init__() self.num_attention_heads = num_attention_heads self.attention_head_dim = attention_head_dim inner_dim = num_attention_heads * attention_head_dim self.additional_embeddings = additional_embeddings time_embed_dim = time_embed_dim or inner_dim embedding_proj_dim = embedding_proj_dim or embedding_dim clip_embed_dim = clip_embed_dim or embedding_dim self.time_proj = Timesteps(inner_dim, True, 0) self.time_embedding = TimestepEmbedding(inner_dim, time_embed_dim, out_dim=inner_dim, act_fn=time_embed_act_fn) self.proj_in = nn.Linear(embedding_dim, inner_dim) if embedding_proj_norm_type is None: self.embedding_proj_norm = None elif embedding_proj_norm_type == "layer": self.embedding_proj_norm = nn.LayerNorm(embedding_proj_dim) else: raise ValueError(f"unsupported embedding_proj_norm_type: {embedding_proj_norm_type}") self.embedding_proj = nn.Linear(embedding_proj_dim, inner_dim) if encoder_hid_proj_type is None: self.encoder_hidden_states_proj = None elif encoder_hid_proj_type == "linear": self.encoder_hidden_states_proj = nn.Linear(embedding_dim, inner_dim) else: raise ValueError(f"unsupported encoder_hid_proj_type: {encoder_hid_proj_type}") self.positional_embedding = nn.Parameter(torch.zeros(1, num_embeddings + additional_embeddings, inner_dim)) if added_emb_type == "prd": self.prd_embedding = nn.Parameter(torch.zeros(1, 1, inner_dim)) elif added_emb_type is None: self.prd_embedding = None else: raise ValueError( f"`added_emb_type`: {added_emb_type} is not supported. Make sure to choose one of `'prd'` or `None`." ) self.transformer_blocks = nn.ModuleList( [ BasicTransformerBlock( inner_dim, num_attention_heads, attention_head_dim, dropout=dropout, activation_fn="gelu", attention_bias=True, ) for d in range(num_layers) ] ) if norm_in_type == "layer": self.norm_in = nn.LayerNorm(inner_dim) elif norm_in_type is None: self.norm_in = None else: raise ValueError(f"Unsupported norm_in_type: {norm_in_type}.") self.norm_out = nn.LayerNorm(inner_dim) self.proj_to_clip_embeddings = nn.Linear(inner_dim, clip_embed_dim) causal_attention_mask = torch.full( [num_embeddings + additional_embeddings, num_embeddings + additional_embeddings], -10000.0 ) causal_attention_mask.triu_(1) causal_attention_mask = causal_attention_mask[None, ...] self.register_buffer("causal_attention_mask", causal_attention_mask, persistent=False) self.clip_mean = nn.Parameter(torch.zeros(1, clip_embed_dim)) self.clip_std = nn.Parameter(torch.zeros(1, clip_embed_dim)) @property # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.attn_processors def attn_processors(self) -> Dict[str, AttentionProcessor]: r""" Returns: `dict` of attention processors: A dictionary containing all attention processors used in the model with indexed by its weight name. 
""" # set recursively processors = {} def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): if hasattr(module, "get_processor"): processors[f"{name}.processor"] = module.get_processor(return_deprecated_lora=True) for sub_name, child in module.named_children(): fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) return processors for name, module in self.named_children(): fn_recursive_add_processors(name, module, processors) return processors # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_attn_processor def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): r""" Sets the attention processor to use to compute attention. Parameters: processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): The instantiated processor class or a dictionary of processor classes that will be set as the processor for **all** `Attention` layers. If `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. """ count = len(self.attn_processors.keys()) if isinstance(processor, dict) and len(processor) != count: raise ValueError( f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" f" number of attention layers: {count}. Please make sure to pass {count} processor classes." ) def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): if hasattr(module, "set_processor"): if not isinstance(processor, dict): module.set_processor(processor) else: module.set_processor(processor.pop(f"{name}.processor")) for sub_name, child in module.named_children(): fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) for name, module in self.named_children(): fn_recursive_attn_processor(name, module, processor) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_default_attn_processor def set_default_attn_processor(self): """ Disables custom attention processors and sets the default attention implementation. """ if all(proc.__class__ in ADDED_KV_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): processor = AttnAddedKVProcessor() elif all(proc.__class__ in CROSS_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): processor = AttnProcessor() else: raise ValueError( f"Cannot call `set_default_attn_processor` when attention processors are of type {next(iter(self.attn_processors.values()))}" ) self.set_attn_processor(processor) def forward( self, hidden_states, timestep: Union[torch.Tensor, float, int], proj_embedding: torch.Tensor, encoder_hidden_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.BoolTensor] = None, return_dict: bool = True, ): """ The [`PriorTransformer`] forward method. Args: hidden_states (`torch.Tensor` of shape `(batch_size, embedding_dim)`): The currently predicted image embeddings. timestep (`torch.LongTensor`): Current denoising step. proj_embedding (`torch.Tensor` of shape `(batch_size, embedding_dim)`): Projected embedding vector the denoising process is conditioned on. encoder_hidden_states (`torch.Tensor` of shape `(batch_size, num_embeddings, embedding_dim)`): Hidden states of the text embeddings the denoising process is conditioned on. attention_mask (`torch.BoolTensor` of shape `(batch_size, num_embeddings)`): Text mask for the text embeddings. 
return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~models.prior_transformer.PriorTransformerOutput`] instead of a plain tuple. Returns: [`~models.prior_transformer.PriorTransformerOutput`] or `tuple`: If return_dict is True, a [`~models.prior_transformer.PriorTransformerOutput`] is returned, otherwise a tuple is returned where the first element is the sample tensor. """ batch_size = hidden_states.shape[0] timesteps = timestep if not torch.is_tensor(timesteps): timesteps = torch.tensor([timesteps], dtype=torch.long, device=hidden_states.device) elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0: timesteps = timesteps[None].to(hidden_states.device) # broadcast to batch dimension in a way that's compatible with ONNX/Core ML timesteps = timesteps * torch.ones(batch_size, dtype=timesteps.dtype, device=timesteps.device) timesteps_projected = self.time_proj(timesteps) # timesteps does not contain any weights and will always return f32 tensors # but time_embedding might be fp16, so we need to cast here. timesteps_projected = timesteps_projected.to(dtype=self.dtype) time_embeddings = self.time_embedding(timesteps_projected) if self.embedding_proj_norm is not None: proj_embedding = self.embedding_proj_norm(proj_embedding) proj_embeddings = self.embedding_proj(proj_embedding) if self.encoder_hidden_states_proj is not None and encoder_hidden_states is not None: encoder_hidden_states = self.encoder_hidden_states_proj(encoder_hidden_states) elif self.encoder_hidden_states_proj is not None and encoder_hidden_states is None: raise ValueError("`encoder_hidden_states_proj` requires `encoder_hidden_states` to be set") hidden_states = self.proj_in(hidden_states) positional_embeddings = self.positional_embedding.to(hidden_states.dtype) additional_embeds = [] additional_embeddings_len = 0 if encoder_hidden_states is not None: additional_embeds.append(encoder_hidden_states) additional_embeddings_len += encoder_hidden_states.shape[1] if len(proj_embeddings.shape) == 2: proj_embeddings = proj_embeddings[:, None, :] if len(hidden_states.shape) == 2: hidden_states = hidden_states[:, None, :] additional_embeds = additional_embeds + [ proj_embeddings, time_embeddings[:, None, :], hidden_states, ] if self.prd_embedding is not None: prd_embedding = self.prd_embedding.to(hidden_states.dtype).expand(batch_size, -1, -1) additional_embeds.append(prd_embedding) hidden_states = torch.cat( additional_embeds, dim=1, ) # Allow positional_embedding to not include the `addtional_embeddings` and instead pad it with zeros for these additional tokens additional_embeddings_len = additional_embeddings_len + proj_embeddings.shape[1] + 1 if positional_embeddings.shape[1] < hidden_states.shape[1]: positional_embeddings = F.pad( positional_embeddings, ( 0, 0, additional_embeddings_len, self.prd_embedding.shape[1] if self.prd_embedding is not None else 0, ), value=0.0, ) hidden_states = hidden_states + positional_embeddings if attention_mask is not None: attention_mask = (1 - attention_mask.to(hidden_states.dtype)) * -10000.0 attention_mask = F.pad(attention_mask, (0, self.additional_embeddings), value=0.0) attention_mask = (attention_mask[:, None, :] + self.causal_attention_mask).to(hidden_states.dtype) attention_mask = attention_mask.repeat_interleave(self.config.num_attention_heads, dim=0) if self.norm_in is not None: hidden_states = self.norm_in(hidden_states) for block in self.transformer_blocks: hidden_states = block(hidden_states, attention_mask=attention_mask) hidden_states = 
self.norm_out(hidden_states)

        if self.prd_embedding is not None:
            hidden_states = hidden_states[:, -1]
        else:
            hidden_states = hidden_states[:, additional_embeddings_len:]

        predicted_image_embedding = self.proj_to_clip_embeddings(hidden_states)

        if not return_dict:
            return (predicted_image_embedding,)

        return PriorTransformerOutput(predicted_image_embedding=predicted_image_embedding)

    def post_process_latents(self, prior_latents):
        prior_latents = (prior_latents * self.clip_std) + self.clip_mean
        return prior_latents
diffusers/src/diffusers/models/transformers/prior_transformer.py/0
{ "file_path": "diffusers/src/diffusers/models/transformers/prior_transformer.py", "repo_id": "diffusers", "token_count": 7381 }
226
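A quick smoke test of `PriorTransformer.forward` with a deliberately tiny, untrained configuration; the hyperparameters below are illustrative only and are much smaller than the defaults in the class above.

```python
# Sketch: forward pass through a tiny PriorTransformer with random inputs.
import torch
from diffusers import PriorTransformer

prior = PriorTransformer(
    num_attention_heads=2,
    attention_head_dim=8,
    num_layers=2,
    embedding_dim=32,
    num_embeddings=4,
)

hidden_states = torch.randn(1, 32)             # current image-embedding estimate
proj_embedding = torch.randn(1, 32)            # conditioning (e.g. text) embedding
encoder_hidden_states = torch.randn(1, 4, 32)  # per-token text encoder states

out = prior(
    hidden_states,
    timestep=1,
    proj_embedding=proj_embedding,
    encoder_hidden_states=encoder_hidden_states,
)
print(out.predicted_image_embedding.shape)  # torch.Size([1, 32])
```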
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Dict, Optional, Tuple, Union import flax import flax.linen as nn import jax import jax.numpy as jnp from flax.core.frozen_dict import FrozenDict from ...configuration_utils import ConfigMixin, flax_register_to_config from ...utils import BaseOutput from ..embeddings_flax import FlaxTimestepEmbedding, FlaxTimesteps from ..modeling_flax_utils import FlaxModelMixin from .unet_2d_blocks_flax import ( FlaxCrossAttnDownBlock2D, FlaxCrossAttnUpBlock2D, FlaxDownBlock2D, FlaxUNetMidBlock2DCrossAttn, FlaxUpBlock2D, ) @flax.struct.dataclass class FlaxUNet2DConditionOutput(BaseOutput): """ The output of [`FlaxUNet2DConditionModel`]. Args: sample (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)`): The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model. """ sample: jnp.ndarray @flax_register_to_config class FlaxUNet2DConditionModel(nn.Module, FlaxModelMixin, ConfigMixin): r""" A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample shaped output. This model inherits from [`FlaxModelMixin`]. Check the superclass documentation for it's generic methods implemented for all models (such as downloading or saving). This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its general usage and behavior. Inherent JAX features such as the following are supported: - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) Parameters: sample_size (`int`, *optional*): The size of the input sample. in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. down_block_types (`Tuple[str]`, *optional*, defaults to `("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")`): The tuple of downsample blocks to use. up_block_types (`Tuple[str]`, *optional*, defaults to `("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")`): The tuple of upsample blocks to use. mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`): Block type for middle of UNet, it can be one of `UNetMidBlock2DCrossAttn`. If `None`, the mid block layer is skipped. block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): The tuple of output channels for each block. 
layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. attention_head_dim (`int` or `Tuple[int]`, *optional*, defaults to 8): The dimension of the attention heads. num_attention_heads (`int` or `Tuple[int]`, *optional*): The number of attention heads. cross_attention_dim (`int`, *optional*, defaults to 768): The dimension of the cross attention features. dropout (`float`, *optional*, defaults to 0): Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (`bool`, *optional*, defaults to `True`): Whether to flip the sin to cos in the time embedding. freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. use_memory_efficient_attention (`bool`, *optional*, defaults to `False`): Enable memory efficient attention as described [here](https://arxiv.org/abs/2112.05682). split_head_dim (`bool`, *optional*, defaults to `False`): Whether to split the head dimension into a new axis for the self-attention computation. In most cases, enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. """ sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple[str, ...] = ( "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D", ) up_block_types: Tuple[str, ...] = ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D") mid_block_type: Optional[str] = "UNetMidBlock2DCrossAttn" only_cross_attention: Union[bool, Tuple[bool]] = False block_out_channels: Tuple[int, ...] = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union[int, Tuple[int, ...]] = 8 num_attention_heads: Optional[Union[int, Tuple[int, ...]]] = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: jnp.dtype = jnp.float32 flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union[int, Tuple[int, ...]] = 1 addition_embed_type: Optional[str] = None addition_time_embed_dim: Optional[int] = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional[int] = None def init_weights(self, rng: jax.Array) -> FrozenDict: # init input tensors sample_shape = (1, self.in_channels, self.sample_size, self.sample_size) sample = jnp.zeros(sample_shape, dtype=jnp.float32) timesteps = jnp.ones((1,), dtype=jnp.int32) encoder_hidden_states = jnp.zeros((1, 1, self.cross_attention_dim), dtype=jnp.float32) params_rng, dropout_rng = jax.random.split(rng) rngs = {"params": params_rng, "dropout": dropout_rng} added_cond_kwargs = None if self.addition_embed_type == "text_time": # we retrieve the expected `text_embeds_dim` by first checking if the architecture is a refiner # or non-refiner architecture and then by "reverse-computing" from `projection_class_embeddings_input_dim` is_refiner = ( 5 * self.config.addition_time_embed_dim + self.config.cross_attention_dim == self.config.projection_class_embeddings_input_dim ) num_micro_conditions = 5 if is_refiner else 6 text_embeds_dim = self.config.projection_class_embeddings_input_dim - ( num_micro_conditions * self.config.addition_time_embed_dim ) time_ids_channels = self.projection_class_embeddings_input_dim - text_embeds_dim time_ids_dims = time_ids_channels // self.addition_time_embed_dim added_cond_kwargs = { "text_embeds": jnp.zeros((1, text_embeds_dim), dtype=jnp.float32), "time_ids": jnp.zeros((1, time_ids_dims), 
dtype=jnp.float32), } return self.init(rngs, sample, timesteps, encoder_hidden_states, added_cond_kwargs)["params"] def setup(self) -> None: block_out_channels = self.block_out_channels time_embed_dim = block_out_channels[0] * 4 if self.num_attention_heads is not None: raise ValueError( "At the moment it is not possible to define the number of attention heads via `num_attention_heads` because of a naming issue as described in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131. Passing `num_attention_heads` will only be supported in diffusers v0.19." ) # If `num_attention_heads` is not defined (which is the case for most models) # it will default to `attention_head_dim`. This looks weird upon first reading it and it is. # The reason for this behavior is to correct for incorrectly named variables that were introduced # when this library was created. The incorrect naming was only discovered much later in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131 # Changing `attention_head_dim` to `num_attention_heads` for 40,000+ configurations is too backwards breaking # which is why we correct for the naming here. num_attention_heads = self.num_attention_heads or self.attention_head_dim # input self.conv_in = nn.Conv( block_out_channels[0], kernel_size=(3, 3), strides=(1, 1), padding=((1, 1), (1, 1)), dtype=self.dtype, ) # time self.time_proj = FlaxTimesteps( block_out_channels[0], flip_sin_to_cos=self.flip_sin_to_cos, freq_shift=self.config.freq_shift ) self.time_embedding = FlaxTimestepEmbedding(time_embed_dim, dtype=self.dtype) only_cross_attention = self.only_cross_attention if isinstance(only_cross_attention, bool): only_cross_attention = (only_cross_attention,) * len(self.down_block_types) if isinstance(num_attention_heads, int): num_attention_heads = (num_attention_heads,) * len(self.down_block_types) # transformer layers per block transformer_layers_per_block = self.transformer_layers_per_block if isinstance(transformer_layers_per_block, int): transformer_layers_per_block = [transformer_layers_per_block] * len(self.down_block_types) # addition embed types if self.addition_embed_type is None: self.add_embedding = None elif self.addition_embed_type == "text_time": if self.addition_time_embed_dim is None: raise ValueError( f"addition_embed_type {self.addition_embed_type} requires `addition_time_embed_dim` to not be None" ) self.add_time_proj = FlaxTimesteps(self.addition_time_embed_dim, self.flip_sin_to_cos, self.freq_shift) self.add_embedding = FlaxTimestepEmbedding(time_embed_dim, dtype=self.dtype) else: raise ValueError(f"addition_embed_type: {self.addition_embed_type} must be None or `text_time`.") # down down_blocks = [] output_channel = block_out_channels[0] for i, down_block_type in enumerate(self.down_block_types): input_channel = output_channel output_channel = block_out_channels[i] is_final_block = i == len(block_out_channels) - 1 if down_block_type == "CrossAttnDownBlock2D": down_block = FlaxCrossAttnDownBlock2D( in_channels=input_channel, out_channels=output_channel, dropout=self.dropout, num_layers=self.layers_per_block, transformer_layers_per_block=transformer_layers_per_block[i], num_attention_heads=num_attention_heads[i], add_downsample=not is_final_block, use_linear_projection=self.use_linear_projection, only_cross_attention=only_cross_attention[i], use_memory_efficient_attention=self.use_memory_efficient_attention, split_head_dim=self.split_head_dim, dtype=self.dtype, ) else: down_block = FlaxDownBlock2D( 
in_channels=input_channel, out_channels=output_channel, dropout=self.dropout, num_layers=self.layers_per_block, add_downsample=not is_final_block, dtype=self.dtype, ) down_blocks.append(down_block) self.down_blocks = down_blocks # mid if self.config.mid_block_type == "UNetMidBlock2DCrossAttn": self.mid_block = FlaxUNetMidBlock2DCrossAttn( in_channels=block_out_channels[-1], dropout=self.dropout, num_attention_heads=num_attention_heads[-1], transformer_layers_per_block=transformer_layers_per_block[-1], use_linear_projection=self.use_linear_projection, use_memory_efficient_attention=self.use_memory_efficient_attention, split_head_dim=self.split_head_dim, dtype=self.dtype, ) elif self.config.mid_block_type is None: self.mid_block = None else: raise ValueError(f"Unexpected mid_block_type {self.config.mid_block_type}") # up up_blocks = [] reversed_block_out_channels = list(reversed(block_out_channels)) reversed_num_attention_heads = list(reversed(num_attention_heads)) only_cross_attention = list(reversed(only_cross_attention)) output_channel = reversed_block_out_channels[0] reversed_transformer_layers_per_block = list(reversed(transformer_layers_per_block)) for i, up_block_type in enumerate(self.up_block_types): prev_output_channel = output_channel output_channel = reversed_block_out_channels[i] input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] is_final_block = i == len(block_out_channels) - 1 if up_block_type == "CrossAttnUpBlock2D": up_block = FlaxCrossAttnUpBlock2D( in_channels=input_channel, out_channels=output_channel, prev_output_channel=prev_output_channel, num_layers=self.layers_per_block + 1, transformer_layers_per_block=reversed_transformer_layers_per_block[i], num_attention_heads=reversed_num_attention_heads[i], add_upsample=not is_final_block, dropout=self.dropout, use_linear_projection=self.use_linear_projection, only_cross_attention=only_cross_attention[i], use_memory_efficient_attention=self.use_memory_efficient_attention, split_head_dim=self.split_head_dim, dtype=self.dtype, ) else: up_block = FlaxUpBlock2D( in_channels=input_channel, out_channels=output_channel, prev_output_channel=prev_output_channel, num_layers=self.layers_per_block + 1, add_upsample=not is_final_block, dropout=self.dropout, dtype=self.dtype, ) up_blocks.append(up_block) prev_output_channel = output_channel self.up_blocks = up_blocks # out self.conv_norm_out = nn.GroupNorm(num_groups=32, epsilon=1e-5) self.conv_out = nn.Conv( self.out_channels, kernel_size=(3, 3), strides=(1, 1), padding=((1, 1), (1, 1)), dtype=self.dtype, ) def __call__( self, sample: jnp.ndarray, timesteps: Union[jnp.ndarray, float, int], encoder_hidden_states: jnp.ndarray, added_cond_kwargs: Optional[Union[Dict, FrozenDict]] = None, down_block_additional_residuals: Optional[Tuple[jnp.ndarray, ...]] = None, mid_block_additional_residual: Optional[jnp.ndarray] = None, return_dict: bool = True, train: bool = False, ) -> Union[FlaxUNet2DConditionOutput, Tuple[jnp.ndarray]]: r""" Args: sample (`jnp.ndarray`): (batch, channel, height, width) noisy inputs tensor timestep (`jnp.ndarray` or `float` or `int`): timesteps encoder_hidden_states (`jnp.ndarray`): (batch_size, sequence_length, hidden_size) encoder hidden states added_cond_kwargs: (`dict`, *optional*): A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that are passed along to the UNet blocks. 
down_block_additional_residuals: (`tuple` of `torch.Tensor`, *optional*): A tuple of tensors that if specified are added to the residuals of down unet blocks. mid_block_additional_residual: (`torch.Tensor`, *optional*): A tensor that if specified is added to the residual of the middle unet block. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput`] instead of a plain tuple. train (`bool`, *optional*, defaults to `False`): Use deterministic functions and disable dropout when not training. Returns: [`~models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput`] or `tuple`: [`~models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. """ # 1. time if not isinstance(timesteps, jnp.ndarray): timesteps = jnp.array([timesteps], dtype=jnp.int32) elif isinstance(timesteps, jnp.ndarray) and len(timesteps.shape) == 0: timesteps = timesteps.astype(dtype=jnp.float32) timesteps = jnp.expand_dims(timesteps, 0) t_emb = self.time_proj(timesteps) t_emb = self.time_embedding(t_emb) # additional embeddings aug_emb = None if self.addition_embed_type == "text_time": if added_cond_kwargs is None: raise ValueError( f"Need to provide argument `added_cond_kwargs` for {self.__class__} when using `addition_embed_type={self.addition_embed_type}`" ) text_embeds = added_cond_kwargs.get("text_embeds") if text_embeds is None: raise ValueError( f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `text_embeds` to be passed in `added_cond_kwargs`" ) time_ids = added_cond_kwargs.get("time_ids") if time_ids is None: raise ValueError( f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `time_ids` to be passed in `added_cond_kwargs`" ) # compute time embeds time_embeds = self.add_time_proj(jnp.ravel(time_ids)) # (1, 6) => (6,) => (6, 256) time_embeds = jnp.reshape(time_embeds, (text_embeds.shape[0], -1)) add_embeds = jnp.concatenate([text_embeds, time_embeds], axis=-1) aug_emb = self.add_embedding(add_embeds) t_emb = t_emb + aug_emb if aug_emb is not None else t_emb # 2. pre-process sample = jnp.transpose(sample, (0, 2, 3, 1)) sample = self.conv_in(sample) # 3. down down_block_res_samples = (sample,) for down_block in self.down_blocks: if isinstance(down_block, FlaxCrossAttnDownBlock2D): sample, res_samples = down_block(sample, t_emb, encoder_hidden_states, deterministic=not train) else: sample, res_samples = down_block(sample, t_emb, deterministic=not train) down_block_res_samples += res_samples if down_block_additional_residuals is not None: new_down_block_res_samples = () for down_block_res_sample, down_block_additional_residual in zip( down_block_res_samples, down_block_additional_residuals ): down_block_res_sample += down_block_additional_residual new_down_block_res_samples += (down_block_res_sample,) down_block_res_samples = new_down_block_res_samples # 4. mid if self.mid_block is not None: sample = self.mid_block(sample, t_emb, encoder_hidden_states, deterministic=not train) if mid_block_additional_residual is not None: sample += mid_block_additional_residual # 5. 
up for up_block in self.up_blocks: res_samples = down_block_res_samples[-(self.layers_per_block + 1) :] down_block_res_samples = down_block_res_samples[: -(self.layers_per_block + 1)] if isinstance(up_block, FlaxCrossAttnUpBlock2D): sample = up_block( sample, temb=t_emb, encoder_hidden_states=encoder_hidden_states, res_hidden_states_tuple=res_samples, deterministic=not train, ) else: sample = up_block(sample, temb=t_emb, res_hidden_states_tuple=res_samples, deterministic=not train) # 6. post-process sample = self.conv_norm_out(sample) sample = nn.silu(sample) sample = self.conv_out(sample) sample = jnp.transpose(sample, (0, 3, 1, 2)) if not return_dict: return (sample,) return FlaxUNet2DConditionOutput(sample=sample)
diffusers/src/diffusers/models/unets/unet_2d_condition_flax.py/0
{ "file_path": "diffusers/src/diffusers/models/unets/unet_2d_condition_flax.py", "repo_id": "diffusers", "token_count": 10117 }
227
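The Flax UNet above is normally loaded with `FlaxUNet2DConditionModel.from_pretrained`, but the class can also be constructed directly and initialized with `init_weights`, which is handy for smoke-testing shapes. The following is a minimal sketch, not part of the file above, assuming the class defaults (4 latent channels, `sample_size=32`, `cross_attention_dim=1280`); the weights are randomly initialized, so the output is only useful for checking shapes.

```python
import jax
import jax.numpy as jnp

from diffusers.models.unets.unet_2d_condition_flax import FlaxUNet2DConditionModel

# Randomly initialized UNet with the class defaults; real checkpoints are
# loaded via FlaxUNet2DConditionModel.from_pretrained(..., subfolder="unet").
unet = FlaxUNet2DConditionModel(sample_size=32)
params = unet.init_weights(jax.random.PRNGKey(0))

# Noisy latents in NCHW layout; __call__ transposes to NHWC internally.
sample = jnp.zeros((1, 4, 32, 32), dtype=jnp.float32)
timesteps = jnp.array([10], dtype=jnp.int32)
encoder_hidden_states = jnp.zeros((1, 77, 1280), dtype=jnp.float32)  # (batch, seq_len, cross_attention_dim)

out = unet.apply({"params": params}, sample, timesteps, encoder_hidden_states)
print(out.sample.shape)  # (1, 4, 32, 32)
```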
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Any, Callable, Dict, List, Optional, Tuple, Union import torch from transformers import CLIPTextModelWithProjection, CLIPTokenizer from ...image_processor import VaeImageProcessor from ...models import UVit2DModel, VQModel from ...schedulers import AmusedScheduler from ...utils import replace_example_docstring from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers import AmusedPipeline >>> pipe = AmusedPipeline.from_pretrained("amused/amused-512", variant="fp16", torch_dtype=torch.float16) >>> pipe = pipe.to("cuda") >>> prompt = "a photo of an astronaut riding a horse on mars" >>> image = pipe(prompt).images[0] ``` """ class AmusedPipeline(DiffusionPipeline): image_processor: VaeImageProcessor vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler model_cpu_offload_seq = "text_encoder->transformer->vqvae" def __init__( self, vqvae: VQModel, tokenizer: CLIPTokenizer, text_encoder: CLIPTextModelWithProjection, transformer: UVit2DModel, scheduler: AmusedScheduler, ): super().__init__() self.register_modules( vqvae=vqvae, tokenizer=tokenizer, text_encoder=text_encoder, transformer=transformer, scheduler=scheduler, ) self.vae_scale_factor = 2 ** (len(self.vqvae.config.block_out_channels) - 1) self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor, do_normalize=False) @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Optional[Union[List[str], str]] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 12, guidance_scale: float = 10.0, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, generator: Optional[torch.Generator] = None, latents: Optional[torch.IntTensor] = None, prompt_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, negative_encoder_hidden_states: Optional[torch.Tensor] = None, output_type="pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, micro_conditioning_aesthetic_score: int = 6, micro_conditioning_crop_coord: Tuple[int, int] = (0, 0), temperature: Union[int, Tuple[int, int], List[int]] = (2, 0), ): """ The call function to the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. height (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`): The height in pixels of the generated image. 
width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 16): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 10.0): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. generator (`torch.Generator`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.IntTensor`, *optional*): Pre-generated tokens representing latent vectors in `self.vqvae`, to be used as inputs for image gneration. If not provided, the starting latents will be completely masked. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. A single vector from the pooled and projected final hidden states. encoder_hidden_states (`torch.Tensor`, *optional*): Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. negative_encoder_hidden_states (`torch.Tensor`, *optional*): Analogous to `encoder_hidden_states` for the positive prompt. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that calls every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). micro_conditioning_aesthetic_score (`int`, *optional*, defaults to 6): The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (`Tuple[int]`, *optional*, defaults to (0, 0)): The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. 
temperature (`Union[int, Tuple[int, int], List[int]]`, *optional*, defaults to (2, 0)): Configures the temperature scheduler on `self.scheduler` see `AmusedScheduler#set_timesteps`. Examples: Returns: [`~pipelines.pipeline_utils.ImagePipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.pipeline_utils.ImagePipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images. """ if (prompt_embeds is not None and encoder_hidden_states is None) or ( prompt_embeds is None and encoder_hidden_states is not None ): raise ValueError("pass either both `prompt_embeds` and `encoder_hidden_states` or neither") if (negative_prompt_embeds is not None and negative_encoder_hidden_states is None) or ( negative_prompt_embeds is None and negative_encoder_hidden_states is not None ): raise ValueError( "pass either both `negatve_prompt_embeds` and `negative_encoder_hidden_states` or neither" ) if (prompt is None and prompt_embeds is None) or (prompt is not None and prompt_embeds is not None): raise ValueError("pass only one of `prompt` or `prompt_embeds`") if isinstance(prompt, str): prompt = [prompt] if prompt is not None: batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] batch_size = batch_size * num_images_per_prompt if height is None: height = self.transformer.config.sample_size * self.vae_scale_factor if width is None: width = self.transformer.config.sample_size * self.vae_scale_factor if prompt_embeds is None: input_ids = self.tokenizer( prompt, return_tensors="pt", padding="max_length", truncation=True, max_length=self.tokenizer.model_max_length, ).input_ids.to(self._execution_device) outputs = self.text_encoder(input_ids, return_dict=True, output_hidden_states=True) prompt_embeds = outputs.text_embeds encoder_hidden_states = outputs.hidden_states[-2] prompt_embeds = prompt_embeds.repeat(num_images_per_prompt, 1) encoder_hidden_states = encoder_hidden_states.repeat(num_images_per_prompt, 1, 1) if guidance_scale > 1.0: if negative_prompt_embeds is None: if negative_prompt is None: negative_prompt = [""] * len(prompt) if isinstance(negative_prompt, str): negative_prompt = [negative_prompt] input_ids = self.tokenizer( negative_prompt, return_tensors="pt", padding="max_length", truncation=True, max_length=self.tokenizer.model_max_length, ).input_ids.to(self._execution_device) outputs = self.text_encoder(input_ids, return_dict=True, output_hidden_states=True) negative_prompt_embeds = outputs.text_embeds negative_encoder_hidden_states = outputs.hidden_states[-2] negative_prompt_embeds = negative_prompt_embeds.repeat(num_images_per_prompt, 1) negative_encoder_hidden_states = negative_encoder_hidden_states.repeat(num_images_per_prompt, 1, 1) prompt_embeds = torch.concat([negative_prompt_embeds, prompt_embeds]) encoder_hidden_states = torch.concat([negative_encoder_hidden_states, encoder_hidden_states]) # Note that the micro conditionings _do_ flip the order of width, height for the original size # and the crop coordinates. 
This is how it was done in the original code base micro_conds = torch.tensor( [ width, height, micro_conditioning_crop_coord[0], micro_conditioning_crop_coord[1], micro_conditioning_aesthetic_score, ], device=self._execution_device, dtype=encoder_hidden_states.dtype, ) micro_conds = micro_conds.unsqueeze(0) micro_conds = micro_conds.expand(2 * batch_size if guidance_scale > 1.0 else batch_size, -1) shape = (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor) if latents is None: latents = torch.full( shape, self.scheduler.config.mask_token_id, dtype=torch.long, device=self._execution_device ) self.scheduler.set_timesteps(num_inference_steps, temperature, self._execution_device) num_warmup_steps = len(self.scheduler.timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, timestep in enumerate(self.scheduler.timesteps): if guidance_scale > 1.0: model_input = torch.cat([latents] * 2) else: model_input = latents model_output = self.transformer( model_input, micro_conds=micro_conds, pooled_text_emb=prompt_embeds, encoder_hidden_states=encoder_hidden_states, cross_attention_kwargs=cross_attention_kwargs, ) if guidance_scale > 1.0: uncond_logits, cond_logits = model_output.chunk(2) model_output = uncond_logits + guidance_scale * (cond_logits - uncond_logits) latents = self.scheduler.step( model_output=model_output, timestep=timestep, sample=latents, generator=generator, ).prev_sample if i == len(self.scheduler.timesteps) - 1 or ( (i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0 ): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, timestep, latents) if output_type == "latent": output = latents else: needs_upcasting = self.vqvae.dtype == torch.float16 and self.vqvae.config.force_upcast if needs_upcasting: self.vqvae.float() output = self.vqvae.decode( latents, force_not_quantize=True, shape=( batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor, self.vqvae.config.latent_channels, ), ).sample.clip(0, 1) output = self.image_processor.postprocess(output, output_type) if needs_upcasting: self.vqvae.half() self.maybe_free_model_hooks() if not return_dict: return (output,) return ImagePipelineOutput(output)
diffusers/src/diffusers/pipelines/amused/pipeline_amused.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/amused/pipeline_amused.py", "repo_id": "diffusers", "token_count": 6940 }
228
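The docstring above notes that `prompt_embeds` must always be accompanied by `encoder_hidden_states` (and likewise for the negative pair): the first is the pooled, projected CLIP embedding fed to the transformer as `pooled_text_emb`, the second is the penultimate hidden state used for cross-attention. The sketch below is not part of the file above; it mirrors the pipeline's own encoding path to precompute both pairs and then passes them in place of `prompt`, assuming the `amused/amused-512` checkpoint from the example docstring and a CUDA device.

```python
import torch
from diffusers import AmusedPipeline

pipe = AmusedPipeline.from_pretrained("amused/amused-512", variant="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

def encode(texts):
    # Same tokenization/encoding the pipeline performs internally when given `prompt`.
    input_ids = pipe.tokenizer(
        texts,
        return_tensors="pt",
        padding="max_length",
        truncation=True,
        max_length=pipe.tokenizer.model_max_length,
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        out = pipe.text_encoder(input_ids, return_dict=True, output_hidden_states=True)
    # pooled/projected embedding + penultimate hidden states
    return out.text_embeds, out.hidden_states[-2]

prompt_embeds, encoder_hidden_states = encode(["a photo of an astronaut riding a horse on mars"])
negative_prompt_embeds, negative_encoder_hidden_states = encode([""])

image = pipe(
    prompt_embeds=prompt_embeds,
    encoder_hidden_states=encoder_hidden_states,
    negative_prompt_embeds=negative_prompt_embeds,
    negative_encoder_hidden_states=negative_encoder_hidden_states,
).images[0]
image.save("astronaut.png")
```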
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Optional, Tuple, Union import torch import torch.utils.checkpoint from torch import nn from transformers import BertTokenizer from transformers.activations import QuickGELUActivation as QuickGELU from transformers.modeling_outputs import ( BaseModelOutputWithPastAndCrossAttentions, BaseModelOutputWithPooling, BaseModelOutputWithPoolingAndCrossAttentions, ) from transformers.models.blip_2.configuration_blip_2 import Blip2Config, Blip2VisionConfig from transformers.models.blip_2.modeling_blip_2 import ( Blip2Encoder, Blip2PreTrainedModel, Blip2QFormerAttention, Blip2QFormerIntermediate, Blip2QFormerOutput, ) from transformers.pytorch_utils import apply_chunking_to_forward from transformers.utils import ( logging, replace_return_docstrings, ) logger = logging.get_logger(__name__) # There is an implementation of Blip2 in `transformers` : https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip_2/modeling_blip_2.py. # But it doesn't support getting multimodal embeddings. So, this module can be # replaced with a future `transformers` version supports that. class Blip2TextEmbeddings(nn.Module): """Construct the embeddings from word and position embeddings.""" def __init__(self, config): super().__init__() self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load # any TensorFlow checkpoint file self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout_prob) # position_ids (1, len position emb) is contiguous in memory and exported when serialized self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") self.config = config def forward( self, input_ids=None, position_ids=None, query_embeds=None, past_key_values_length=0, ): if input_ids is not None: seq_length = input_ids.size()[1] else: seq_length = 0 if position_ids is None: position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length].clone() if input_ids is not None: embeddings = self.word_embeddings(input_ids) if self.position_embedding_type == "absolute": position_embeddings = self.position_embeddings(position_ids) embeddings = embeddings + position_embeddings if query_embeds is not None: batch_size = embeddings.shape[0] # repeat the query embeddings for batch size query_embeds = query_embeds.repeat(batch_size, 1, 1) embeddings = torch.cat((query_embeds, embeddings), dim=1) else: embeddings = query_embeds embeddings = embeddings.to(query_embeds.dtype) embeddings = self.LayerNorm(embeddings) embeddings = self.dropout(embeddings) return 
embeddings # Copy-pasted from transformers.models.blip.modeling_blip.BlipVisionEmbeddings with Blip->Blip2 class Blip2VisionEmbeddings(nn.Module): def __init__(self, config: Blip2VisionConfig): super().__init__() self.config = config self.embed_dim = config.hidden_size self.image_size = config.image_size self.patch_size = config.patch_size self.class_embedding = nn.Parameter(torch.randn(1, 1, self.embed_dim)) self.patch_embedding = nn.Conv2d( in_channels=3, out_channels=self.embed_dim, kernel_size=self.patch_size, stride=self.patch_size, bias=False ) self.num_patches = (self.image_size // self.patch_size) ** 2 self.num_positions = self.num_patches + 1 self.position_embedding = nn.Parameter(torch.randn(1, self.num_positions, self.embed_dim)) def forward(self, pixel_values: torch.Tensor) -> torch.Tensor: batch_size = pixel_values.shape[0] target_dtype = self.patch_embedding.weight.dtype patch_embeds = self.patch_embedding(pixel_values.to(dtype=target_dtype)) # shape = [*, width, grid, grid] patch_embeds = patch_embeds.flatten(2).transpose(1, 2) class_embeds = self.class_embedding.expand(batch_size, 1, -1).to(target_dtype) embeddings = torch.cat([class_embeds, patch_embeds], dim=1) embeddings = embeddings + self.position_embedding[:, : embeddings.size(1), :].to(target_dtype) return embeddings # The Qformer encoder, which takes the visual embeddings, and the text input, to get multimodal embeddings class Blip2QFormerEncoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.layer = nn.ModuleList( [Blip2QFormerLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)] ) self.gradient_checkpointing = False def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=False, output_hidden_states=False, return_dict=True, query_length=0, ): all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions else None next_decoder_cache = () if use_cache else None for i in range(self.config.num_hidden_layers): layer_module = self.layer[i] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_head_mask = head_mask[i] if head_mask is not None else None past_key_value = past_key_values[i] if past_key_values is not None else None if getattr(self.config, "gradient_checkpointing", False) and self.training: if use_cache: logger.warning( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
) use_cache = False def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, past_key_value, output_attentions, query_length) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(layer_module), hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, ) else: layer_outputs = layer_module( hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions, query_length, ) hidden_states = layer_outputs[0] if use_cache: next_decoder_cache += (layer_outputs[-1],) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) if layer_module.has_cross_attention: all_cross_attentions = all_cross_attentions + (layer_outputs[2],) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [ hidden_states, next_decoder_cache, all_hidden_states, all_self_attentions, all_cross_attentions, ] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=next_decoder_cache, hidden_states=all_hidden_states, attentions=all_self_attentions, cross_attentions=all_cross_attentions, ) # The layers making up the Qformer encoder class Blip2QFormerLayer(nn.Module): def __init__(self, config, layer_idx): super().__init__() self.chunk_size_feed_forward = config.chunk_size_feed_forward self.seq_len_dim = 1 self.attention = Blip2QFormerAttention(config) self.layer_idx = layer_idx if layer_idx % config.cross_attention_frequency == 0: self.crossattention = Blip2QFormerAttention(config, is_cross_attention=True) self.has_cross_attention = True else: self.has_cross_attention = False self.intermediate = Blip2QFormerIntermediate(config) self.intermediate_query = Blip2QFormerIntermediate(config) self.output_query = Blip2QFormerOutput(config) self.output = Blip2QFormerOutput(config) def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_value=None, output_attentions=False, query_length=0, ): # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None self_attention_outputs = self.attention( hidden_states, attention_mask, head_mask, output_attentions=output_attentions, past_key_value=self_attn_past_key_value, ) attention_output = self_attention_outputs[0] outputs = self_attention_outputs[1:-1] present_key_value = self_attention_outputs[-1] if query_length > 0: query_attention_output = attention_output[:, :query_length, :] if self.has_cross_attention: if encoder_hidden_states is None: raise ValueError("encoder_hidden_states must be given for cross-attention layers") cross_attention_outputs = self.crossattention( query_attention_output, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions=output_attentions, ) query_attention_output = cross_attention_outputs[0] # add cross attentions if we output attention weights outputs = outputs + cross_attention_outputs[1:-1] layer_output = apply_chunking_to_forward( self.feed_forward_chunk_query, self.chunk_size_feed_forward, self.seq_len_dim, query_attention_output, ) if attention_output.shape[1] > query_length: layer_output_text = apply_chunking_to_forward( self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output[:, 
query_length:, :], ) layer_output = torch.cat([layer_output, layer_output_text], dim=1) else: layer_output = apply_chunking_to_forward( self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output, ) outputs = (layer_output,) + outputs outputs = outputs + (present_key_value,) return outputs def feed_forward_chunk(self, attention_output): intermediate_output = self.intermediate(attention_output) layer_output = self.output(intermediate_output, attention_output) return layer_output def feed_forward_chunk_query(self, attention_output): intermediate_output = self.intermediate_query(attention_output) layer_output = self.output_query(intermediate_output, attention_output) return layer_output # ProjLayer used to project the multimodal Blip2 embeddings to be used in the text encoder class ProjLayer(nn.Module): def __init__(self, in_dim, out_dim, hidden_dim, drop_p=0.1, eps=1e-12): super().__init__() # Dense1 -> Act -> Dense2 -> Drop -> Res -> Norm self.dense1 = nn.Linear(in_dim, hidden_dim) self.act_fn = QuickGELU() self.dense2 = nn.Linear(hidden_dim, out_dim) self.dropout = nn.Dropout(drop_p) self.LayerNorm = nn.LayerNorm(out_dim, eps=eps) def forward(self, x): x_in = x x = self.LayerNorm(x) x = self.dropout(self.dense2(self.act_fn(self.dense1(x)))) + x_in return x # Copy-pasted from transformers.models.blip.modeling_blip.BlipVisionModel with Blip->Blip2, BLIP->BLIP_2 class Blip2VisionModel(Blip2PreTrainedModel): main_input_name = "pixel_values" config_class = Blip2VisionConfig def __init__(self, config: Blip2VisionConfig): super().__init__(config) self.config = config embed_dim = config.hidden_size self.embeddings = Blip2VisionEmbeddings(config) self.pre_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) self.encoder = Blip2Encoder(config) self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) self.post_init() @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=Blip2VisionConfig) def forward( self, pixel_values: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPooling]: r""" Returns: """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if pixel_values is None: raise ValueError("You have to specify pixel_values") hidden_states = self.embeddings(pixel_values) hidden_states = self.pre_layernorm(hidden_states) encoder_outputs = self.encoder( inputs_embeds=hidden_states, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = encoder_outputs[0] last_hidden_state = self.post_layernorm(last_hidden_state) pooled_output = last_hidden_state[:, 0, :] pooled_output = self.post_layernorm(pooled_output) if not return_dict: return (last_hidden_state, pooled_output) + encoder_outputs[1:] return BaseModelOutputWithPooling( last_hidden_state=last_hidden_state, pooler_output=pooled_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) def get_input_embeddings(self): return self.embeddings # Qformer model, used to get multimodal embeddings from the text and image inputs class Blip2QFormerModel(Blip2PreTrainedModel): """ 
Querying Transformer (Q-Former), used in BLIP-2. """ def __init__(self, config: Blip2Config): super().__init__(config) self.config = config self.embeddings = Blip2TextEmbeddings(config.qformer_config) self.visual_encoder = Blip2VisionModel(config.vision_config) self.query_tokens = nn.Parameter(torch.zeros(1, config.num_query_tokens, config.qformer_config.hidden_size)) if not hasattr(config, "tokenizer") or config.tokenizer is None: self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", truncation_side="right") else: self.tokenizer = BertTokenizer.from_pretrained(config.tokenizer, truncation_side="right") self.tokenizer.add_special_tokens({"bos_token": "[DEC]"}) self.proj_layer = ProjLayer( in_dim=config.qformer_config.hidden_size, out_dim=config.qformer_config.hidden_size, hidden_dim=config.qformer_config.hidden_size * 4, drop_p=0.1, eps=1e-12, ) self.encoder = Blip2QFormerEncoder(config.qformer_config) self.post_init() def get_input_embeddings(self): return self.embeddings.word_embeddings def set_input_embeddings(self, value): self.embeddings.word_embeddings = value def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads) def get_extended_attention_mask( self, attention_mask: torch.Tensor, input_shape: Tuple[int], device: torch.device, has_query: bool = False, ) -> torch.Tensor: """ Makes broadcastable attention and causal masks so that future and masked tokens are ignored. Arguments: attention_mask (`torch.Tensor`): Mask with ones indicating tokens to attend to, zeros for tokens to ignore. input_shape (`Tuple[int]`): The shape of the input to the model. device (`torch.device`): The device of the input to the model. Returns: `torch.Tensor` The extended attention mask, with a the same dtype as `attention_mask.dtype`. """ # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] # ourselves in which case we just need to make it broadcastable to all heads. if attention_mask.dim() == 3: extended_attention_mask = attention_mask[:, None, :, :] elif attention_mask.dim() == 2: # Provided a padding mask of dimensions [batch_size, seq_length] # - the model is an encoder, so make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length] extended_attention_mask = attention_mask[:, None, None, :] else: raise ValueError( "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format( input_shape, attention_mask.shape ) ) # Since attention_mask is 1.0 for positions we want to attend and 0.0 for # masked positions, this operation will create a tensor which is 0.0 for # positions we want to attend and -10000.0 for masked positions. # Since we are adding it to the raw scores before the softmax, this is # effectively the same as removing these entirely. 
extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 return extended_attention_mask def forward( self, text_input=None, image_input=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" encoder_hidden_states (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, `optional`): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, `optional`): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. past_key_values (`tuple(tuple(torch.Tensor))` of length `config.n_layers` with each tuple having 4 tensors of: shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. use_cache (`bool`, `optional`): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). """ text = self.tokenizer(text_input, return_tensors="pt", padding=True) text = text.to(self.device) input_ids = text.input_ids batch_size = input_ids.shape[0] query_atts = torch.ones((batch_size, self.query_tokens.size()[1]), dtype=torch.long).to(self.device) attention_mask = torch.cat([query_atts, text.attention_mask], dim=1) output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict # past_key_values_length past_key_values_length = ( past_key_values[0][0].shape[2] - self.config.query_length if past_key_values is not None else 0 ) query_length = self.query_tokens.shape[1] embedding_output = self.embeddings( input_ids=input_ids, query_embeds=self.query_tokens, past_key_values_length=past_key_values_length, ) # embedding_output = self.layernorm(query_embeds) # embedding_output = self.dropout(embedding_output) input_shape = embedding_output.size()[:-1] batch_size, seq_length = input_shape device = embedding_output.device image_embeds_frozen = self.visual_encoder(image_input).last_hidden_state # image_embeds_frozen = torch.ones_like(image_embeds_frozen) encoder_hidden_states = image_embeds_frozen if attention_mask is None: attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] # ourselves in which case we just need to make it broadcastable to all heads. 
extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape, device) # If a 2D or 3D attention mask is provided for the cross-attention # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] if encoder_hidden_states is not None: if isinstance(encoder_hidden_states, list): encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[0].size() else: encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) if isinstance(encoder_attention_mask, list): encoder_extended_attention_mask = [self.invert_attention_mask(mask) for mask in encoder_attention_mask] elif encoder_attention_mask is None: encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: encoder_extended_attention_mask = None # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] head_mask = self.get_head_mask(head_mask, self.config.qformer_config.num_hidden_layers) encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, query_length=query_length, ) sequence_output = encoder_outputs[0] pooled_output = sequence_output[:, 0, :] if not return_dict: return self.proj_layer(sequence_output[:, :query_length, :]) return BaseModelOutputWithPoolingAndCrossAttentions( last_hidden_state=sequence_output, pooler_output=pooled_output, past_key_values=encoder_outputs.past_key_values, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, cross_attentions=encoder_outputs.cross_attentions, )
diffusers/src/diffusers/pipelines/blip_diffusion/modeling_blip2.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/blip_diffusion/modeling_blip2.py", "repo_id": "diffusers", "token_count": 12070 }
229
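As the comment near the top of the file says, this module exists because the `transformers` BLIP-2 implementation does not expose multimodal (image + text) embeddings; in practice it is reached as the `qformer` component of the Blip Diffusion pipeline (for example the `Salesforce/blipdiffusion` checkpoint). The sketch below is not part of the file and only exercises the forward contract, `forward(text_input=..., image_input=...)`, with a deliberately tiny, randomly initialized `Blip2Config`; the small config values are illustrative assumptions, and constructing the module still downloads the `bert-base-uncased` tokenizer.

```python
import torch
from transformers import Blip2Config

from diffusers.pipelines.blip_diffusion.modeling_blip2 import Blip2QFormerModel

# Tiny, randomly initialized config so the forward pass runs quickly on CPU.
# `encoder_hidden_size` must match the vision hidden size for cross-attention.
config = Blip2Config(
    vision_config={"hidden_size": 64, "intermediate_size": 128, "num_hidden_layers": 2,
                   "num_attention_heads": 4, "image_size": 224, "patch_size": 16},
    qformer_config={"hidden_size": 64, "intermediate_size": 128, "num_hidden_layers": 2,
                    "num_attention_heads": 4, "encoder_hidden_size": 64, "cross_attention_frequency": 1},
    num_query_tokens=8,
)
qformer = Blip2QFormerModel(config).eval()  # __init__ fetches the bert-base-uncased tokenizer

pixel_values = torch.randn(1, 3, 224, 224)  # already-preprocessed image for the frozen vision encoder
with torch.no_grad():
    # return_dict=False returns the projected multimodal query embeddings directly
    embeds = qformer(text_input=["a dog"], image_input=pixel_values, return_dict=False)

print(embeds.shape)  # (1, num_query_tokens, qformer hidden_size) -> (1, 8, 64)
```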
import numpy as np import torch import torch.nn as nn from transformers import CLIPConfig, CLIPVisionModelWithProjection, PreTrainedModel from ...utils import logging logger = logging.get_logger(__name__) class IFSafetyChecker(PreTrainedModel): config_class = CLIPConfig _no_split_modules = ["CLIPEncoderLayer"] def __init__(self, config: CLIPConfig): super().__init__(config) self.vision_model = CLIPVisionModelWithProjection(config.vision_config) self.p_head = nn.Linear(config.vision_config.projection_dim, 1) self.w_head = nn.Linear(config.vision_config.projection_dim, 1) @torch.no_grad() def forward(self, clip_input, images, p_threshold=0.5, w_threshold=0.5): image_embeds = self.vision_model(clip_input)[0] nsfw_detected = self.p_head(image_embeds) nsfw_detected = nsfw_detected.flatten() nsfw_detected = nsfw_detected > p_threshold nsfw_detected = nsfw_detected.tolist() if any(nsfw_detected): logger.warning( "Potential NSFW content was detected in one or more images. A black image will be returned instead." " Try again with a different prompt and/or seed." ) for idx, nsfw_detected_ in enumerate(nsfw_detected): if nsfw_detected_: images[idx] = np.zeros(images[idx].shape) watermark_detected = self.w_head(image_embeds) watermark_detected = watermark_detected.flatten() watermark_detected = watermark_detected > w_threshold watermark_detected = watermark_detected.tolist() if any(watermark_detected): logger.warning( "Potential watermarked content was detected in one or more images. A black image will be returned instead." " Try again with a different prompt and/or seed." ) for idx, watermark_detected_ in enumerate(watermark_detected): if watermark_detected_: images[idx] = np.zeros(images[idx].shape) return images, nsfw_detected, watermark_detected
diffusers/src/diffusers/pipelines/deepfloyd_if/safety_checker.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/deepfloyd_if/safety_checker.py", "repo_id": "diffusers", "token_count": 913 }
230
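A short, hedged sketch of how this checker is driven (not part of the file above): the IF pipelines pass the generated images as an HWC numpy batch together with CLIP-preprocessed pixel values, and any image whose NSFW or watermark score exceeds the threshold is replaced with zeros. The snippet uses a randomly initialized `CLIPConfig`, so the flags are meaningless; a real setup loads the weights from an IF checkpoint, typically with `IFSafetyChecker.from_pretrained(..., subfolder="safety_checker")`.

```python
import numpy as np
import torch
from transformers import CLIPConfig

from diffusers.pipelines.deepfloyd_if.safety_checker import IFSafetyChecker

# Randomly initialized checker: fine for exercising the interface, useless for real filtering.
checker = IFSafetyChecker(CLIPConfig())

images = np.random.rand(2, 64, 64, 3).astype(np.float32)  # generated images as an HWC numpy batch
clip_input = torch.randn(2, 3, 224, 224)                  # CLIP-preprocessed pixel values

images, nsfw_detected, watermark_detected = checker(
    clip_input=clip_input, images=images, p_threshold=0.5, w_threshold=0.5
)
print(nsfw_detected, watermark_detected)  # per-image boolean flags
```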
# Copyright 2024 Pix2Pix Zero Authors and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect from dataclasses import dataclass from typing import Any, Callable, Dict, List, Optional, Union import numpy as np import PIL.Image import torch import torch.nn.functional as F from transformers import ( BlipForConditionalGeneration, BlipProcessor, CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, ) from ....image_processor import PipelineImageInput, VaeImageProcessor from ....loaders import LoraLoaderMixin, TextualInversionLoaderMixin from ....models import AutoencoderKL, UNet2DConditionModel from ....models.attention_processor import Attention from ....models.lora import adjust_lora_scale_text_encoder from ....schedulers import DDIMScheduler, DDPMScheduler, EulerAncestralDiscreteScheduler, LMSDiscreteScheduler from ....schedulers.scheduling_ddim_inverse import DDIMInverseScheduler from ....utils import ( PIL_INTERPOLATION, USE_PEFT_BACKEND, BaseOutput, deprecate, logging, replace_example_docstring, scale_lora_layers, unscale_lora_layers, ) from ....utils.torch_utils import randn_tensor from ...pipeline_utils import DiffusionPipeline, StableDiffusionMixin from ...stable_diffusion.pipeline_output import StableDiffusionPipelineOutput from ...stable_diffusion.safety_checker import StableDiffusionSafetyChecker logger = logging.get_logger(__name__) # pylint: disable=invalid-name @dataclass class Pix2PixInversionPipelineOutput(BaseOutput, TextualInversionLoaderMixin): """ Output class for Stable Diffusion pipelines. Args: latents (`torch.Tensor`) inverted latents tensor images (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline. """ latents: torch.Tensor images: Union[List[PIL.Image.Image], np.ndarray] EXAMPLE_DOC_STRING = """ Examples: ```py >>> import requests >>> import torch >>> from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline >>> def download(embedding_url, local_filepath): ... r = requests.get(embedding_url) ... with open(local_filepath, "wb") as f: ... f.write(r.content) >>> model_ckpt = "CompVis/stable-diffusion-v1-4" >>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16) >>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) >>> pipeline.to("cuda") >>> prompt = "a high resolution painting of a cat in the style of van gough" >>> source_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/cat.pt" >>> target_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/dog.pt" >>> for url in [source_emb_url, target_emb_url]: ... download(url, url.split("/")[-1]) >>> src_embeds = torch.load(source_emb_url.split("/")[-1]) >>> target_embeds = torch.load(target_emb_url.split("/")[-1]) >>> images = pipeline( ... prompt, ... source_embeds=src_embeds, ... 
target_embeds=target_embeds, ... num_inference_steps=50, ... cross_attention_guidance_amount=0.15, ... ).images >>> images[0].save("edited_image_dog.png") ``` """ EXAMPLE_INVERT_DOC_STRING = """ Examples: ```py >>> import torch >>> from transformers import BlipForConditionalGeneration, BlipProcessor >>> from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline >>> import requests >>> from PIL import Image >>> captioner_id = "Salesforce/blip-image-captioning-base" >>> processor = BlipProcessor.from_pretrained(captioner_id) >>> model = BlipForConditionalGeneration.from_pretrained( ... captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True ... ) >>> sd_model_ckpt = "CompVis/stable-diffusion-v1-4" >>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( ... sd_model_ckpt, ... caption_generator=model, ... caption_processor=processor, ... torch_dtype=torch.float16, ... safety_checker=None, ... ) >>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) >>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) >>> pipeline.enable_model_cpu_offload() >>> img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" >>> raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512)) >>> # generate caption >>> caption = pipeline.generate_caption(raw_image) >>> # "a photography of a cat with flowers and dai dai daie - daie - daie kasaii" >>> inv_latents = pipeline.invert(caption, image=raw_image).latents >>> # we need to generate source and target embeds >>> source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] >>> target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] >>> source_embeds = pipeline.get_embeds(source_prompts) >>> target_embeds = pipeline.get_embeds(target_prompts) >>> # the latents can then be used to edit a real image >>> # when using Stable Diffusion 2 or other models that use v-prediction >>> # set `cross_attention_guidance_amount` to 0.01 or less to avoid input latent gradient explosion >>> image = pipeline( ... caption, ... source_embeds=source_embeds, ... target_embeds=target_embeds, ... num_inference_steps=50, ... cross_attention_guidance_amount=0.15, ... generator=generator, ... latents=inv_latents, ... negative_prompt=caption, ... ).images[0] >>> image.save("edited_image.png") ``` """ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.preprocess def preprocess(image): deprecation_message = "The preprocess method is deprecated and will be removed in diffusers 1.0.0. Please use VaeImageProcessor.preprocess(...) 
instead" deprecate("preprocess", "1.0.0", deprecation_message, standard_warn=False) if isinstance(image, torch.Tensor): return image elif isinstance(image, PIL.Image.Image): image = [image] if isinstance(image[0], PIL.Image.Image): w, h = image[0].size w, h = (x - x % 8 for x in (w, h)) # resize to integer multiple of 8 image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image] image = np.concatenate(image, axis=0) image = np.array(image).astype(np.float32) / 255.0 image = image.transpose(0, 3, 1, 2) image = 2.0 * image - 1.0 image = torch.from_numpy(image) elif isinstance(image[0], torch.Tensor): image = torch.cat(image, dim=0) return image def prepare_unet(unet: UNet2DConditionModel): """Modifies the UNet (`unet`) to perform Pix2Pix Zero optimizations.""" pix2pix_zero_attn_procs = {} for name in unet.attn_processors.keys(): module_name = name.replace(".processor", "") module = unet.get_submodule(module_name) if "attn2" in name: pix2pix_zero_attn_procs[name] = Pix2PixZeroAttnProcessor(is_pix2pix_zero=True) module.requires_grad_(True) else: pix2pix_zero_attn_procs[name] = Pix2PixZeroAttnProcessor(is_pix2pix_zero=False) module.requires_grad_(False) unet.set_attn_processor(pix2pix_zero_attn_procs) return unet class Pix2PixZeroL2Loss: def __init__(self): self.loss = 0.0 def compute_loss(self, predictions, targets): self.loss += ((predictions - targets) ** 2).sum((1, 2)).mean(0) class Pix2PixZeroAttnProcessor: """An attention processor class to store the attention weights. In Pix2Pix Zero, it happens during computations in the cross-attention blocks.""" def __init__(self, is_pix2pix_zero=False): self.is_pix2pix_zero = is_pix2pix_zero if self.is_pix2pix_zero: self.reference_cross_attn_map = {} def __call__( self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None, timestep=None, loss=None, ): batch_size, sequence_length, _ = hidden_states.shape attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) query = attn.to_q(hidden_states) if encoder_hidden_states is None: encoder_hidden_states = hidden_states elif attn.norm_cross: encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) key = attn.to_k(encoder_hidden_states) value = attn.to_v(encoder_hidden_states) query = attn.head_to_batch_dim(query) key = attn.head_to_batch_dim(key) value = attn.head_to_batch_dim(value) attention_probs = attn.get_attention_scores(query, key, attention_mask) if self.is_pix2pix_zero and timestep is not None: # new bookkeeping to save the attention weights. if loss is None: self.reference_cross_attn_map[timestep.item()] = attention_probs.detach().cpu() # compute loss elif loss is not None: prev_attn_probs = self.reference_cross_attn_map.pop(timestep.item()) loss.compute_loss(attention_probs, prev_attn_probs.to(attention_probs.device)) hidden_states = torch.bmm(attention_probs, value) hidden_states = attn.batch_to_head_dim(hidden_states) # linear proj hidden_states = attn.to_out[0](hidden_states) # dropout hidden_states = attn.to_out[1](hidden_states) return hidden_states class StableDiffusionPix2PixZeroPipeline(DiffusionPipeline, StableDiffusionMixin): r""" Pipeline for pixel-level image editing using Pix2Pix Zero. Based on Stable Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], or [`DDPMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. feature_extractor ([`CLIPImageProcessor`]): Model that extracts features from generated images to be used as inputs for the `safety_checker`. requires_safety_checker (bool): Whether the pipeline requires a safety checker. We recommend setting it to True if you're using the pipeline publicly. """ model_cpu_offload_seq = "text_encoder->unet->vae" _optional_components = [ "safety_checker", "feature_extractor", "caption_generator", "caption_processor", "inverse_scheduler", ] _exclude_from_cpu_offload = ["safety_checker"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: Union[DDPMScheduler, DDIMScheduler, EulerAncestralDiscreteScheduler, LMSDiscreteScheduler], feature_extractor: CLIPImageProcessor, safety_checker: StableDiffusionSafetyChecker, inverse_scheduler: DDIMInverseScheduler, caption_generator: BlipForConditionalGeneration, caption_processor: BlipProcessor, requires_safety_checker: bool = True, ): super().__init__() if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, caption_processor=caption_processor, caption_generator=caption_generator, inverse_scheduler=inverse_scheduler, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) self.register_to_config(requires_safety_checker=requires_safety_checker) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt def _encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, **kwargs, ): deprecation_message = "`_encode_prompt()` is deprecated and it will be removed in a future version. Use `encode_prompt()` instead. Also, be aware that the output format changed from a concatenated tensor to a tuple." deprecate("_encode_prompt()", "1.0.0", deprecation_message, standard_warn=False) prompt_embeds_tuple = self.encode_prompt( prompt=prompt, device=device, num_images_per_prompt=num_images_per_prompt, do_classifier_free_guidance=do_classifier_free_guidance, negative_prompt=negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=lora_scale, **kwargs, ) # concatenate for backwards comp prompt_embeds = torch.cat([prompt_embeds_tuple[1], prompt_embeds_tuple[0]]) return prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_prompt def encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, clip_skip: Optional[int] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. 
""" # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, LoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) else: scale_lora_layers(self.text_encoder, lora_scale) if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, self.tokenizer) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = self.tokenizer.batch_decode( untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] ) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None if clip_skip is None: prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask) prompt_embeds = prompt_embeds[0] else: prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, output_hidden_states=True ) # Access the `hidden_states` first, that contains a tuple of # all the hidden states from the encoder layers. Then index into # the tuple to access the hidden states from the desired layer. prompt_embeds = prompt_embeds[-1][-(clip_skip + 1)] # We also need to apply the final LayerNorm here to not mess with the # representations. The `last_hidden_states` that we typically use for # obtaining the final prompt representations passes through the LayerNorm # layer. prompt_embeds = self.text_encoder.text_model.final_layer_norm(prompt_embeds) if self.text_encoder is not None: prompt_embeds_dtype = self.text_encoder.dtype elif self.unet is not None: prompt_embeds_dtype = self.unet.dtype else: prompt_embeds_dtype = prompt_embeds.dtype prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance and negative_prompt_embeds is None: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." 
) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) max_length = prompt_embeds.shape[1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) if isinstance(self, LoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder, lora_scale) return prompt_embeds, negative_prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker def run_safety_checker(self, image, device, dtype): if self.safety_checker is None: has_nsfw_concept = None else: if torch.is_tensor(image): feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") else: feature_extractor_input = self.image_processor.numpy_to_pil(image) safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(dtype) ) return image, has_nsfw_concept # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents def decode_latents(self, latents): deprecation_message = "The decode_latents method is deprecated and will be removed in 1.0.0. Please use VaeImageProcessor.postprocess(...) instead" deprecate("decode_latents", "1.0.0", deprecation_message, standard_warn=False) latents = 1 / self.vae.config.scaling_factor * latents image = self.vae.decode(latents, return_dict=False)[0] image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() return image # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
# eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, prompt, source_embeds, target_embeds, callback_steps, prompt_embeds=None, ): if (callback_steps is None) or ( callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) ): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) if source_embeds is None and target_embeds is None: raise ValueError("`source_embeds` and `target_embeds` cannot be undefined.") if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents @torch.no_grad() def generate_caption(self, images): """Generates caption for a given image.""" text = "a photography of" prev_device = self.caption_generator.device device = self._execution_device inputs = self.caption_processor(images, text, return_tensors="pt").to( device=device, dtype=self.caption_generator.dtype ) self.caption_generator.to(device) outputs = self.caption_generator.generate(**inputs, max_new_tokens=128) # offload caption generator self.caption_generator.to(prev_device) caption = self.caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] return caption def construct_direction(self, embs_source: torch.Tensor, embs_target: torch.Tensor): """Constructs the edit direction to steer the image generation process semantically.""" return (embs_target.mean(0) - embs_source.mean(0)).unsqueeze(0) @torch.no_grad() def get_embeds(self, prompt: List[str], batch_size: int = 16) -> torch.Tensor: num_prompts = len(prompt) embeds = [] for i in range(0, num_prompts, batch_size): prompt_slice = prompt[i : i + batch_size] input_ids = self.tokenizer( prompt_slice, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ).input_ids input_ids = input_ids.to(self.text_encoder.device) embeds.append(self.text_encoder(input_ids)[0]) return torch.cat(embeds, dim=0).mean(0)[None] def prepare_image_latents(self, image, batch_size, dtype, device, generator=None): if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)): raise ValueError( f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}" ) image = image.to(device=device, dtype=dtype) if image.shape[1] == 4: latents = image else: if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." ) if isinstance(generator, list): latents = [ self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) ] latents = torch.cat(latents, dim=0) else: latents = self.vae.encode(image).latent_dist.sample(generator) latents = self.vae.config.scaling_factor * latents if batch_size != latents.shape[0]: if batch_size % latents.shape[0] == 0: # expand image_latents for batch_size deprecation_message = ( f"You have passed {batch_size} text prompts (`prompt`), but only {latents.shape[0]} initial" " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" " your script to pass as many initial images as text prompts to suppress this warning." ) deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) additional_latents_per_image = batch_size // latents.shape[0] latents = torch.cat([latents] * additional_latents_per_image, dim=0) else: raise ValueError( f"Cannot duplicate `image` of batch size {latents.shape[0]} to {batch_size} text prompts." 
) else: latents = torch.cat([latents], dim=0) return latents def get_epsilon(self, model_output: torch.Tensor, sample: torch.Tensor, timestep: int): pred_type = self.inverse_scheduler.config.prediction_type alpha_prod_t = self.inverse_scheduler.alphas_cumprod[timestep] beta_prod_t = 1 - alpha_prod_t if pred_type == "epsilon": return model_output elif pred_type == "sample": return (sample - alpha_prod_t ** (0.5) * model_output) / beta_prod_t ** (0.5) elif pred_type == "v_prediction": return (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample else: raise ValueError( f"prediction_type given as {pred_type} must be one of `epsilon`, `sample`, or `v_prediction`" ) def auto_corr_loss(self, hidden_states, generator=None): reg_loss = 0.0 for i in range(hidden_states.shape[0]): for j in range(hidden_states.shape[1]): noise = hidden_states[i : i + 1, j : j + 1, :, :] while True: roll_amount = torch.randint(noise.shape[2] // 2, (1,), generator=generator).item() reg_loss += (noise * torch.roll(noise, shifts=roll_amount, dims=2)).mean() ** 2 reg_loss += (noise * torch.roll(noise, shifts=roll_amount, dims=3)).mean() ** 2 if noise.shape[2] <= 8: break noise = F.avg_pool2d(noise, kernel_size=2) return reg_loss def kl_divergence(self, hidden_states): mean = hidden_states.mean() var = hidden_states.var() return var + mean**2 - 1 - torch.log(var + 1e-7) @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Optional[Union[str, List[str]]] = None, source_embeds: torch.Tensor = None, target_embeds: torch.Tensor = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, cross_attention_guidance_amount: float = 0.1, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: Optional[int] = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, clip_skip: Optional[int] = None, ): r""" Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. source_embeds (`torch.Tensor`): Source concept embeddings. Generation of the embeddings as per the [original paper](https://arxiv.org/abs/2302.03027). Used in discovering the edit direction. target_embeds (`torch.Tensor`): Target concept embeddings. Generation of the embeddings as per the [original paper](https://arxiv.org/abs/2302.03027). Used in discovering the edit direction. height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The height in pixels of the generated image. width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 7.5): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). 
`guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to [`schedulers.DDIMScheduler`], will be ignored for others. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. cross_attention_guidance_amount (`float`, defaults to 0.1): Amount of guidance needed from the reference cross-attention maps. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function will be called. If not specified, the callback will be called at every step. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. Examples: Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the `safety_checker`. """ # 0. Define the spatial resolutions. 
height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, source_embeds, target_embeds, callback_steps, prompt_embeds, ) # 3. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if cross_attention_kwargs is None: cross_attention_kwargs = {} device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. Encode input prompt prompt_embeds, negative_prompt_embeds = self.encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, clip_skip=clip_skip, ) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes if do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) # 4. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # 5. Generate the inverted noise from the input image or any other image # generated from the input prompt. num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, prompt_embeds.dtype, device, generator, latents, ) latents_init = latents.clone() # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 8. Rejig the UNet so that we can obtain the cross-attenion maps and # use them for guiding the subsequent image generation. self.unet = prepare_unet(self.unet) # 7. Denoising loop where we obtain the cross-attention maps. num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, cross_attention_kwargs={"timestep": t}, ).sample # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) # 8. Compute the edit directions. 
edit_direction = self.construct_direction(source_embeds, target_embeds).to(prompt_embeds.device) # 9. Edit the prompt embeddings as per the edit directions discovered. prompt_embeds_edit = prompt_embeds.clone() prompt_embeds_edit[1:2] += edit_direction # 10. Second denoising loop to generate the edited image. self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps latents = latents_init num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # we want to learn the latent such that it steers the generation # process towards the edited direction, so make the make initial # noise learnable x_in = latent_model_input.detach().clone() x_in.requires_grad = True # optimizer opt = torch.optim.SGD([x_in], lr=cross_attention_guidance_amount) with torch.enable_grad(): # initialize loss loss = Pix2PixZeroL2Loss() # predict the noise residual noise_pred = self.unet( x_in, t, encoder_hidden_states=prompt_embeds_edit.detach(), cross_attention_kwargs={"timestep": t, "loss": loss}, ).sample loss.loss.backward(retain_graph=False) opt.step() # recompute the noise noise_pred = self.unet( x_in.detach(), t, encoder_hidden_states=prompt_embeds_edit, cross_attention_kwargs={"timestep": None}, ).sample latents = x_in.detach().chunk(2)[0] # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if not output_type == "latent": image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) else: image = latents has_nsfw_concept = None if has_nsfw_concept is None: do_denormalize = [True] * image.shape[0] else: do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) # Offload all models self.maybe_free_model_hooks() if not return_dict: return (image, has_nsfw_concept) return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) @torch.no_grad() @replace_example_docstring(EXAMPLE_INVERT_DOC_STRING) def invert( self, prompt: Optional[str] = None, image: PipelineImageInput = None, num_inference_steps: int = 50, guidance_scale: float = 1, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, cross_attention_guidance_amount: float = 0.1, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: Optional[int] = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, lambda_auto_corr: float = 20.0, lambda_kl: float = 20.0, num_reg_steps: int = 5, num_auto_corr_rolls: 
int = 5, ): r""" Function used to generate inverted latents given a prompt and image. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. image (`torch.Tensor` `np.ndarray`, `PIL.Image.Image`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`): `Image`, or tensor representing an image batch which will be used for conditioning. Can also accept image latents as `image`, if passing latents directly, it will not be encoded again. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 1): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. cross_attention_guidance_amount (`float`, defaults to 0.1): Amount of guidance needed from the reference cross-attention maps. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function will be called. If not specified, the callback will be called at every step. lambda_auto_corr (`float`, *optional*, defaults to 20.0): Lambda parameter to control auto correction lambda_kl (`float`, *optional*, defaults to 20.0): Lambda parameter to control Kullback–Leibler divergence output num_reg_steps (`int`, *optional*, defaults to 5): Number of regularization loss steps num_auto_corr_rolls (`int`, *optional*, defaults to 5): Number of auto correction roll steps Examples: Returns: [`~pipelines.stable_diffusion.pipeline_stable_diffusion_pix2pix_zero.Pix2PixInversionPipelineOutput`] or `tuple`: [`~pipelines.stable_diffusion.pipeline_stable_diffusion_pix2pix_zero.Pix2PixInversionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. 
When returning a tuple, the first element is the inverted latents tensor and then second is the corresponding decoded image. """ # 1. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if cross_attention_kwargs is None: cross_attention_kwargs = {} device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. Preprocess image image = self.image_processor.preprocess(image) # 4. Prepare latent variables latents = self.prepare_image_latents(image, batch_size, self.vae.dtype, device, generator) # 5. Encode input prompt num_images_per_prompt = 1 prompt_embeds, negative_prompt_embeds = self.encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, prompt_embeds=prompt_embeds, ) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes if do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) # 4. Prepare timesteps self.inverse_scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.inverse_scheduler.timesteps # 6. Rejig the UNet so that we can obtain the cross-attenion maps and # use them for guiding the subsequent image generation. self.unet = prepare_unet(self.unet) # 7. Denoising loop where we obtain the cross-attention maps. num_warmup_steps = len(timesteps) - num_inference_steps * self.inverse_scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.inverse_scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, cross_attention_kwargs={"timestep": t}, ).sample # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # regularization of the noise prediction with torch.enable_grad(): for _ in range(num_reg_steps): if lambda_auto_corr > 0: for _ in range(num_auto_corr_rolls): var = torch.autograd.Variable(noise_pred.detach().clone(), requires_grad=True) # Derive epsilon from model output before regularizing to IID standard normal var_epsilon = self.get_epsilon(var, latent_model_input.detach(), t) l_ac = self.auto_corr_loss(var_epsilon, generator=generator) l_ac.backward() grad = var.grad.detach() / num_auto_corr_rolls noise_pred = noise_pred - lambda_auto_corr * grad if lambda_kl > 0: var = torch.autograd.Variable(noise_pred.detach().clone(), requires_grad=True) # Derive epsilon from model output before regularizing to IID standard normal var_epsilon = self.get_epsilon(var, latent_model_input.detach(), t) l_kld = self.kl_divergence(var_epsilon) l_kld.backward() grad = var.grad.detach() noise_pred = noise_pred - lambda_kl * grad noise_pred = noise_pred.detach() # compute the previous noisy sample x_t -> x_t-1 
latents = self.inverse_scheduler.step(noise_pred, t, latents).prev_sample # call the callback, if provided if i == len(timesteps) - 1 or ( (i + 1) > num_warmup_steps and (i + 1) % self.inverse_scheduler.order == 0 ): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) inverted_latents = latents.detach().clone() # 8. Post-processing image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] image = self.image_processor.postprocess(image, output_type=output_type) # Offload all models self.maybe_free_model_hooks() if not return_dict: return (inverted_latents, image) return Pix2PixInversionPipelineOutput(latents=inverted_latents, images=image)
diffusers/src/diffusers/pipelines/deprecated/stable_diffusion_variants/pipeline_stable_diffusion_pix2pix_zero.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/deprecated/stable_diffusion_variants/pipeline_stable_diffusion_pix2pix_zero.py", "repo_id": "diffusers", "token_count": 28148 }
231
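The core of the Pix2Pix Zero edit above is the direction computed in `construct_direction`: the source and target prompt embeddings are each averaged and their difference is added to the conditional prompt embeddings during the second denoising loop. A minimal, self-contained sketch of that step, with random tensors standing in for real CLIP embeddings:

```python
import torch

# Stand-ins for the CLIP text embeddings returned by `get_embeds` for the
# source ("cat") and target ("dog") prompt lists: (num_prompts, seq_len, dim).
source_embeds = torch.randn(3, 77, 768)
target_embeds = torch.randn(3, 77, 768)

# Same arithmetic as `construct_direction`: average each set over its prompts
# and keep the difference as a single (1, seq_len, dim) edit direction.
edit_direction = (target_embeds.mean(0) - source_embeds.mean(0)).unsqueeze(0)

# During the edited generation, the direction is added to the conditional
# half of the prompt embeddings (index 1 once CFG concatenates uncond/cond).
prompt_embeds = torch.randn(2, 77, 768)
prompt_embeds_edit = prompt_embeds.clone()
prompt_embeds_edit[1:2] += edit_direction
print(edit_direction.shape)  # torch.Size([1, 77, 768])
```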
from typing import TYPE_CHECKING from ...utils import ( DIFFUSERS_SLOW_IMPORT, OptionalDependencyNotAvailable, _LazyModule, get_objects_from_module, is_torch_available, is_transformers_available, ) _dummy_objects = {} _import_structure = {} try: if not (is_transformers_available() and is_torch_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from ...utils import dummy_torch_and_transformers_objects # noqa F403 _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects)) else: _import_structure["pipeline_kandinsky"] = ["KandinskyPipeline"] _import_structure["pipeline_kandinsky_combined"] = [ "KandinskyCombinedPipeline", "KandinskyImg2ImgCombinedPipeline", "KandinskyInpaintCombinedPipeline", ] _import_structure["pipeline_kandinsky_img2img"] = ["KandinskyImg2ImgPipeline"] _import_structure["pipeline_kandinsky_inpaint"] = ["KandinskyInpaintPipeline"] _import_structure["pipeline_kandinsky_prior"] = ["KandinskyPriorPipeline", "KandinskyPriorPipelineOutput"] _import_structure["text_encoder"] = ["MultilingualCLIP"] if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT: try: if not (is_transformers_available() and is_torch_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from ...utils.dummy_torch_and_transformers_objects import * else: from .pipeline_kandinsky import KandinskyPipeline from .pipeline_kandinsky_combined import ( KandinskyCombinedPipeline, KandinskyImg2ImgCombinedPipeline, KandinskyInpaintCombinedPipeline, ) from .pipeline_kandinsky_img2img import KandinskyImg2ImgPipeline from .pipeline_kandinsky_inpaint import KandinskyInpaintPipeline from .pipeline_kandinsky_prior import KandinskyPriorPipeline, KandinskyPriorPipelineOutput from .text_encoder import MultilingualCLIP else: import sys sys.modules[__name__] = _LazyModule( __name__, globals()["__file__"], _import_structure, module_spec=__spec__, ) for name, value in _dummy_objects.items(): setattr(sys.modules[__name__], name, value)
diffusers/src/diffusers/pipelines/kandinsky/__init__.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/kandinsky/__init__.py", "repo_id": "diffusers", "token_count": 951 }
232
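The Kandinsky `__init__.py` above defers importing the heavy pipeline modules until a name is actually requested, via diffusers' `_LazyModule` helper. A rough equivalent of that pattern, sketched with plain PEP 562 module `__getattr__` and a hypothetical `pipeline_kandinsky` submodule rather than the real helper:

```python
# lazy_pkg/__init__.py -- illustrative only; the submodule and class name are placeholders.
import importlib
from typing import TYPE_CHECKING

_import_structure = {"pipeline_kandinsky": ["KandinskyPipeline"]}

if TYPE_CHECKING:
    # Static type checkers still see the real imports.
    from .pipeline_kandinsky import KandinskyPipeline  # noqa: F401
else:
    def __getattr__(name):
        # Import the submodule only when one of its exported names is accessed.
        for module_name, exported_names in _import_structure.items():
            if name in exported_names:
                module = importlib.import_module(f".{module_name}", __name__)
                return getattr(module, name)
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```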
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import shutil from pathlib import Path from typing import Optional, Union import numpy as np from huggingface_hub import hf_hub_download from huggingface_hub.utils import validate_hf_hub_args from ..utils import ONNX_EXTERNAL_WEIGHTS_NAME, ONNX_WEIGHTS_NAME, is_onnx_available, logging if is_onnx_available(): import onnxruntime as ort logger = logging.get_logger(__name__) ORT_TO_NP_TYPE = { "tensor(bool)": np.bool_, "tensor(int8)": np.int8, "tensor(uint8)": np.uint8, "tensor(int16)": np.int16, "tensor(uint16)": np.uint16, "tensor(int32)": np.int32, "tensor(uint32)": np.uint32, "tensor(int64)": np.int64, "tensor(uint64)": np.uint64, "tensor(float16)": np.float16, "tensor(float)": np.float32, "tensor(double)": np.float64, } class OnnxRuntimeModel: def __init__(self, model=None, **kwargs): logger.info("`diffusers.OnnxRuntimeModel` is experimental and might change in the future.") self.model = model self.model_save_dir = kwargs.get("model_save_dir", None) self.latest_model_name = kwargs.get("latest_model_name", ONNX_WEIGHTS_NAME) def __call__(self, **kwargs): inputs = {k: np.array(v) for k, v in kwargs.items()} return self.model.run(None, inputs) @staticmethod def load_model(path: Union[str, Path], provider=None, sess_options=None): """ Loads an ONNX Inference session with an ExecutionProvider. Default provider is `CPUExecutionProvider` Arguments: path (`str` or `Path`): Directory from which to load provider(`str`, *optional*): Onnxruntime execution provider to use for loading the model, defaults to `CPUExecutionProvider` """ if provider is None: logger.info("No onnxruntime provider specified, using CPUExecutionProvider") provider = "CPUExecutionProvider" return ort.InferenceSession(path, providers=[provider], sess_options=sess_options) def _save_pretrained(self, save_directory: Union[str, Path], file_name: Optional[str] = None, **kwargs): """ Save a model and its configuration file to a directory, so that it can be re-loaded using the [`~optimum.onnxruntime.modeling_ort.ORTModel.from_pretrained`] class method. It will always save the latest_model_name. Arguments: save_directory (`str` or `Path`): Directory where to save the model file. file_name(`str`, *optional*): Overwrites the default model file name from `"model.onnx"` to `file_name`. This allows you to save the model with a different name. 
""" model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME src_path = self.model_save_dir.joinpath(self.latest_model_name) dst_path = Path(save_directory).joinpath(model_file_name) try: shutil.copyfile(src_path, dst_path) except shutil.SameFileError: pass # copy external weights (for models >2GB) src_path = self.model_save_dir.joinpath(ONNX_EXTERNAL_WEIGHTS_NAME) if src_path.exists(): dst_path = Path(save_directory).joinpath(ONNX_EXTERNAL_WEIGHTS_NAME) try: shutil.copyfile(src_path, dst_path) except shutil.SameFileError: pass def save_pretrained( self, save_directory: Union[str, os.PathLike], **kwargs, ): """ Save a model to a directory, so that it can be re-loaded using the [`~OnnxModel.from_pretrained`] class method.: Arguments: save_directory (`str` or `os.PathLike`): Directory to which to save. Will be created if it doesn't exist. """ if os.path.isfile(save_directory): logger.error(f"Provided path ({save_directory}) should be a directory, not a file") return os.makedirs(save_directory, exist_ok=True) # saving model weights/files self._save_pretrained(save_directory, **kwargs) @classmethod @validate_hf_hub_args def _from_pretrained( cls, model_id: Union[str, Path], token: Optional[Union[bool, str, None]] = None, revision: Optional[Union[str, None]] = None, force_download: bool = False, cache_dir: Optional[str] = None, file_name: Optional[str] = None, provider: Optional[str] = None, sess_options: Optional["ort.SessionOptions"] = None, **kwargs, ): """ Load a model from a directory or the HF Hub. Arguments: model_id (`str` or `Path`): Directory from which to load token (`str` or `bool`): Is needed to load models from a private or gated repository revision (`str`): Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id cache_dir (`Union[str, Path]`, *optional*): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. file_name(`str`): Overwrites the default model file name from `"model.onnx"` to `file_name`. This allows you to load different model files from the same repository or directory. provider(`str`): The ONNX runtime provider, e.g. `CPUExecutionProvider` or `CUDAExecutionProvider`. 
kwargs (`Dict`, *optional*): kwargs will be passed to the model during initialization """ model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME # load model from local directory if os.path.isdir(model_id): model = OnnxRuntimeModel.load_model( Path(model_id, model_file_name).as_posix(), provider=provider, sess_options=sess_options ) kwargs["model_save_dir"] = Path(model_id) # load model from hub else: # download model model_cache_path = hf_hub_download( repo_id=model_id, filename=model_file_name, token=token, revision=revision, cache_dir=cache_dir, force_download=force_download, ) kwargs["model_save_dir"] = Path(model_cache_path).parent kwargs["latest_model_name"] = Path(model_cache_path).name model = OnnxRuntimeModel.load_model(model_cache_path, provider=provider, sess_options=sess_options) return cls(model=model, **kwargs) @classmethod @validate_hf_hub_args def from_pretrained( cls, model_id: Union[str, Path], force_download: bool = True, token: Optional[str] = None, cache_dir: Optional[str] = None, **model_kwargs, ): revision = None if len(str(model_id).split("@")) == 2: model_id, revision = model_id.split("@") return cls._from_pretrained( model_id=model_id, revision=revision, cache_dir=cache_dir, force_download=force_download, token=token, **model_kwargs, )
diffusers/src/diffusers/pipelines/onnx_utils.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/onnx_utils.py", "repo_id": "diffusers", "token_count": 3622 }
233
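`OnnxRuntimeModel.__call__` above simply converts its keyword arguments to numpy arrays and forwards them to an `onnxruntime` session. A minimal sketch of that inference path, with `model.onnx` and the input name `sample` as placeholders:

```python
import numpy as np
import onnxruntime as ort

# Load the exported graph on CPU (the default provider used by `load_model`).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Mirror `__call__`: build a name -> numpy array feed and run the session;
# passing None as the first argument returns every model output.
inputs = {"sample": np.random.randn(1, 4, 64, 64).astype(np.float32)}
outputs = session.run(None, inputs)
print([out.shape for out in outputs])
```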
# Copyright 2024 Open AI and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Tuple import numpy as np import torch @dataclass class DifferentiableProjectiveCamera: """ Implements a batch, differentiable, standard pinhole camera """ origin: torch.Tensor # [batch_size x 3] x: torch.Tensor # [batch_size x 3] y: torch.Tensor # [batch_size x 3] z: torch.Tensor # [batch_size x 3] width: int height: int x_fov: float y_fov: float shape: Tuple[int] def __post_init__(self): assert self.x.shape[0] == self.y.shape[0] == self.z.shape[0] == self.origin.shape[0] assert self.x.shape[1] == self.y.shape[1] == self.z.shape[1] == self.origin.shape[1] == 3 assert len(self.x.shape) == len(self.y.shape) == len(self.z.shape) == len(self.origin.shape) == 2 def resolution(self): return torch.from_numpy(np.array([self.width, self.height], dtype=np.float32)) def fov(self): return torch.from_numpy(np.array([self.x_fov, self.y_fov], dtype=np.float32)) def get_image_coords(self) -> torch.Tensor: """ :return: coords of shape (width * height, 2) """ pixel_indices = torch.arange(self.height * self.width) coords = torch.stack( [ pixel_indices % self.width, torch.div(pixel_indices, self.width, rounding_mode="trunc"), ], axis=1, ) return coords @property def camera_rays(self): batch_size, *inner_shape = self.shape inner_batch_size = int(np.prod(inner_shape)) coords = self.get_image_coords() coords = torch.broadcast_to(coords.unsqueeze(0), [batch_size * inner_batch_size, *coords.shape]) rays = self.get_camera_rays(coords) rays = rays.view(batch_size, inner_batch_size * self.height * self.width, 2, 3) return rays def get_camera_rays(self, coords: torch.Tensor) -> torch.Tensor: batch_size, *shape, n_coords = coords.shape assert n_coords == 2 assert batch_size == self.origin.shape[0] flat = coords.view(batch_size, -1, 2) res = self.resolution() fov = self.fov() fracs = (flat.float() / (res - 1)) * 2 - 1 fracs = fracs * torch.tan(fov / 2) fracs = fracs.view(batch_size, -1, 2) directions = ( self.z.view(batch_size, 1, 3) + self.x.view(batch_size, 1, 3) * fracs[:, :, :1] + self.y.view(batch_size, 1, 3) * fracs[:, :, 1:] ) directions = directions / directions.norm(dim=-1, keepdim=True) rays = torch.stack( [ torch.broadcast_to(self.origin.view(batch_size, 1, 3), [batch_size, directions.shape[1], 3]), directions, ], dim=2, ) return rays.view(batch_size, *shape, 2, 3) def resize_image(self, width: int, height: int) -> "DifferentiableProjectiveCamera": """ Creates a new camera for the resized view assuming the aspect ratio does not change. """ assert width * self.height == height * self.width, "The aspect ratio should not change." 
return DifferentiableProjectiveCamera( origin=self.origin, x=self.x, y=self.y, z=self.z, width=width, height=height, x_fov=self.x_fov, y_fov=self.y_fov, ) def create_pan_cameras(size: int) -> DifferentiableProjectiveCamera: origins = [] xs = [] ys = [] zs = [] for theta in np.linspace(0, 2 * np.pi, num=20): z = np.array([np.sin(theta), np.cos(theta), -0.5]) z /= np.sqrt(np.sum(z**2)) origin = -z * 4 x = np.array([np.cos(theta), -np.sin(theta), 0.0]) y = np.cross(z, x) origins.append(origin) xs.append(x) ys.append(y) zs.append(z) return DifferentiableProjectiveCamera( origin=torch.from_numpy(np.stack(origins, axis=0)).float(), x=torch.from_numpy(np.stack(xs, axis=0)).float(), y=torch.from_numpy(np.stack(ys, axis=0)).float(), z=torch.from_numpy(np.stack(zs, axis=0)).float(), width=size, height=size, x_fov=0.7, y_fov=0.7, shape=(1, len(xs)), )
diffusers/src/diffusers/pipelines/shap_e/camera.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/shap_e/camera.py", "repo_id": "diffusers", "token_count": 2274 }
234
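`get_camera_rays` above maps each pixel to a fractional coordinate in [-1, 1], scales it by tan(fov / 2), and combines it with the camera basis vectors before normalising. The same geometry in a few lines of numpy, using a toy 4x4 image and an identity camera basis purely for illustration:

```python
import numpy as np

width, height, fov = 4, 4, 0.7
xs, ys = np.meshgrid(np.arange(width), np.arange(height))
# Pixel indices -> fractional coordinates in [-1, 1], then scale by tan(fov / 2).
fracs = np.stack([xs, ys], axis=-1).reshape(-1, 2) / (np.array([width, height]) - 1.0)
fracs = (fracs * 2 - 1) * np.tan(fov / 2)

# Toy camera basis: x/y span the image plane, z points into the scene.
x_axis, y_axis, z_axis = np.eye(3)
directions = z_axis + fracs[:, :1] * x_axis + fracs[:, 1:] * y_axis
directions /= np.linalg.norm(directions, axis=-1, keepdims=True)
print(directions.shape)  # (16, 3): one unit ray direction per pixel
```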
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Optional, Union import torch from torch import nn from ...configuration_utils import ConfigMixin, register_to_config from ...models.modeling_utils import ModelMixin class StableUnCLIPImageNormalizer(ModelMixin, ConfigMixin): """ This class is used to hold the mean and standard deviation of the CLIP embedder used in stable unCLIP. It is used to normalize the image embeddings before the noise is applied and un-normalize the noised image embeddings. """ @register_to_config def __init__( self, embedding_dim: int = 768, ): super().__init__() self.mean = nn.Parameter(torch.zeros(1, embedding_dim)) self.std = nn.Parameter(torch.ones(1, embedding_dim)) def to( self, torch_device: Optional[Union[str, torch.device]] = None, torch_dtype: Optional[torch.dtype] = None, ): self.mean = nn.Parameter(self.mean.to(torch_device).to(torch_dtype)) self.std = nn.Parameter(self.std.to(torch_device).to(torch_dtype)) return self def scale(self, embeds): embeds = (embeds - self.mean) * 1.0 / self.std return embeds def unscale(self, embeds): embeds = (embeds * self.std) + self.mean return embeds
diffusers/src/diffusers/pipelines/stable_diffusion/stable_unclip_image_normalizer.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/stable_diffusion/stable_unclip_image_normalizer.py", "repo_id": "diffusers", "token_count": 674 }
235
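`StableUnCLIPImageNormalizer.scale`/`unscale` above are a plain standardisation and its inverse, applied to CLIP image embeddings around the noising step. A quick sketch with random statistics standing in for the learned mean and std:

```python
import torch

embedding_dim = 768
mean = torch.randn(1, embedding_dim)
std = torch.rand(1, embedding_dim) + 0.5  # keep std away from zero

image_embeds = torch.randn(4, embedding_dim)
scaled = (image_embeds - mean) / std      # `scale`: standardise before adding noise
restored = scaled * std + mean            # `unscale`: undo it afterwards
print(torch.allclose(restored, image_embeds, atol=1e-5))  # True
```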
from dataclasses import dataclass
from typing import List, Optional, Union

import numpy as np
import PIL.Image

from ...utils import (
    BaseOutput,
)


@dataclass
class StableDiffusionSafePipelineOutput(BaseOutput):
    """
    Output class for Safe Stable Diffusion pipelines.

    Args:
        images (`List[PIL.Image.Image]` or `np.ndarray`)
            List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
            num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline.
        nsfw_content_detected (`List[bool]`)
            List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work"
            (nsfw) content, or `None` if safety checking could not be performed.
        unsafe_images (`List[PIL.Image.Image]` or `np.ndarray`)
            List of denoised PIL images that were flagged by the safety checker and may contain "not-safe-for-work"
            (nsfw) content, or `None` if no safety check was performed or no images were flagged.
        applied_safety_concept (`str`)
            The safety concept that was applied for safety guidance, or `None` if safety guidance was disabled
    """

    images: Union[List[PIL.Image.Image], np.ndarray]
    nsfw_content_detected: Optional[List[bool]]
    unsafe_images: Optional[Union[List[PIL.Image.Image], np.ndarray]]
    applied_safety_concept: Optional[str]
diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_output.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_output.py", "repo_id": "diffusers", "token_count": 527 }
236
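Since `StableDiffusionSafePipelineOutput` is a plain `BaseOutput` dataclass, it can be read by attribute or, for populated fields, like a dict. The sketch below builds one by hand with placeholder images just to show the fields; this is not how the safe pipeline itself constructs its output.

```python
import numpy as np
import PIL.Image

# Internal module path taken from the record's `file_path`; it may move between releases.
from diffusers.pipelines.stable_diffusion_safe.pipeline_output import (
    StableDiffusionSafePipelineOutput,
)

# Placeholder 8x8 black image standing in for a real generation.
blank = PIL.Image.fromarray(np.zeros((8, 8, 3), dtype=np.uint8))

output = StableDiffusionSafePipelineOutput(
    images=[blank],
    nsfw_content_detected=[False],
    unsafe_images=None,
    applied_safety_concept=None,
)

print(output.nsfw_content_detected)  # [False]
print(len(output["images"]))         # 1 -- dict-style access works for populated fields
```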
# Copyright 2024 TencentARC and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect from dataclasses import dataclass from typing import Any, Callable, Dict, List, Optional, Union import numpy as np import PIL.Image import torch from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer from ...image_processor import VaeImageProcessor from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin from ...models import AutoencoderKL, MultiAdapter, T2IAdapter, UNet2DConditionModel from ...models.lora import adjust_lora_scale_text_encoder from ...schedulers import KarrasDiffusionSchedulers from ...utils import ( PIL_INTERPOLATION, USE_PEFT_BACKEND, BaseOutput, deprecate, logging, replace_example_docstring, scale_lora_layers, unscale_lora_layers, ) from ...utils.torch_utils import randn_tensor from ..pipeline_utils import DiffusionPipeline, StableDiffusionMixin from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker @dataclass class StableDiffusionAdapterPipelineOutput(BaseOutput): """ Args: images (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline. nsfw_content_detected (`List[bool]`) List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, or `None` if safety checking could not be performed. """ images: Union[List[PIL.Image.Image], np.ndarray] nsfw_content_detected: Optional[List[bool]] logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> from PIL import Image >>> from diffusers.utils import load_image >>> import torch >>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter >>> image = load_image( ... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" ... ) >>> color_palette = image.resize((8, 8)) >>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) >>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) >>> pipe = StableDiffusionAdapterPipeline.from_pretrained( ... "CompVis/stable-diffusion-v1-4", ... adapter=adapter, ... torch_dtype=torch.float16, ... ) >>> pipe.to("cuda") >>> out_image = pipe( ... "At night, glowing cubes in front of the beach", ... image=color_palette, ... ).images[0] ``` """ def _preprocess_adapter_image(image, height, width): if isinstance(image, torch.Tensor): return image elif isinstance(image, PIL.Image.Image): image = [image] if isinstance(image[0], PIL.Image.Image): image = [np.array(i.resize((width, height), resample=PIL_INTERPOLATION["lanczos"])) for i in image] image = [ i[None, ..., None] if i.ndim == 2 else i[None, ...] 
for i in image ] # expand [h, w] or [h, w, c] to [b, h, w, c] image = np.concatenate(image, axis=0) image = np.array(image).astype(np.float32) / 255.0 image = image.transpose(0, 3, 1, 2) image = torch.from_numpy(image) elif isinstance(image[0], torch.Tensor): if image[0].ndim == 3: image = torch.stack(image, dim=0) elif image[0].ndim == 4: image = torch.cat(image, dim=0) else: raise ValueError( f"Invalid image tensor! Expecting image tensor with 3 or 4 dimension, but recive: {image[0].ndim}" ) return image # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps def retrieve_timesteps( scheduler, num_inference_steps: Optional[int] = None, device: Optional[Union[str, torch.device]] = None, timesteps: Optional[List[int]] = None, sigmas: Optional[List[float]] = None, **kwargs, ): """ Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`. Args: scheduler (`SchedulerMixin`): The scheduler to get timesteps from. num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps` must be `None`. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. timesteps (`List[int]`, *optional*): Custom timesteps used to override the timestep spacing strategy of the scheduler. If `timesteps` is passed, `num_inference_steps` and `sigmas` must be `None`. sigmas (`List[float]`, *optional*): Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed, `num_inference_steps` and `timesteps` must be `None`. Returns: `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the second element is the number of inference steps. """ if timesteps is not None and sigmas is not None: raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values") if timesteps is not None: accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accepts_timesteps: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" timestep schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) elif sigmas is not None: accept_sigmas = "sigmas" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accept_sigmas: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" sigmas schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) else: scheduler.set_timesteps(num_inference_steps, device=device, **kwargs) timesteps = scheduler.timesteps return timesteps, num_inference_steps class StableDiffusionAdapterPipeline(DiffusionPipeline, StableDiffusionMixin): r""" Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter https://arxiv.org/abs/2302.08453 This model inherits from [`DiffusionPipeline`]. 
Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: adapter ([`T2IAdapter`] or [`MultiAdapter`] or `List[T2IAdapter]`): Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (`List[float]`, *optional*, defaults to None): List of floats representing the weight which will be multiply to each adapter's output before adding them together. vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. feature_extractor ([`CLIPFeatureExtractor`]): Model that extracts features from generated images to be used as inputs for the `safety_checker`. """ model_cpu_offload_seq = "text_encoder->adapter->unet->vae" _optional_components = ["safety_checker", "feature_extractor"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, adapter: Union[T2IAdapter, MultiAdapter, List[T2IAdapter]], scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPFeatureExtractor, requires_safety_checker: bool = True, ): super().__init__() if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
) if isinstance(adapter, (list, tuple)): adapter = MultiAdapter(adapter) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, adapter=adapter, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) self.register_to_config(requires_safety_checker=requires_safety_checker) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt def _encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, **kwargs, ): deprecation_message = "`_encode_prompt()` is deprecated and it will be removed in a future version. Use `encode_prompt()` instead. Also, be aware that the output format changed from a concatenated tensor to a tuple." deprecate("_encode_prompt()", "1.0.0", deprecation_message, standard_warn=False) prompt_embeds_tuple = self.encode_prompt( prompt=prompt, device=device, num_images_per_prompt=num_images_per_prompt, do_classifier_free_guidance=do_classifier_free_guidance, negative_prompt=negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=lora_scale, **kwargs, ) # concatenate for backwards comp prompt_embeds = torch.cat([prompt_embeds_tuple[1], prompt_embeds_tuple[0]]) return prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_prompt def encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, clip_skip: Optional[int] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. 
""" # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, LoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) else: scale_lora_layers(self.text_encoder, lora_scale) if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, self.tokenizer) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = self.tokenizer.batch_decode( untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] ) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None if clip_skip is None: prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask) prompt_embeds = prompt_embeds[0] else: prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, output_hidden_states=True ) # Access the `hidden_states` first, that contains a tuple of # all the hidden states from the encoder layers. Then index into # the tuple to access the hidden states from the desired layer. prompt_embeds = prompt_embeds[-1][-(clip_skip + 1)] # We also need to apply the final LayerNorm here to not mess with the # representations. The `last_hidden_states` that we typically use for # obtaining the final prompt representations passes through the LayerNorm # layer. prompt_embeds = self.text_encoder.text_model.final_layer_norm(prompt_embeds) if self.text_encoder is not None: prompt_embeds_dtype = self.text_encoder.dtype elif self.unet is not None: prompt_embeds_dtype = self.unet.dtype else: prompt_embeds_dtype = prompt_embeds.dtype prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance and negative_prompt_embeds is None: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." 
) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) max_length = prompt_embeds.shape[1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) if isinstance(self, LoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder, lora_scale) return prompt_embeds, negative_prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker def run_safety_checker(self, image, device, dtype): if self.safety_checker is None: has_nsfw_concept = None else: if torch.is_tensor(image): feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") else: feature_extractor_input = self.image_processor.numpy_to_pil(image) safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(dtype) ) return image, has_nsfw_concept # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents def decode_latents(self, latents): deprecation_message = "The decode_latents method is deprecated and will be removed in 1.0.0. Please use VaeImageProcessor.postprocess(...) instead" deprecate("decode_latents", "1.0.0", deprecation_message, standard_warn=False) latents = 1 / self.vae.config.scaling_factor * latents image = self.vae.decode(latents, return_dict=False)[0] image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() return image # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
# eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, prompt, height, width, callback_steps, image, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if (callback_steps is None) or ( callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) ): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) if isinstance(self.adapter, MultiAdapter): if not isinstance(image, list): raise ValueError( "MultiAdapter is enabled, but `image` is not a list. Please pass a list of images to `image`." ) if len(image) != len(self.adapter.adapters): raise ValueError( f"MultiAdapter requires passing the same number of images as adapters. Given {len(image)} images and {len(self.adapter.adapters)} adapters." ) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents def _default_height_width(self, height, width, image): # NOTE: It is possible that a list of images have different # dimensions for each image, so just checking the first image # is not _exactly_ correct, but it is simple. while isinstance(image, list): image = image[0] if height is None: if isinstance(image, PIL.Image.Image): height = image.height elif isinstance(image, torch.Tensor): height = image.shape[-2] # round down to nearest multiple of `self.adapter.downscale_factor` height = (height // self.adapter.downscale_factor) * self.adapter.downscale_factor if width is None: if isinstance(image, PIL.Image.Image): width = image.width elif isinstance(image, torch.Tensor): width = image.shape[-1] # round down to nearest multiple of `self.adapter.downscale_factor` width = (width // self.adapter.downscale_factor) * self.adapter.downscale_factor return height, width # Copied from diffusers.pipelines.latent_consistency_models.pipeline_latent_consistency_text2img.LatentConsistencyModelPipeline.get_guidance_scale_embedding def get_guidance_scale_embedding( self, w: torch.Tensor, embedding_dim: int = 512, dtype: torch.dtype = torch.float32 ) -> torch.Tensor: """ See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 Args: w (`torch.Tensor`): Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings. embedding_dim (`int`, *optional*, defaults to 512): Dimension of the embeddings to generate. dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): Data type of the generated embeddings. Returns: `torch.Tensor`: Embedding vectors with shape `(len(w), embedding_dim)`. """ assert len(w.shape) == 1 w = w * 1000.0 half_dim = embedding_dim // 2 emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1) emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb) emb = w.to(dtype)[:, None] * emb[None, :] emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) if embedding_dim % 2 == 1: # zero pad emb = torch.nn.functional.pad(emb, (0, 1)) assert emb.shape == (w.shape[0], embedding_dim) return emb @property def guidance_scale(self): return self._guidance_scale # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. 
@property def do_classifier_free_guidance(self): return self._guidance_scale > 1 and self.unet.config.time_cond_proj_dim is None @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]] = None, image: Union[torch.Tensor, PIL.Image.Image, List[PIL.Image.Image]] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, timesteps: List[int] = None, sigmas: List[float] = None, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, adapter_conditioning_scale: Union[float, List[float]] = 1.0, clip_skip: Optional[int] = None, ): r""" Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. image (`torch.Tensor`, `PIL.Image.Image`, `List[torch.Tensor]` or `List[PIL.Image.Image]` or `List[List[PIL.Image.Image]]`): The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the type is specified as `torch.Tensor`, it is passed to Adapter as is. PIL.Image.Image` can also be accepted as an image. The control image is automatically resized to fit the output image. height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The height in pixels of the generated image. width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. timesteps (`List[int]`, *optional*): Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed will be used. Must be in descending order. sigmas (`List[float]`, *optional*): Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed will be used. guidance_scale (`float`, *optional*, defaults to 7.5): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. 
Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to [`schedulers.DDIMScheduler`], will be ignored for others. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function will be called. If not specified, the callback will be called at every step. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttnProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). adapter_conditioning_scale (`float` or `List[float]`, *optional*, defaults to 1.0): The outputs of the adapter are multiplied by `adapter_conditioning_scale` before they are added to the residual in the original unet. If multiple adapters are specified in init, you can set the corresponding scale as a list. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. Examples: Returns: [`~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput`] or `tuple`: [`~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the `safety_checker`. """ # 0. 
Default height and width to unet height, width = self._default_height_width(height, width, image) device = self._execution_device # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, height, width, callback_steps, image, negative_prompt, prompt_embeds, negative_prompt_embeds ) self._guidance_scale = guidance_scale if isinstance(self.adapter, MultiAdapter): adapter_input = [] for one_image in image: one_image = _preprocess_adapter_image(one_image, height, width) one_image = one_image.to(device=device, dtype=self.adapter.dtype) adapter_input.append(one_image) else: adapter_input = _preprocess_adapter_image(image, height, width) adapter_input = adapter_input.to(device=device, dtype=self.adapter.dtype) # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] # 3. Encode input prompt prompt_embeds, negative_prompt_embeds = self.encode_prompt( prompt, device, num_images_per_prompt, self.do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, clip_skip=clip_skip, ) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes if self.do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) # 4. Prepare timesteps timesteps, num_inference_steps = retrieve_timesteps( self.scheduler, num_inference_steps, device, timesteps, sigmas ) # 5. Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, prompt_embeds.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 6.5 Optionally get Guidance Scale Embedding timestep_cond = None if self.unet.config.time_cond_proj_dim is not None: guidance_scale_tensor = torch.tensor(self.guidance_scale - 1).repeat(batch_size * num_images_per_prompt) timestep_cond = self.get_guidance_scale_embedding( guidance_scale_tensor, embedding_dim=self.unet.config.time_cond_proj_dim ).to(device=device, dtype=latents.dtype) # 7. 
Denoising loop if isinstance(self.adapter, MultiAdapter): adapter_state = self.adapter(adapter_input, adapter_conditioning_scale) for k, v in enumerate(adapter_state): adapter_state[k] = v else: adapter_state = self.adapter(adapter_input) for k, v in enumerate(adapter_state): adapter_state[k] = v * adapter_conditioning_scale if num_images_per_prompt > 1: for k, v in enumerate(adapter_state): adapter_state[k] = v.repeat(num_images_per_prompt, 1, 1, 1) if self.do_classifier_free_guidance: for k, v in enumerate(adapter_state): adapter_state[k] = torch.cat([v] * 2, dim=0) num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, timestep_cond=timestep_cond, cross_attention_kwargs=cross_attention_kwargs, down_intrablock_additional_residuals=[state.clone() for state in adapter_state], return_dict=False, )[0] # perform guidance if self.do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if output_type == "latent": image = latents has_nsfw_concept = None elif output_type == "pil": # 8. Post-processing image = self.decode_latents(latents) # 9. Run safety checker image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) # 10. Convert to PIL image = self.numpy_to_pil(image) else: # 8. Post-processing image = self.decode_latents(latents) # 9. Run safety checker image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) # Offload all models self.maybe_free_model_hooks() if not return_dict: return (image, has_nsfw_concept) return StableDiffusionAdapterPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
diffusers/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py", "repo_id": "diffusers", "token_count": 20434 }
237
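The docstring example above covers a single adapter; the sketch below shows the `MultiAdapter` path that `check_inputs` validates, where the number of conditioning images must match the number of adapters and `adapter_conditioning_scale` can be given per adapter. The color checkpoint name is taken from the example docstring above; the depth checkpoint and the scale values are illustrative assumptions rather than tuned settings.

```python
import PIL.Image
import torch

from diffusers import MultiAdapter, StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Two adapters combined: the color checkpoint comes from the docstring above,
# the depth checkpoint is assumed to exist on the Hub.
adapter = MultiAdapter(
    [
        T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16),
        T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1", torch_dtype=torch.float16),
    ]
)

pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# One conditioning image per adapter, in the same order as the adapters above.
color_image = load_image(
    "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png"
)
depth_image = PIL.Image.new("RGB", (512, 512))  # placeholder: use a real depth map in practice

image = pipe(
    "At night, glowing cubes in front of the beach",
    image=[color_image, depth_image],
    adapter_conditioning_scale=[0.8, 0.8],  # one scale per adapter
).images[0]
```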
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import List, Optional, Tuple, Union import numpy as np import torch from ..configuration_utils import ConfigMixin, register_to_config from ..utils import BaseOutput, logging from ..utils.torch_utils import randn_tensor from .scheduling_utils import SchedulerMixin logger = logging.get_logger(__name__) # pylint: disable=invalid-name @dataclass class CMStochasticIterativeSchedulerOutput(BaseOutput): """ Output class for the scheduler's `step` function. Args: prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the denoising loop. """ prev_sample: torch.Tensor class CMStochasticIterativeScheduler(SchedulerMixin, ConfigMixin): """ Multistep and onestep sampling for consistency models. This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving. Args: num_train_timesteps (`int`, defaults to 40): The number of diffusion steps to train the model. sigma_min (`float`, defaults to 0.002): Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation. sigma_max (`float`, defaults to 80.0): Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (`float`, defaults to 0.5): The standard deviation of the data distribution from the EDM [paper](https://huggingface.co/papers/2206.00364). Defaults to 0.5 from the original implementation. s_noise (`float`, defaults to 1.0): The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, 1.011]. Defaults to 1.0 from the original implementation. rho (`float`, defaults to 7.0): The parameter for calculating the Karras sigma schedule from the EDM [paper](https://huggingface.co/papers/2206.00364). Defaults to 7.0 from the original implementation. clip_denoised (`bool`, defaults to `True`): Whether to clip the denoised outputs to `(-1, 1)`. timesteps (`List` or `np.ndarray` or `torch.Tensor`, *optional*): An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in increasing order. 
""" order = 1 @register_to_config def __init__( self, num_train_timesteps: int = 40, sigma_min: float = 0.002, sigma_max: float = 80.0, sigma_data: float = 0.5, s_noise: float = 1.0, rho: float = 7.0, clip_denoised: bool = True, ): # standard deviation of the initial noise distribution self.init_noise_sigma = sigma_max ramp = np.linspace(0, 1, num_train_timesteps) sigmas = self._convert_to_karras(ramp) timesteps = self.sigma_to_t(sigmas) # setable values self.num_inference_steps = None self.sigmas = torch.from_numpy(sigmas) self.timesteps = torch.from_numpy(timesteps) self.custom_timesteps = False self.is_scale_input_called = False self._step_index = None self._begin_index = None self.sigmas = self.sigmas.to("cpu") # to avoid too much CPU/GPU communication @property def step_index(self): """ The index counter for current timestep. It will increase 1 after each scheduler step. """ return self._step_index @property def begin_index(self): """ The index for the first timestep. It should be set from pipeline with `set_begin_index` method. """ return self._begin_index # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.set_begin_index def set_begin_index(self, begin_index: int = 0): """ Sets the begin index for the scheduler. This function should be run from pipeline before the inference. Args: begin_index (`int`): The begin index for the scheduler. """ self._begin_index = begin_index def scale_model_input(self, sample: torch.Tensor, timestep: Union[float, torch.Tensor]) -> torch.Tensor: """ Scales the consistency model input by `(sigma**2 + sigma_data**2) ** 0.5`. Args: sample (`torch.Tensor`): The input sample. timestep (`float` or `torch.Tensor`): The current timestep in the diffusion chain. Returns: `torch.Tensor`: A scaled input sample. """ # Get sigma corresponding to timestep if self.step_index is None: self._init_step_index(timestep) sigma = self.sigmas[self.step_index] sample = sample / ((sigma**2 + self.config.sigma_data**2) ** 0.5) self.is_scale_input_called = True return sample def sigma_to_t(self, sigmas: Union[float, np.ndarray]): """ Gets scaled timesteps from the Karras sigmas for input to the consistency model. Args: sigmas (`float` or `np.ndarray`): A single Karras sigma or an array of Karras sigmas. Returns: `float` or `np.ndarray`: A scaled input timestep or scaled input timestep array. """ if not isinstance(sigmas, np.ndarray): sigmas = np.array(sigmas, dtype=np.float64) timesteps = 1000 * 0.25 * np.log(sigmas + 1e-44) return timesteps def set_timesteps( self, num_inference_steps: Optional[int] = None, device: Union[str, torch.device] = None, timesteps: Optional[List[int]] = None, ): """ Sets the timesteps used for the diffusion chain (to be run before inference). Args: num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. timesteps (`List[int]`, *optional*): Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default timestep spacing strategy of equal spacing between timesteps is used. If `timesteps` is passed, `num_inference_steps` must be `None`. 
""" if num_inference_steps is None and timesteps is None: raise ValueError("Exactly one of `num_inference_steps` or `timesteps` must be supplied.") if num_inference_steps is not None and timesteps is not None: raise ValueError("Can only pass one of `num_inference_steps` or `timesteps`.") # Follow DDPMScheduler custom timesteps logic if timesteps is not None: for i in range(1, len(timesteps)): if timesteps[i] >= timesteps[i - 1]: raise ValueError("`timesteps` must be in descending order.") if timesteps[0] >= self.config.num_train_timesteps: raise ValueError( f"`timesteps` must start before `self.config.train_timesteps`:" f" {self.config.num_train_timesteps}." ) timesteps = np.array(timesteps, dtype=np.int64) self.custom_timesteps = True else: if num_inference_steps > self.config.num_train_timesteps: raise ValueError( f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:" f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle" f" maximal {self.config.num_train_timesteps} timesteps." ) self.num_inference_steps = num_inference_steps step_ratio = self.config.num_train_timesteps // self.num_inference_steps timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64) self.custom_timesteps = False # Map timesteps to Karras sigmas directly for multistep sampling # See https://github.com/openai/consistency_models/blob/main/cm/karras_diffusion.py#L675 num_train_timesteps = self.config.num_train_timesteps ramp = timesteps[::-1].copy() ramp = ramp / (num_train_timesteps - 1) sigmas = self._convert_to_karras(ramp) timesteps = self.sigma_to_t(sigmas) sigmas = np.concatenate([sigmas, [self.config.sigma_min]]).astype(np.float32) self.sigmas = torch.from_numpy(sigmas).to(device=device) if str(device).startswith("mps"): # mps does not support float64 self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32) else: self.timesteps = torch.from_numpy(timesteps).to(device=device) self._step_index = None self._begin_index = None self.sigmas = self.sigmas.to("cpu") # to avoid too much CPU/GPU communication # Modified _convert_to_karras implementation that takes in ramp as argument def _convert_to_karras(self, ramp): """Constructs the noise schedule of Karras et al. (2022).""" sigma_min: float = self.config.sigma_min sigma_max: float = self.config.sigma_max rho = self.config.rho min_inv_rho = sigma_min ** (1 / rho) max_inv_rho = sigma_max ** (1 / rho) sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho return sigmas def get_scalings(self, sigma): sigma_data = self.config.sigma_data c_skip = sigma_data**2 / (sigma**2 + sigma_data**2) c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5 return c_skip, c_out def get_scalings_for_boundary_condition(self, sigma): """ Gets the scalings used in the consistency model parameterization (from Appendix C of the [paper](https://huggingface.co/papers/2303.01469)) to enforce boundary condition. <Tip> `epsilon` in the equations for `c_skip` and `c_out` is set to `sigma_min`. </Tip> Args: sigma (`torch.Tensor`): The current sigma in the Karras sigma schedule. Returns: `tuple`: A two-element tuple where `c_skip` (which weights the current sample) is the first element and `c_out` (which weights the consistency model output) is the second element. 
""" sigma_min = self.config.sigma_min sigma_data = self.config.sigma_data c_skip = sigma_data**2 / ((sigma - sigma_min) ** 2 + sigma_data**2) c_out = (sigma - sigma_min) * sigma_data / (sigma**2 + sigma_data**2) ** 0.5 return c_skip, c_out # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.index_for_timestep def index_for_timestep(self, timestep, schedule_timesteps=None): if schedule_timesteps is None: schedule_timesteps = self.timesteps indices = (schedule_timesteps == timestep).nonzero() # The sigma index that is taken for the **very** first `step` # is always the second index (or the last index if there is only 1) # This way we can ensure we don't accidentally skip a sigma in # case we start in the middle of the denoising schedule (e.g. for image-to-image) pos = 1 if len(indices) > 1 else 0 return indices[pos].item() # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._init_step_index def _init_step_index(self, timestep): if self.begin_index is None: if isinstance(timestep, torch.Tensor): timestep = timestep.to(self.timesteps.device) self._step_index = self.index_for_timestep(timestep) else: self._step_index = self._begin_index def step( self, model_output: torch.Tensor, timestep: Union[float, torch.Tensor], sample: torch.Tensor, generator: Optional[torch.Generator] = None, return_dict: bool = True, ) -> Union[CMStochasticIterativeSchedulerOutput, Tuple]: """ Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise). Args: model_output (`torch.Tensor`): The direct output from the learned diffusion model. timestep (`float`): The current timestep in the diffusion chain. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. generator (`torch.Generator`, *optional*): A random number generator. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput`] or `tuple`. Returns: [`~schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput`] or `tuple`: If return_dict is `True`, [`~schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput`] is returned, otherwise a tuple is returned where the first element is the sample tensor. """ if ( isinstance(timestep, int) or isinstance(timestep, torch.IntTensor) or isinstance(timestep, torch.LongTensor) ): raise ValueError( ( "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to" f" `{self.__class__}.step()` is not supported. Make sure to pass" " one of the `scheduler.timesteps` as a timestep." ), ) if not self.is_scale_input_called: logger.warning( "The `scale_model_input` function should be called before `step` to ensure correct denoising. " "See `StableDiffusionPipeline` for a usage example." ) sigma_min = self.config.sigma_min sigma_max = self.config.sigma_max if self.step_index is None: self._init_step_index(timestep) # sigma_next corresponds to next_t in original implementation sigma = self.sigmas[self.step_index] if self.step_index + 1 < self.config.num_train_timesteps: sigma_next = self.sigmas[self.step_index + 1] else: # Set sigma_next to sigma_min sigma_next = self.sigmas[-1] # Get scalings for boundary conditions c_skip, c_out = self.get_scalings_for_boundary_condition(sigma) # 1. 
Denoise model output using boundary conditions denoised = c_out * model_output + c_skip * sample if self.config.clip_denoised: denoised = denoised.clamp(-1, 1) # 2. Sample z ~ N(0, s_noise^2 * I) # Noise is not used for onestep sampling. if len(self.timesteps) > 1: noise = randn_tensor( model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator ) else: noise = torch.zeros_like(model_output) z = noise * self.config.s_noise sigma_hat = sigma_next.clamp(min=sigma_min, max=sigma_max) # 3. Return noisy sample # tau = sigma_hat, eps = sigma_min prev_sample = denoised + z * (sigma_hat**2 - sigma_min**2) ** 0.5 # upon completion increase step index by one self._step_index += 1 if not return_dict: return (prev_sample,) return CMStochasticIterativeSchedulerOutput(prev_sample=prev_sample) # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.add_noise def add_noise( self, original_samples: torch.Tensor, noise: torch.Tensor, timesteps: torch.Tensor, ) -> torch.Tensor: # Make sure sigmas and timesteps have the same device and dtype as original_samples sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): # mps does not support float64 schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) timesteps = timesteps.to(original_samples.device, dtype=torch.float32) else: schedule_timesteps = self.timesteps.to(original_samples.device) timesteps = timesteps.to(original_samples.device) # self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index if self.begin_index is None: step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timesteps] elif self.step_index is not None: # add_noise is called after first denoising step (for inpainting) step_indices = [self.step_index] * timesteps.shape[0] else: # add noise is called before first denoising step to create initial latent(img2img) step_indices = [self.begin_index] * timesteps.shape[0] sigma = sigmas[step_indices].flatten() while len(sigma.shape) < len(original_samples.shape): sigma = sigma.unsqueeze(-1) noisy_samples = original_samples + noise * sigma return noisy_samples def __len__(self): return self.config.num_train_timesteps
diffusers/src/diffusers/schedulers/scheduling_consistency_models.py/0
{ "file_path": "diffusers/src/diffusers/schedulers/scheduling_consistency_models.py", "repo_id": "diffusers", "token_count": 8187 }
238
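To make the `scale_model_input` / `step` contract above concrete, here is a minimal multistep sampling loop with a stand-in denoiser. A real use would call a consistency-model UNet (as `ConsistencyModelPipeline` does), so the toy model and image size here are purely illustrative.

```python
import torch

from diffusers import CMStochasticIterativeScheduler

scheduler = CMStochasticIterativeScheduler()
scheduler.set_timesteps(num_inference_steps=2)  # multistep consistency sampling


# Stand-in for a consistency-model UNet: any callable mapping (sample, timestep) -> prediction.
def dummy_model(sample: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    return torch.zeros_like(sample)


generator = torch.manual_seed(0)
sample = torch.randn(1, 3, 32, 32, generator=generator) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    # Scale by 1 / sqrt(sigma^2 + sigma_data^2) before calling the model (see scale_model_input above).
    model_input = scheduler.scale_model_input(sample, t)
    model_output = dummy_model(model_input, t)
    # step() applies the boundary-condition scalings and adds the stochastic noise term.
    sample = scheduler.step(model_output, t, sample, generator=generator).prev_sample

print(sample.shape)  # torch.Size([1, 3, 32, 32])
```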
# Copyright 2024 Katherine Crowson and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from dataclasses import dataclass from typing import Optional, Tuple, Union import numpy as np import torch from ..configuration_utils import ConfigMixin, register_to_config from ..utils import BaseOutput, logging from ..utils.torch_utils import randn_tensor from .scheduling_utils import SchedulerMixin logger = logging.get_logger(__name__) # pylint: disable=invalid-name @dataclass # Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete class EDMEulerSchedulerOutput(BaseOutput): """ Output class for the scheduler's `step` function output. Args: prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the denoising loop. pred_original_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): The predicted denoised sample `(x_{0})` based on the model output from the current timestep. `pred_original_sample` can be used to preview progress or for guidance. """ prev_sample: torch.Tensor pred_original_sample: Optional[torch.Tensor] = None class EDMEulerScheduler(SchedulerMixin, ConfigMixin): """ Implements the Euler scheduler in EDM formulation as presented in Karras et al. 2022 [1]. [1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models." https://arxiv.org/abs/2206.00364 This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving. Args: sigma_min (`float`, *optional*, defaults to 0.002): Minimum noise magnitude in the sigma schedule. This was set to 0.002 in the EDM paper [1]; a reasonable range is [0, 10]. sigma_max (`float`, *optional*, defaults to 80.0): Maximum noise magnitude in the sigma schedule. This was set to 80.0 in the EDM paper [1]; a reasonable range is [0.2, 80.0]. sigma_data (`float`, *optional*, defaults to 0.5): The standard deviation of the data distribution. This is set to 0.5 in the EDM paper [1]. sigma_schedule (`str`, *optional*, defaults to `karras`): Sigma schedule to compute the `sigmas`. By default, we the schedule introduced in the EDM paper (https://arxiv.org/abs/2206.00364). Other acceptable value is "exponential". The exponential schedule was incorporated in this model: https://huggingface.co/stabilityai/cosxl. num_train_timesteps (`int`, defaults to 1000): The number of diffusion steps to train the model. prediction_type (`str`, defaults to `epsilon`, *optional*): Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process), `sample` (directly predicts the noisy sample`) or `v_prediction` (see section 2.4 of [Imagen Video](https://imagen.research.google/video/paper.pdf) paper). 
rho (`float`, *optional*, defaults to 7.0): The rho parameter used for calculating the Karras sigma schedule, which is set to 7.0 in the EDM paper [1]. """ _compatibles = [] order = 1 @register_to_config def __init__( self, sigma_min: float = 0.002, sigma_max: float = 80.0, sigma_data: float = 0.5, sigma_schedule: str = "karras", num_train_timesteps: int = 1000, prediction_type: str = "epsilon", rho: float = 7.0, ): if sigma_schedule not in ["karras", "exponential"]: raise ValueError(f"Wrong value for provided for `{sigma_schedule=}`.`") # setable values self.num_inference_steps = None ramp = torch.linspace(0, 1, num_train_timesteps) if sigma_schedule == "karras": sigmas = self._compute_karras_sigmas(ramp) elif sigma_schedule == "exponential": sigmas = self._compute_exponential_sigmas(ramp) self.timesteps = self.precondition_noise(sigmas) self.sigmas = torch.cat([sigmas, torch.zeros(1, device=sigmas.device)]) self.is_scale_input_called = False self._step_index = None self._begin_index = None self.sigmas = self.sigmas.to("cpu") # to avoid too much CPU/GPU communication @property def init_noise_sigma(self): # standard deviation of the initial noise distribution return (self.config.sigma_max**2 + 1) ** 0.5 @property def step_index(self): """ The index counter for current timestep. It will increase 1 after each scheduler step. """ return self._step_index @property def begin_index(self): """ The index for the first timestep. It should be set from pipeline with `set_begin_index` method. """ return self._begin_index # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.set_begin_index def set_begin_index(self, begin_index: int = 0): """ Sets the begin index for the scheduler. This function should be run from pipeline before the inference. Args: begin_index (`int`): The begin index for the scheduler. """ self._begin_index = begin_index def precondition_inputs(self, sample, sigma): c_in = 1 / ((sigma**2 + self.config.sigma_data**2) ** 0.5) scaled_sample = sample * c_in return scaled_sample def precondition_noise(self, sigma): if not isinstance(sigma, torch.Tensor): sigma = torch.tensor([sigma]) c_noise = 0.25 * torch.log(sigma) return c_noise def precondition_outputs(self, sample, model_output, sigma): sigma_data = self.config.sigma_data c_skip = sigma_data**2 / (sigma**2 + sigma_data**2) if self.config.prediction_type == "epsilon": c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5 elif self.config.prediction_type == "v_prediction": c_out = -sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5 else: raise ValueError(f"Prediction type {self.config.prediction_type} is not supported.") denoised = c_skip * sample + c_out * model_output return denoised def scale_model_input(self, sample: torch.Tensor, timestep: Union[float, torch.Tensor]) -> torch.Tensor: """ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm. Args: sample (`torch.Tensor`): The input sample. timestep (`int`, *optional*): The current timestep in the diffusion chain. Returns: `torch.Tensor`: A scaled input sample. 
""" if self.step_index is None: self._init_step_index(timestep) sigma = self.sigmas[self.step_index] sample = self.precondition_inputs(sample, sigma) self.is_scale_input_called = True return sample def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): """ Sets the discrete timesteps used for the diffusion chain (to be run before inference). Args: num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. """ self.num_inference_steps = num_inference_steps ramp = np.linspace(0, 1, self.num_inference_steps) if self.config.sigma_schedule == "karras": sigmas = self._compute_karras_sigmas(ramp) elif self.config.sigma_schedule == "exponential": sigmas = self._compute_exponential_sigmas(ramp) sigmas = torch.from_numpy(sigmas).to(dtype=torch.float32, device=device) self.timesteps = self.precondition_noise(sigmas) self.sigmas = torch.cat([sigmas, torch.zeros(1, device=sigmas.device)]) self._step_index = None self._begin_index = None self.sigmas = self.sigmas.to("cpu") # to avoid too much CPU/GPU communication # Taken from https://github.com/crowsonkb/k-diffusion/blob/686dbad0f39640ea25c8a8c6a6e56bb40eacefa2/k_diffusion/sampling.py#L17 def _compute_karras_sigmas(self, ramp, sigma_min=None, sigma_max=None) -> torch.Tensor: """Constructs the noise schedule of Karras et al. (2022).""" sigma_min = sigma_min or self.config.sigma_min sigma_max = sigma_max or self.config.sigma_max rho = self.config.rho min_inv_rho = sigma_min ** (1 / rho) max_inv_rho = sigma_max ** (1 / rho) sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho return sigmas def _compute_exponential_sigmas(self, ramp, sigma_min=None, sigma_max=None) -> torch.Tensor: """Implementation closely follows k-diffusion. https://github.com/crowsonkb/k-diffusion/blob/6ab5146d4a5ef63901326489f31f1d8e7dd36b48/k_diffusion/sampling.py#L26 """ sigma_min = sigma_min or self.config.sigma_min sigma_max = sigma_max or self.config.sigma_max sigmas = torch.linspace(math.log(sigma_min), math.log(sigma_max), len(ramp)).exp().flip(0) return sigmas # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.index_for_timestep def index_for_timestep(self, timestep, schedule_timesteps=None): if schedule_timesteps is None: schedule_timesteps = self.timesteps indices = (schedule_timesteps == timestep).nonzero() # The sigma index that is taken for the **very** first `step` # is always the second index (or the last index if there is only 1) # This way we can ensure we don't accidentally skip a sigma in # case we start in the middle of the denoising schedule (e.g. 
for image-to-image) pos = 1 if len(indices) > 1 else 0 return indices[pos].item() # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._init_step_index def _init_step_index(self, timestep): if self.begin_index is None: if isinstance(timestep, torch.Tensor): timestep = timestep.to(self.timesteps.device) self._step_index = self.index_for_timestep(timestep) else: self._step_index = self._begin_index def step( self, model_output: torch.Tensor, timestep: Union[float, torch.Tensor], sample: torch.Tensor, s_churn: float = 0.0, s_tmin: float = 0.0, s_tmax: float = float("inf"), s_noise: float = 1.0, generator: Optional[torch.Generator] = None, return_dict: bool = True, ) -> Union[EDMEulerSchedulerOutput, Tuple]: """ Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise). Args: model_output (`torch.Tensor`): The direct output from learned diffusion model. timestep (`float`): The current discrete timestep in the diffusion chain. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. s_churn (`float`): s_tmin (`float`): s_tmax (`float`): s_noise (`float`, defaults to 1.0): Scaling factor for noise added to the sample. generator (`torch.Generator`, *optional*): A random number generator. return_dict (`bool`): Whether or not to return a [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] or tuple. Returns: [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] or `tuple`: If return_dict is `True`, [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] is returned, otherwise a tuple is returned where the first element is the sample tensor. """ if ( isinstance(timestep, int) or isinstance(timestep, torch.IntTensor) or isinstance(timestep, torch.LongTensor) ): raise ValueError( ( "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to" " `EDMEulerScheduler.step()` is not supported. Make sure to pass" " one of the `scheduler.timesteps` as a timestep." ), ) if not self.is_scale_input_called: logger.warning( "The `scale_model_input` function should be called before `step` to ensure correct denoising. " "See `StableDiffusionPipeline` for a usage example." ) if self.step_index is None: self._init_step_index(timestep) # Upcast to avoid precision issues when computing prev_sample sample = sample.to(torch.float32) sigma = self.sigmas[self.step_index] gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0 noise = randn_tensor( model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator ) eps = noise * s_noise sigma_hat = sigma * (gamma + 1) if gamma > 0: sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5 # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise pred_original_sample = self.precondition_outputs(sample, model_output, sigma_hat) # 2. 
Convert to an ODE derivative derivative = (sample - pred_original_sample) / sigma_hat dt = self.sigmas[self.step_index + 1] - sigma_hat prev_sample = sample + derivative * dt # Cast sample back to model compatible dtype prev_sample = prev_sample.to(model_output.dtype) # upon completion increase step index by one self._step_index += 1 if not return_dict: return (prev_sample,) return EDMEulerSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample) # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.add_noise def add_noise( self, original_samples: torch.Tensor, noise: torch.Tensor, timesteps: torch.Tensor, ) -> torch.Tensor: # Make sure sigmas and timesteps have the same device and dtype as original_samples sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): # mps does not support float64 schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) timesteps = timesteps.to(original_samples.device, dtype=torch.float32) else: schedule_timesteps = self.timesteps.to(original_samples.device) timesteps = timesteps.to(original_samples.device) # self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index if self.begin_index is None: step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timesteps] elif self.step_index is not None: # add_noise is called after first denoising step (for inpainting) step_indices = [self.step_index] * timesteps.shape[0] else: # add noise is called before first denoising step to create initial latent(img2img) step_indices = [self.begin_index] * timesteps.shape[0] sigma = sigmas[step_indices].flatten() while len(sigma.shape) < len(original_samples.shape): sigma = sigma.unsqueeze(-1) noisy_samples = original_samples + noise * sigma return noisy_samples def __len__(self): return self.config.num_train_timesteps
diffusers/src/diffusers/schedulers/scheduling_edm_euler.py/0
{ "file_path": "diffusers/src/diffusers/schedulers/scheduling_edm_euler.py", "repo_id": "diffusers", "token_count": 7360 }
239
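The `EDMEulerScheduler` above is normally driven by a pipeline, but the call order it expects (`set_timesteps` → `scale_model_input` → `step`) can be shown with a bare denoising loop. The sketch below is a minimal illustration under stated assumptions: the `fake_denoiser` stand-in and the 1×4×32×32 latent shape are placeholders for a real denoising network and are not part of the scheduler file itself.

```python
import torch

from diffusers.schedulers.scheduling_edm_euler import EDMEulerScheduler

scheduler = EDMEulerScheduler(sigma_min=0.002, sigma_max=80.0, sigma_data=0.5)
scheduler.set_timesteps(num_inference_steps=10)


def fake_denoiser(latents: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
    # Placeholder for a real denoising network such as a UNet.
    return torch.zeros_like(latents)


generator = torch.manual_seed(0)
# Scale the starting noise by the initial sigma, mirroring what pipelines do with `init_noise_sigma`.
latents = torch.randn(1, 4, 32, 32, generator=generator) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(latents, t)  # applies the c_in preconditioning
    model_output = fake_denoiser(model_input, t)            # network prediction for this sigma
    latents = scheduler.step(model_output, t, latents, generator=generator).prev_sample
```

Note that `scale_model_input` must be called before `step` on every iteration; skipping it triggers the warning emitted in `step` and produces incorrectly scaled inputs.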
# Copyright 2024 Google Brain and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch import math from dataclasses import dataclass from typing import Optional, Tuple, Union import torch from ..configuration_utils import ConfigMixin, register_to_config from ..utils import BaseOutput from ..utils.torch_utils import randn_tensor from .scheduling_utils import SchedulerMixin, SchedulerOutput @dataclass class SdeVeOutput(BaseOutput): """ Output class for the scheduler's `step` function output. Args: prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the denoising loop. prev_sample_mean (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): Mean averaged `prev_sample` over previous timesteps. """ prev_sample: torch.Tensor prev_sample_mean: torch.Tensor class ScoreSdeVeScheduler(SchedulerMixin, ConfigMixin): """ `ScoreSdeVeScheduler` is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving. Args: num_train_timesteps (`int`, defaults to 1000): The number of diffusion steps to train the model. snr (`float`, defaults to 0.15): A coefficient weighting the step from the `model_output` sample (from the network) to the random noise. sigma_min (`float`, defaults to 0.01): The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror the distribution of the data. sigma_max (`float`, defaults to 1348.0): The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (`float`, defaults to 1e-5): The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (`int`, defaults to 1): The number of correction steps performed on a produced sample. """ order = 1 @register_to_config def __init__( self, num_train_timesteps: int = 2000, snr: float = 0.15, sigma_min: float = 0.01, sigma_max: float = 1348.0, sampling_eps: float = 1e-5, correct_steps: int = 1, ): # standard deviation of the initial noise distribution self.init_noise_sigma = sigma_max # setable values self.timesteps = None self.set_sigmas(num_train_timesteps, sigma_min, sigma_max, sampling_eps) def scale_model_input(self, sample: torch.Tensor, timestep: Optional[int] = None) -> torch.Tensor: """ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep. Args: sample (`torch.Tensor`): The input sample. timestep (`int`, *optional*): The current timestep in the diffusion chain. Returns: `torch.Tensor`: A scaled input sample. 
""" return sample def set_timesteps( self, num_inference_steps: int, sampling_eps: float = None, device: Union[str, torch.device] = None ): """ Sets the continuous timesteps used for the diffusion chain (to be run before inference). Args: num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (`float`, *optional*): The final timestep value (overrides value given during scheduler instantiation). device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. """ sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps self.timesteps = torch.linspace(1, sampling_eps, num_inference_steps, device=device) def set_sigmas( self, num_inference_steps: int, sigma_min: float = None, sigma_max: float = None, sampling_eps: float = None ): """ Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight of the `drift` and `diffusion` components of the sample update. Args: num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (`float`, optional): The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (`float`, optional): The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (`float`, optional): The final timestep value (overrides value given during scheduler instantiation). """ sigma_min = sigma_min if sigma_min is not None else self.config.sigma_min sigma_max = sigma_max if sigma_max is not None else self.config.sigma_max sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps if self.timesteps is None: self.set_timesteps(num_inference_steps, sampling_eps) self.sigmas = sigma_min * (sigma_max / sigma_min) ** (self.timesteps / sampling_eps) self.discrete_sigmas = torch.exp(torch.linspace(math.log(sigma_min), math.log(sigma_max), num_inference_steps)) self.sigmas = torch.tensor([sigma_min * (sigma_max / sigma_min) ** t for t in self.timesteps]) def get_adjacent_sigma(self, timesteps, t): return torch.where( timesteps == 0, torch.zeros_like(t.to(timesteps.device)), self.discrete_sigmas[timesteps - 1].to(timesteps.device), ) def step_pred( self, model_output: torch.Tensor, timestep: int, sample: torch.Tensor, generator: Optional[torch.Generator] = None, return_dict: bool = True, ) -> Union[SdeVeOutput, Tuple]: """ Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise). Args: model_output (`torch.Tensor`): The direct output from learned diffusion model. timestep (`int`): The current discrete timestep in the diffusion chain. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. generator (`torch.Generator`, *optional*): A random number generator. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~schedulers.scheduling_sde_ve.SdeVeOutput`] or `tuple`. Returns: [`~schedulers.scheduling_sde_ve.SdeVeOutput`] or `tuple`: If return_dict is `True`, [`~schedulers.scheduling_sde_ve.SdeVeOutput`] is returned, otherwise a tuple is returned where the first element is the sample tensor. 
""" if self.timesteps is None: raise ValueError( "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler" ) timestep = timestep * torch.ones( sample.shape[0], device=sample.device ) # torch.repeat_interleave(timestep, sample.shape[0]) timesteps = (timestep * (len(self.timesteps) - 1)).long() # mps requires indices to be in the same device, so we use cpu as is the default with cuda timesteps = timesteps.to(self.discrete_sigmas.device) sigma = self.discrete_sigmas[timesteps].to(sample.device) adjacent_sigma = self.get_adjacent_sigma(timesteps, timestep).to(sample.device) drift = torch.zeros_like(sample) diffusion = (sigma**2 - adjacent_sigma**2) ** 0.5 # equation 6 in the paper: the model_output modeled by the network is grad_x log pt(x) # also equation 47 shows the analog from SDE models to ancestral sampling methods diffusion = diffusion.flatten() while len(diffusion.shape) < len(sample.shape): diffusion = diffusion.unsqueeze(-1) drift = drift - diffusion**2 * model_output # equation 6: sample noise for the diffusion term of noise = randn_tensor( sample.shape, layout=sample.layout, generator=generator, device=sample.device, dtype=sample.dtype ) prev_sample_mean = sample - drift # subtract because `dt` is a small negative timestep # TODO is the variable diffusion the correct scaling term for the noise? prev_sample = prev_sample_mean + diffusion * noise # add impact of diffusion field g if not return_dict: return (prev_sample, prev_sample_mean) return SdeVeOutput(prev_sample=prev_sample, prev_sample_mean=prev_sample_mean) def step_correct( self, model_output: torch.Tensor, sample: torch.Tensor, generator: Optional[torch.Generator] = None, return_dict: bool = True, ) -> Union[SchedulerOutput, Tuple]: """ Correct the predicted sample based on the `model_output` of the network. This is often run repeatedly after making the prediction for the previous timestep. Args: model_output (`torch.Tensor`): The direct output from learned diffusion model. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. generator (`torch.Generator`, *optional*): A random number generator. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~schedulers.scheduling_sde_ve.SdeVeOutput`] or `tuple`. Returns: [`~schedulers.scheduling_sde_ve.SdeVeOutput`] or `tuple`: If return_dict is `True`, [`~schedulers.scheduling_sde_ve.SdeVeOutput`] is returned, otherwise a tuple is returned where the first element is the sample tensor. """ if self.timesteps is None: raise ValueError( "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler" ) # For small batch sizes, the paper "suggest replacing norm(z) with sqrt(d), where d is the dim. 
of z" # sample noise for correction noise = randn_tensor(sample.shape, layout=sample.layout, generator=generator).to(sample.device) # compute step size from the model_output, the noise, and the snr grad_norm = torch.norm(model_output.reshape(model_output.shape[0], -1), dim=-1).mean() noise_norm = torch.norm(noise.reshape(noise.shape[0], -1), dim=-1).mean() step_size = (self.config.snr * noise_norm / grad_norm) ** 2 * 2 step_size = step_size * torch.ones(sample.shape[0]).to(sample.device) # self.repeat_scalar(step_size, sample.shape[0]) # compute corrected sample: model_output term and noise term step_size = step_size.flatten() while len(step_size.shape) < len(sample.shape): step_size = step_size.unsqueeze(-1) prev_sample_mean = sample + step_size * model_output prev_sample = prev_sample_mean + ((step_size * 2) ** 0.5) * noise if not return_dict: return (prev_sample,) return SchedulerOutput(prev_sample=prev_sample) def add_noise( self, original_samples: torch.Tensor, noise: torch.Tensor, timesteps: torch.Tensor, ) -> torch.Tensor: # Make sure sigmas and timesteps have the same device and dtype as original_samples timesteps = timesteps.to(original_samples.device) sigmas = self.discrete_sigmas.to(original_samples.device)[timesteps] noise = ( noise * sigmas[:, None, None, None] if noise is not None else torch.randn_like(original_samples) * sigmas[:, None, None, None] ) noisy_samples = noise + original_samples return noisy_samples def __len__(self): return self.config.num_train_timesteps
diffusers/src/diffusers/schedulers/scheduling_sde_ve.py/0
{ "file_path": "diffusers/src/diffusers/schedulers/scheduling_sde_ve.py", "repo_id": "diffusers", "token_count": 5379 }
240
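`ScoreSdeVeScheduler` implements predictor-corrector sampling, so a sampling loop alternates `step_correct` and `step_pred` after calling both `set_timesteps` and `set_sigmas`. The sketch below illustrates that ordering under assumptions: `toy_score_model` is a stand-in for a trained score network and the sample shape is arbitrary.

```python
import torch

from diffusers.schedulers.scheduling_sde_ve import ScoreSdeVeScheduler

scheduler = ScoreSdeVeScheduler(snr=0.15, sigma_min=0.01, sigma_max=1348.0)
num_inference_steps = 10
scheduler.set_timesteps(num_inference_steps)
scheduler.set_sigmas(num_inference_steps)


def toy_score_model(sample: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    # Placeholder score estimate (grad_x log p(x)); a real model would be a trained network.
    return -sample


generator = torch.manual_seed(0)
sample = torch.randn(1, 3, 32, 32, generator=generator) * scheduler.init_noise_sigma

for i, t in enumerate(scheduler.timesteps):
    sigma_t = scheduler.sigmas[i] * torch.ones(sample.shape[0])

    # corrector step(s): run `correct_steps` times per timestep
    for _ in range(scheduler.config.correct_steps):
        model_output = toy_score_model(sample, sigma_t)
        sample = scheduler.step_correct(model_output, sample, generator=generator).prev_sample

    # predictor step: reverse-SDE update toward the previous timestep
    model_output = toy_score_model(sample, sigma_t)
    output = scheduler.step_pred(model_output, t, sample, generator=generator)
    sample, sample_mean = output.prev_sample, output.prev_sample_mean
```

Passing `sigma_t` (rather than `t`) to the model follows the convention used by pipelines built on this scheduler; the final `sample_mean` is what such pipelines usually return as the denoised output.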
# This file is autogenerated by the command `make fix-copies`, do not edit.
from ..utils import DummyObject, requires_backends


class MidiProcessor(metaclass=DummyObject):
    _backends = ["note_seq"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["note_seq"])

    @classmethod
    def from_config(cls, *args, **kwargs):
        requires_backends(cls, ["note_seq"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["note_seq"])
diffusers/src/diffusers/utils/dummy_note_seq_objects.py/0
{ "file_path": "diffusers/src/diffusers/utils/dummy_note_seq_objects.py", "repo_id": "diffusers", "token_count": 201 }
241
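The dummy `MidiProcessor` above exists so that `note_seq` can remain an optional dependency: importing it always succeeds, and the missing backend is only reported when the object is actually used. A small sketch of that behaviour, assuming `note_seq` is not installed (with it installed, the real `MidiProcessor` is exported instead):

```python
from diffusers.utils import is_note_seq_available
from diffusers.utils.dummy_note_seq_objects import MidiProcessor

if not is_note_seq_available():
    try:
        MidiProcessor()  # `requires_backends` raises here, at use time rather than import time
    except ImportError as err:
        print(f"Optional backend missing: {err}")
```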
---
{{ card_data }}
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

{{ model_description }}

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
diffusers/src/diffusers/utils/model_card_template.md/0
{ "file_path": "diffusers/src/diffusers/utils/model_card_template.md", "repo_id": "diffusers", "token_count": 138 }
242
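The Jinja placeholders in the template above (`{{ card_data }}`, `{{ model_description }}`) are typically filled with `huggingface_hub`'s `ModelCard.from_template`. The snippet below is an illustrative sketch only: the metadata values, description text, and local template path are assumptions, not values taken from this repository.

```python
from huggingface_hub import ModelCard, ModelCardData

card_data = ModelCardData(license="apache-2.0", library_name="diffusers", tags=["diffusers"])
card = ModelCard.from_template(
    card_data,
    template_path="model_card_template.md",  # a local copy of the template shown above (assumed path)
    model_description="A diffusion pipeline trained with the scripts in this repository.",
)
print(card.content)
# card.push_to_hub("<your-username>/<your-model>")  # optionally upload the rendered card
```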
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import tempfile import unittest from itertools import product import numpy as np import torch from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer from diffusers import ( AutoencoderKL, DDIMScheduler, LCMScheduler, UNet2DConditionModel, ) from diffusers.utils.import_utils import is_peft_available from diffusers.utils.testing_utils import ( floats_tensor, require_peft_backend, require_peft_version_greater, skip_mps, torch_device, ) if is_peft_available(): from peft import LoraConfig from peft.tuners.tuners_utils import BaseTunerLayer from peft.utils import get_peft_model_state_dict def state_dicts_almost_equal(sd1, sd2): sd1 = dict(sorted(sd1.items())) sd2 = dict(sorted(sd2.items())) models_are_equal = True for ten1, ten2 in zip(sd1.values(), sd2.values()): if (ten1 - ten2).abs().max() > 1e-3: models_are_equal = False return models_are_equal def check_if_lora_correctly_set(model) -> bool: """ Checks if the LoRA layers are correctly set with peft """ for module in model.modules(): if isinstance(module, BaseTunerLayer): return True return False @require_peft_backend class PeftLoraLoaderMixinTests: pipeline_class = None scheduler_cls = None scheduler_kwargs = None has_two_text_encoders = False unet_kwargs = None vae_kwargs = None def get_dummy_components(self, scheduler_cls=None, use_dora=False): scheduler_cls = self.scheduler_cls if scheduler_cls is None else scheduler_cls rank = 4 torch.manual_seed(0) unet = UNet2DConditionModel(**self.unet_kwargs) scheduler = scheduler_cls(**self.scheduler_kwargs) torch.manual_seed(0) vae = AutoencoderKL(**self.vae_kwargs) text_encoder = CLIPTextModel.from_pretrained("peft-internal-testing/tiny-clip-text-2") tokenizer = CLIPTokenizer.from_pretrained("peft-internal-testing/tiny-clip-text-2") if self.has_two_text_encoders: text_encoder_2 = CLIPTextModelWithProjection.from_pretrained("peft-internal-testing/tiny-clip-text-2") tokenizer_2 = CLIPTokenizer.from_pretrained("peft-internal-testing/tiny-clip-text-2") text_lora_config = LoraConfig( r=rank, lora_alpha=rank, target_modules=["q_proj", "k_proj", "v_proj", "out_proj"], init_lora_weights=False, use_dora=use_dora, ) unet_lora_config = LoraConfig( r=rank, lora_alpha=rank, target_modules=["to_q", "to_k", "to_v", "to_out.0"], init_lora_weights=False, use_dora=use_dora, ) if self.has_two_text_encoders: pipeline_components = { "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, "image_encoder": None, "feature_extractor": None, } else: pipeline_components = { "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "safety_checker": None, "feature_extractor": None, "image_encoder": None, } return pipeline_components, text_lora_config, unet_lora_config def get_dummy_inputs(self, with_generator=True): batch_size = 1 sequence_length = 10 
num_channels = 4 sizes = (32, 32) generator = torch.manual_seed(0) noise = floats_tensor((batch_size, num_channels) + sizes) input_ids = torch.randint(1, sequence_length, size=(batch_size, sequence_length), generator=generator) pipeline_inputs = { "prompt": "A painting of a squirrel eating a burger", "num_inference_steps": 5, "guidance_scale": 6.0, "output_type": "np", } if with_generator: pipeline_inputs.update({"generator": generator}) return noise, input_ids, pipeline_inputs # copied from: https://colab.research.google.com/gist/sayakpaul/df2ef6e1ae6d8c10a49d859883b10860/scratchpad.ipynb def get_dummy_tokens(self): max_seq_length = 77 inputs = torch.randint(2, 56, size=(1, max_seq_length), generator=torch.manual_seed(0)) prepared_inputs = {} prepared_inputs["input_ids"] = inputs return prepared_inputs def test_simple_inference(self): """ Tests a simple inference and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, _ = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs() output_no_lora = pipe(**inputs).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) def test_simple_inference_with_text_lora(self): """ Tests a simple inference with lora attached on the text encoder and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, _ = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) output_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( not np.allclose(output_lora, output_no_lora, atol=1e-3, rtol=1e-3), "Lora should change the output" ) def test_simple_inference_with_text_lora_and_scale(self): """ Tests a simple inference with lora attached on the text encoder + scale argument and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, _ = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) output_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( not np.allclose(output_lora, output_no_lora, atol=1e-3, rtol=1e-3), "Lora should change the output" ) 
output_lora_scale = pipe( **inputs, generator=torch.manual_seed(0), cross_attention_kwargs={"scale": 0.5} ).images self.assertTrue( not np.allclose(output_lora, output_lora_scale, atol=1e-3, rtol=1e-3), "Lora + scale should change the output", ) output_lora_0_scale = pipe( **inputs, generator=torch.manual_seed(0), cross_attention_kwargs={"scale": 0.0} ).images self.assertTrue( np.allclose(output_no_lora, output_lora_0_scale, atol=1e-3, rtol=1e-3), "Lora + 0 scale should lead to same result as no LoRA", ) def test_simple_inference_with_text_lora_fused(self): """ Tests a simple inference with lora attached into text encoder + fuses the lora weights into base model and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, _ = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.fuse_lora() # Fusing should still keep the LoRA layers self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") if self.has_two_text_encoders: self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) ouput_fused = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertFalse( np.allclose(ouput_fused, output_no_lora, atol=1e-3, rtol=1e-3), "Fused lora should change the output" ) def test_simple_inference_with_text_lora_unloaded(self): """ Tests a simple inference with lora attached to text encoder, then unloads the lora weights and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, _ = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.unload_lora_weights() # unloading should remove the LoRA layers self.assertFalse( check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly unloaded in text encoder" ) if self.has_two_text_encoders: self.assertFalse( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly unloaded in text encoder 2", ) ouput_unloaded = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(ouput_unloaded, output_no_lora, atol=1e-3, rtol=1e-3), "Fused lora should change the output", ) def test_simple_inference_with_text_lora_save_load(self): """ Tests a simple 
usecase where users could use saving utilities for LoRA. """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, _ = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) images_lora = pipe(**inputs, generator=torch.manual_seed(0)).images with tempfile.TemporaryDirectory() as tmpdirname: text_encoder_state_dict = get_peft_model_state_dict(pipe.text_encoder) if self.has_two_text_encoders: text_encoder_2_state_dict = get_peft_model_state_dict(pipe.text_encoder_2) self.pipeline_class.save_lora_weights( save_directory=tmpdirname, text_encoder_lora_layers=text_encoder_state_dict, text_encoder_2_lora_layers=text_encoder_2_state_dict, safe_serialization=False, ) else: self.pipeline_class.save_lora_weights( save_directory=tmpdirname, text_encoder_lora_layers=text_encoder_state_dict, safe_serialization=False, ) self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) pipe.unload_lora_weights() pipe.load_lora_weights(os.path.join(tmpdirname, "pytorch_lora_weights.bin")) images_lora_from_pretrained = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") if self.has_two_text_encoders: self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) self.assertTrue( np.allclose(images_lora, images_lora_from_pretrained, atol=1e-3, rtol=1e-3), "Loading from saved checkpoints should give same results.", ) def test_simple_inference_save_pretrained(self): """ Tests a simple usecase where users could use saving utilities for LoRA through save_pretrained """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, _ = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) images_lora = pipe(**inputs, generator=torch.manual_seed(0)).images with tempfile.TemporaryDirectory() as tmpdirname: pipe.save_pretrained(tmpdirname) pipe_from_pretrained = self.pipeline_class.from_pretrained(tmpdirname) pipe_from_pretrained.to(torch_device) self.assertTrue( check_if_lora_correctly_set(pipe_from_pretrained.text_encoder), "Lora not correctly set in text encoder", ) if self.has_two_text_encoders: self.assertTrue( 
check_if_lora_correctly_set(pipe_from_pretrained.text_encoder_2), "Lora not correctly set in text encoder 2", ) images_lora_save_pretrained = pipe_from_pretrained(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(images_lora, images_lora_save_pretrained, atol=1e-3, rtol=1e-3), "Loading from saved checkpoints should give same results.", ) def test_simple_inference_with_text_unet_lora_save_load(self): """ Tests a simple usecase where users could use saving utilities for LoRA for Unet + text encoder """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) pipe.unet.add_adapter(unet_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) images_lora = pipe(**inputs, generator=torch.manual_seed(0)).images with tempfile.TemporaryDirectory() as tmpdirname: text_encoder_state_dict = get_peft_model_state_dict(pipe.text_encoder) unet_state_dict = get_peft_model_state_dict(pipe.unet) if self.has_two_text_encoders: text_encoder_2_state_dict = get_peft_model_state_dict(pipe.text_encoder_2) self.pipeline_class.save_lora_weights( save_directory=tmpdirname, text_encoder_lora_layers=text_encoder_state_dict, text_encoder_2_lora_layers=text_encoder_2_state_dict, unet_lora_layers=unet_state_dict, safe_serialization=False, ) else: self.pipeline_class.save_lora_weights( save_directory=tmpdirname, text_encoder_lora_layers=text_encoder_state_dict, unet_lora_layers=unet_state_dict, safe_serialization=False, ) self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) pipe.unload_lora_weights() pipe.load_lora_weights(os.path.join(tmpdirname, "pytorch_lora_weights.bin")) images_lora_from_pretrained = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) self.assertTrue( np.allclose(images_lora, images_lora_from_pretrained, atol=1e-3, rtol=1e-3), "Loading from saved checkpoints should give same results.", ) def test_simple_inference_with_text_unet_lora_and_scale(self): """ Tests a simple inference with lora attached on the text encoder + Unet + scale argument and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, 
generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) pipe.unet.add_adapter(unet_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) output_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( not np.allclose(output_lora, output_no_lora, atol=1e-3, rtol=1e-3), "Lora should change the output" ) output_lora_scale = pipe( **inputs, generator=torch.manual_seed(0), cross_attention_kwargs={"scale": 0.5} ).images self.assertTrue( not np.allclose(output_lora, output_lora_scale, atol=1e-3, rtol=1e-3), "Lora + scale should change the output", ) output_lora_0_scale = pipe( **inputs, generator=torch.manual_seed(0), cross_attention_kwargs={"scale": 0.0} ).images self.assertTrue( np.allclose(output_no_lora, output_lora_0_scale, atol=1e-3, rtol=1e-3), "Lora + 0 scale should lead to same result as no LoRA", ) self.assertTrue( pipe.text_encoder.text_model.encoder.layers[0].self_attn.q_proj.scaling["default"] == 1.0, "The scaling parameter has not been correctly restored!", ) def test_simple_inference_with_text_lora_unet_fused(self): """ Tests a simple inference with lora attached into text encoder + fuses the lora weights into base model and makes sure it works as expected - with unet """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) pipe.unet.add_adapter(unet_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.fuse_lora() # Fusing should still keep the LoRA layers self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in unet") if self.has_two_text_encoders: self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) ouput_fused = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertFalse( np.allclose(ouput_fused, output_no_lora, atol=1e-3, rtol=1e-3), "Fused lora should change the output" ) def test_simple_inference_with_text_unet_lora_unloaded(self): """ Tests a simple inference with lora attached to text encoder and unet, then unloads the lora weights and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) 
pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) pipe.unet.add_adapter(unet_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.unload_lora_weights() # unloading should remove the LoRA layers self.assertFalse( check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly unloaded in text encoder" ) self.assertFalse(check_if_lora_correctly_set(pipe.unet), "Lora not correctly unloaded in Unet") if self.has_two_text_encoders: self.assertFalse( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly unloaded in text encoder 2", ) ouput_unloaded = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(ouput_unloaded, output_no_lora, atol=1e-3, rtol=1e-3), "Fused lora should change the output", ) def test_simple_inference_with_text_unet_lora_unfused(self): """ Tests a simple inference with lora attached to text encoder and unet, then unloads the lora weights and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) pipe.text_encoder.add_adapter(text_lora_config) pipe.unet.add_adapter(unet_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.fuse_lora() output_fused_lora = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.unfuse_lora() output_unfused_lora = pipe(**inputs, generator=torch.manual_seed(0)).images # unloading should remove the LoRA layers self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Unfuse should still keep LoRA layers") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Unfuse should still keep LoRA layers") if self.has_two_text_encoders: self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Unfuse should still keep LoRA layers" ) # Fuse and unfuse should lead to the same results self.assertTrue( np.allclose(output_fused_lora, output_unfused_lora, atol=1e-3, rtol=1e-3), "Fused lora should change the output", ) def test_simple_inference_with_text_unet_multi_adapter(self): """ Tests a simple inference with lora attached to text encoder and unet, attaches multiple adapters and set them """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs 
= self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder.add_adapter(text_lora_config, "adapter-2") pipe.unet.add_adapter(unet_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-2") self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-2") self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.set_adapters("adapter-1") output_adapter_1 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters("adapter-2") output_adapter_2 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters(["adapter-1", "adapter-2"]) output_adapter_mixed = pipe(**inputs, generator=torch.manual_seed(0)).images # Fuse and unfuse should lead to the same results self.assertFalse( np.allclose(output_adapter_1, output_adapter_2, atol=1e-3, rtol=1e-3), "Adapter 1 and 2 should give different results", ) self.assertFalse( np.allclose(output_adapter_1, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Adapter 1 and mixed adapters should give different results", ) self.assertFalse( np.allclose(output_adapter_2, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Adapter 2 and mixed adapters should give different results", ) pipe.disable_lora() output_disabled = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(output_no_lora, output_disabled, atol=1e-3, rtol=1e-3), "output with no lora and output with lora disabled should give same results", ) def test_simple_inference_with_text_unet_block_scale(self): """ Tests a simple inference with lora attached to text encoder and unet, attaches one adapter and set differnt weights for different blocks (i.e. 
block lora) """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-1") self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-1") self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) weights_1 = {"text_encoder": 2, "unet": {"down": 5}} pipe.set_adapters("adapter-1", weights_1) output_weights_1 = pipe(**inputs, generator=torch.manual_seed(0)).images weights_2 = {"unet": {"up": 5}} pipe.set_adapters("adapter-1", weights_2) output_weights_2 = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertFalse( np.allclose(output_weights_1, output_weights_2, atol=1e-3, rtol=1e-3), "LoRA weights 1 and 2 should give different results", ) self.assertFalse( np.allclose(output_no_lora, output_weights_1, atol=1e-3, rtol=1e-3), "No adapter and LoRA weights 1 should give different results", ) self.assertFalse( np.allclose(output_no_lora, output_weights_2, atol=1e-3, rtol=1e-3), "No adapter and LoRA weights 2 should give different results", ) pipe.disable_lora() output_disabled = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(output_no_lora, output_disabled, atol=1e-3, rtol=1e-3), "output with no lora and output with lora disabled should give same results", ) def test_simple_inference_with_text_unet_multi_adapter_block_lora(self): """ Tests a simple inference with lora attached to text encoder and unet, attaches multiple adapters and set differnt weights for different blocks (i.e. 
block lora) """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder.add_adapter(text_lora_config, "adapter-2") pipe.unet.add_adapter(unet_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-2") self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-2") self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) scales_1 = {"text_encoder": 2, "unet": {"down": 5}} scales_2 = {"unet": {"down": 5, "mid": 5}} pipe.set_adapters("adapter-1", scales_1) output_adapter_1 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters("adapter-2", scales_2) output_adapter_2 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters(["adapter-1", "adapter-2"], [scales_1, scales_2]) output_adapter_mixed = pipe(**inputs, generator=torch.manual_seed(0)).images # Fuse and unfuse should lead to the same results self.assertFalse( np.allclose(output_adapter_1, output_adapter_2, atol=1e-3, rtol=1e-3), "Adapter 1 and 2 should give different results", ) self.assertFalse( np.allclose(output_adapter_1, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Adapter 1 and mixed adapters should give different results", ) self.assertFalse( np.allclose(output_adapter_2, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Adapter 2 and mixed adapters should give different results", ) pipe.disable_lora() output_disabled = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(output_no_lora, output_disabled, atol=1e-3, rtol=1e-3), "output with no lora and output with lora disabled should give same results", ) # a mismatching number of adapter_names and adapter_weights should raise an error with self.assertRaises(ValueError): pipe.set_adapters(["adapter-1", "adapter-2"], [scales_1]) def test_simple_inference_with_text_unet_block_scale_for_all_dict_options(self): """Tests that any valid combination of lora block scales can be used in pipe.set_adapter""" def updown_options(blocks_with_tf, layers_per_block, value): """ Generate every possible combination for how a lora weight dict for the up/down part can be. E.g. 2, {"block_1": 2}, {"block_1": [2,2,2]}, {"block_1": 2, "block_2": [2,2,2]}, ... """ num_val = value list_val = [value] * layers_per_block node_opts = [None, num_val, list_val] node_opts_foreach_block = [node_opts] * len(blocks_with_tf) updown_opts = [num_val] for nodes in product(*node_opts_foreach_block): if all(n is None for n in nodes): continue opt = {} for b, n in zip(blocks_with_tf, nodes): if n is not None: opt["block_" + str(b)] = n updown_opts.append(opt) return updown_opts def all_possible_dict_opts(unet, value): """ Generate every possible combination for how a lora weight dict can be. E.g. 2, {"unet: {"down": 2}}, {"unet: {"down": [2,2,2]}}, {"unet: {"mid": 2, "up": [2,2,2]}}, ... 
""" down_blocks_with_tf = [i for i, d in enumerate(unet.down_blocks) if hasattr(d, "attentions")] up_blocks_with_tf = [i for i, u in enumerate(unet.up_blocks) if hasattr(u, "attentions")] layers_per_block = unet.config.layers_per_block text_encoder_opts = [None, value] text_encoder_2_opts = [None, value] mid_opts = [None, value] down_opts = [None] + updown_options(down_blocks_with_tf, layers_per_block, value) up_opts = [None] + updown_options(up_blocks_with_tf, layers_per_block + 1, value) opts = [] for t1, t2, d, m, u in product(text_encoder_opts, text_encoder_2_opts, down_opts, mid_opts, up_opts): if all(o is None for o in (t1, t2, d, m, u)): continue opt = {} if t1 is not None: opt["text_encoder"] = t1 if t2 is not None: opt["text_encoder_2"] = t2 if all(o is None for o in (d, m, u)): # no unet scaling continue opt["unet"] = {} if d is not None: opt["unet"]["down"] = d if m is not None: opt["unet"]["mid"] = m if u is not None: opt["unet"]["up"] = u opts.append(opt) return opts components, text_lora_config, unet_lora_config = self.get_dummy_components(self.scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-1") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-1") for scale_dict in all_possible_dict_opts(pipe.unet, value=1234): # test if lora block scales can be set with this scale_dict if not self.has_two_text_encoders and "text_encoder_2" in scale_dict: del scale_dict["text_encoder_2"] pipe.set_adapters("adapter-1", scale_dict) # test will fail if this line throws an error def test_simple_inference_with_text_unet_multi_adapter_delete_adapter(self): """ Tests a simple inference with lora attached to text encoder and unet, attaches multiple adapters and set/delete them """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder.add_adapter(text_lora_config, "adapter-2") pipe.unet.add_adapter(unet_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-2") self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-2") self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.set_adapters("adapter-1") output_adapter_1 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters("adapter-2") output_adapter_2 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters(["adapter-1", "adapter-2"]) output_adapter_mixed = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertFalse( np.allclose(output_adapter_1, output_adapter_2, atol=1e-3, rtol=1e-3), "Adapter 1 and 2 should give different results", ) 
self.assertFalse( np.allclose(output_adapter_1, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Adapter 1 and mixed adapters should give different results", ) self.assertFalse( np.allclose(output_adapter_2, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Adapter 2 and mixed adapters should give different results", ) pipe.delete_adapters("adapter-1") output_deleted_adapter_1 = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(output_deleted_adapter_1, output_adapter_2, atol=1e-3, rtol=1e-3), "Adapter 1 and 2 should give different results", ) pipe.delete_adapters("adapter-2") output_deleted_adapters = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(output_no_lora, output_deleted_adapters, atol=1e-3, rtol=1e-3), "output with no lora and output with lora disabled should give same results", ) pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder.add_adapter(text_lora_config, "adapter-2") pipe.unet.add_adapter(unet_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-2") pipe.set_adapters(["adapter-1", "adapter-2"]) pipe.delete_adapters(["adapter-1", "adapter-2"]) output_deleted_adapters = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(output_no_lora, output_deleted_adapters, atol=1e-3, rtol=1e-3), "output with no lora and output with lora disabled should give same results", ) def test_simple_inference_with_text_unet_multi_adapter_weighted(self): """ Tests a simple inference with lora attached to text encoder and unet, attaches multiple adapters and set them """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder.add_adapter(text_lora_config, "adapter-2") pipe.unet.add_adapter(unet_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-2") self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-2") self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.set_adapters("adapter-1") output_adapter_1 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters("adapter-2") output_adapter_2 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters(["adapter-1", "adapter-2"]) output_adapter_mixed = pipe(**inputs, generator=torch.manual_seed(0)).images # Fuse and unfuse should lead to the same results self.assertFalse( np.allclose(output_adapter_1, output_adapter_2, atol=1e-3, rtol=1e-3), "Adapter 1 and 2 should give different results", ) self.assertFalse( np.allclose(output_adapter_1, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Adapter 1 and mixed adapters should give different results", ) self.assertFalse( np.allclose(output_adapter_2, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Adapter 2 and mixed adapters should give different results", ) 
pipe.set_adapters(["adapter-1", "adapter-2"], [0.5, 0.6]) output_adapter_mixed_weighted = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertFalse( np.allclose(output_adapter_mixed_weighted, output_adapter_mixed, atol=1e-3, rtol=1e-3), "Weighted adapter and mixed adapter should give different results", ) pipe.disable_lora() output_disabled = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(output_no_lora, output_disabled, atol=1e-3, rtol=1e-3), "output with no lora and output with lora disabled should give same results", ) @skip_mps def test_lora_fuse_nan(self): for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-1") self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") # corrupt one LoRA weight with `inf` values with torch.no_grad(): pipe.unet.mid_block.attentions[0].transformer_blocks[0].attn1.to_q.lora_A["adapter-1"].weight += float( "inf" ) # with `safe_fusing=True` we should see an Error with self.assertRaises(ValueError): pipe.fuse_lora(safe_fusing=True) # without we should not see an error, but every image will be black pipe.fuse_lora(safe_fusing=False) out = pipe("test", num_inference_steps=2, output_type="np").images self.assertTrue(np.isnan(out).all()) def test_get_adapters(self): """ Tests a simple usecase where we attach multiple adapters and check if the results are the expected results """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-1") adapter_names = pipe.get_active_adapters() self.assertListEqual(adapter_names, ["adapter-1"]) pipe.text_encoder.add_adapter(text_lora_config, "adapter-2") pipe.unet.add_adapter(unet_lora_config, "adapter-2") adapter_names = pipe.get_active_adapters() self.assertListEqual(adapter_names, ["adapter-2"]) pipe.set_adapters(["adapter-1", "adapter-2"]) self.assertListEqual(pipe.get_active_adapters(), ["adapter-1", "adapter-2"]) def test_get_list_adapters(self): """ Tests a simple usecase where we attach multiple adapters and check if the results are the expected results """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-1") adapter_names = pipe.get_list_adapters() self.assertDictEqual(adapter_names, {"text_encoder": ["adapter-1"], "unet": ["adapter-1"]}) pipe.text_encoder.add_adapter(text_lora_config, "adapter-2") pipe.unet.add_adapter(unet_lora_config, "adapter-2") adapter_names = pipe.get_list_adapters() 
self.assertDictEqual( adapter_names, {"text_encoder": ["adapter-1", "adapter-2"], "unet": ["adapter-1", "adapter-2"]} ) pipe.set_adapters(["adapter-1", "adapter-2"]) self.assertDictEqual( pipe.get_list_adapters(), {"unet": ["adapter-1", "adapter-2"], "text_encoder": ["adapter-1", "adapter-2"]}, ) pipe.unet.add_adapter(unet_lora_config, "adapter-3") self.assertDictEqual( pipe.get_list_adapters(), {"unet": ["adapter-1", "adapter-2", "adapter-3"], "text_encoder": ["adapter-1", "adapter-2"]}, ) @require_peft_version_greater(peft_version="0.6.2") def test_simple_inference_with_text_lora_unet_fused_multi(self): """ Tests a simple inference with lora attached into text encoder + fuses the lora weights into base model and makes sure it works as expected - with unet and multi-adapter case """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config, "adapter-1") pipe.unet.add_adapter(unet_lora_config, "adapter-1") # Attach a second adapter pipe.text_encoder.add_adapter(text_lora_config, "adapter-2") pipe.unet.add_adapter(unet_lora_config, "adapter-2") self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-1") pipe.text_encoder_2.add_adapter(text_lora_config, "adapter-2") self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) # set them to multi-adapter inference mode pipe.set_adapters(["adapter-1", "adapter-2"]) ouputs_all_lora = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.set_adapters(["adapter-1"]) ouputs_lora_1 = pipe(**inputs, generator=torch.manual_seed(0)).images pipe.fuse_lora(adapter_names=["adapter-1"]) # Fusing should still keep the LoRA layers so outpout should remain the same outputs_lora_1_fused = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(ouputs_lora_1, outputs_lora_1_fused, atol=1e-3, rtol=1e-3), "Fused lora should not change the output", ) pipe.unfuse_lora() pipe.fuse_lora(adapter_names=["adapter-2", "adapter-1"]) # Fusing should still keep the LoRA layers output_all_lora_fused = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue( np.allclose(output_all_lora_fused, ouputs_all_lora, atol=1e-3, rtol=1e-3), "Fused lora should not change the output", ) @require_peft_version_greater(peft_version="0.9.0") def test_simple_inference_with_dora(self): for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls, use_dora=True) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) output_no_dora_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertTrue(output_no_dora_lora.shape == (1, 64, 64, 3)) pipe.text_encoder.add_adapter(text_lora_config) pipe.unet.add_adapter(unet_lora_config) 
self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) output_dora_lora = pipe(**inputs, generator=torch.manual_seed(0)).images self.assertFalse( np.allclose(output_dora_lora, output_no_dora_lora, atol=1e-3, rtol=1e-3), "DoRA lora should change the output", ) @unittest.skip("This is failing for now - need to investigate") def test_simple_inference_with_text_unet_lora_unfused_torch_compile(self): """ Tests a simple inference with lora attached to text encoder and unet, then unloads the lora weights and makes sure it works as expected """ for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, text_lora_config, unet_lora_config = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _, _, inputs = self.get_dummy_inputs(with_generator=False) pipe.text_encoder.add_adapter(text_lora_config) pipe.unet.add_adapter(unet_lora_config) self.assertTrue(check_if_lora_correctly_set(pipe.text_encoder), "Lora not correctly set in text encoder") self.assertTrue(check_if_lora_correctly_set(pipe.unet), "Lora not correctly set in Unet") if self.has_two_text_encoders: pipe.text_encoder_2.add_adapter(text_lora_config) self.assertTrue( check_if_lora_correctly_set(pipe.text_encoder_2), "Lora not correctly set in text encoder 2" ) pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) if self.has_two_text_encoders: pipe.text_encoder_2 = torch.compile(pipe.text_encoder_2, mode="reduce-overhead", fullgraph=True) # Just makes sure it works.. _ = pipe(**inputs, generator=torch.manual_seed(0)).images def test_modify_padding_mode(self): def set_pad_mode(network, mode="circular"): for _, module in network.named_modules(): if isinstance(module, torch.nn.Conv2d): module.padding_mode = mode for scheduler_cls in [DDIMScheduler, LCMScheduler]: components, _, _ = self.get_dummy_components(scheduler_cls) pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) _pad_mode = "circular" set_pad_mode(pipe.vae, _pad_mode) set_pad_mode(pipe.unet, _pad_mode) _, _, inputs = self.get_dummy_inputs() _ = pipe(**inputs).images
Source file: diffusers/tests/lora/utils.py
{ "file_path": "diffusers/tests/lora/utils.py", "repo_id": "diffusers", "token_count": 30903 }
243
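The block-scale tests above pass nested dictionaries such as `{"text_encoder": 2, "unet": {"down": 5, "mid": 5}}` to `set_adapters`. As a rough sketch of what that looks like outside the test harness, the snippet below assumes `pipe` is a LoRA-capable pipeline that already has an adapter attached under the name `"adapter-1"` (the tests build a tiny dummy pipeline, but any SDXL pipeline with a loaded LoRA is driven the same way). The scale values are illustrative, not recommendations.

```python
# Minimal sketch, assuming `pipe` already holds a LoRA named "adapter-1".
# Scales can be a single number or a nested dict keyed by component and UNet
# block group, mirroring the dictionaries built in the tests above.
scales = {
    "text_encoder": 0.5,                           # scale LoRA layers in the text encoder
    "unet": {"down": 0.9, "mid": 0.6, "up": 0.8},  # per-block-group scales inside the UNet
}
pipe.set_adapters("adapter-1", scales)

images = pipe("a prompt that includes your LoRA trigger words", num_inference_steps=30).images
```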
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import gc import os import tempfile import unittest from collections import OrderedDict import torch from parameterized import parameterized from pytest import mark from diffusers import UNet2DConditionModel from diffusers.models.attention_processor import ( CustomDiffusionAttnProcessor, IPAdapterAttnProcessor, IPAdapterAttnProcessor2_0, ) from diffusers.models.embeddings import ImageProjection, IPAdapterFaceIDImageProjection, IPAdapterPlusImageProjection from diffusers.utils import logging from diffusers.utils.import_utils import is_xformers_available from diffusers.utils.testing_utils import ( backend_empty_cache, enable_full_determinism, floats_tensor, load_hf_numpy, require_torch_accelerator, require_torch_accelerator_with_fp16, require_torch_accelerator_with_training, require_torch_gpu, skip_mps, slow, torch_all_close, torch_device, ) from ..test_modeling_common import ModelTesterMixin, UNetTesterMixin logger = logging.get_logger(__name__) enable_full_determinism() def create_ip_adapter_state_dict(model): # "ip_adapter" (cross-attention weights) ip_cross_attn_state_dict = {} key_id = 1 for name in model.attn_processors.keys(): cross_attention_dim = ( None if name.endswith("attn1.processor") or "motion_module" in name else model.config.cross_attention_dim ) if name.startswith("mid_block"): hidden_size = model.config.block_out_channels[-1] elif name.startswith("up_blocks"): block_id = int(name[len("up_blocks.")]) hidden_size = list(reversed(model.config.block_out_channels))[block_id] elif name.startswith("down_blocks"): block_id = int(name[len("down_blocks.")]) hidden_size = model.config.block_out_channels[block_id] if cross_attention_dim is not None: sd = IPAdapterAttnProcessor( hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, scale=1.0 ).state_dict() ip_cross_attn_state_dict.update( { f"{key_id}.to_k_ip.weight": sd["to_k_ip.0.weight"], f"{key_id}.to_v_ip.weight": sd["to_v_ip.0.weight"], } ) key_id += 2 # "image_proj" (ImageProjection layer weights) cross_attention_dim = model.config["cross_attention_dim"] image_projection = ImageProjection( cross_attention_dim=cross_attention_dim, image_embed_dim=cross_attention_dim, num_image_text_embeds=4 ) ip_image_projection_state_dict = {} sd = image_projection.state_dict() ip_image_projection_state_dict.update( { "proj.weight": sd["image_embeds.weight"], "proj.bias": sd["image_embeds.bias"], "norm.weight": sd["norm.weight"], "norm.bias": sd["norm.bias"], } ) del sd ip_state_dict = {} ip_state_dict.update({"image_proj": ip_image_projection_state_dict, "ip_adapter": ip_cross_attn_state_dict}) return ip_state_dict def create_ip_adapter_plus_state_dict(model): # "ip_adapter" (cross-attention weights) ip_cross_attn_state_dict = {} key_id = 1 for name in model.attn_processors.keys(): cross_attention_dim = None if name.endswith("attn1.processor") else model.config.cross_attention_dim if name.startswith("mid_block"): hidden_size = 
model.config.block_out_channels[-1] elif name.startswith("up_blocks"): block_id = int(name[len("up_blocks.")]) hidden_size = list(reversed(model.config.block_out_channels))[block_id] elif name.startswith("down_blocks"): block_id = int(name[len("down_blocks.")]) hidden_size = model.config.block_out_channels[block_id] if cross_attention_dim is not None: sd = IPAdapterAttnProcessor( hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, scale=1.0 ).state_dict() ip_cross_attn_state_dict.update( { f"{key_id}.to_k_ip.weight": sd["to_k_ip.0.weight"], f"{key_id}.to_v_ip.weight": sd["to_v_ip.0.weight"], } ) key_id += 2 # "image_proj" (ImageProjection layer weights) cross_attention_dim = model.config["cross_attention_dim"] image_projection = IPAdapterPlusImageProjection( embed_dims=cross_attention_dim, output_dims=cross_attention_dim, dim_head=32, heads=2, num_queries=4 ) ip_image_projection_state_dict = OrderedDict() for k, v in image_projection.state_dict().items(): if "2.to" in k: k = k.replace("2.to", "0.to") elif "3.0.weight" in k: k = k.replace("3.0.weight", "1.0.weight") elif "3.0.bias" in k: k = k.replace("3.0.bias", "1.0.bias") elif "3.0.weight" in k: k = k.replace("3.0.weight", "1.0.weight") elif "3.1.net.0.proj.weight" in k: k = k.replace("3.1.net.0.proj.weight", "1.1.weight") elif "3.net.2.weight" in k: k = k.replace("3.net.2.weight", "1.3.weight") elif "layers.0.0" in k: k = k.replace("layers.0.0", "layers.0.0.norm1") elif "layers.0.1" in k: k = k.replace("layers.0.1", "layers.0.0.norm2") elif "layers.1.0" in k: k = k.replace("layers.1.0", "layers.1.0.norm1") elif "layers.1.1" in k: k = k.replace("layers.1.1", "layers.1.0.norm2") elif "layers.2.0" in k: k = k.replace("layers.2.0", "layers.2.0.norm1") elif "layers.2.1" in k: k = k.replace("layers.2.1", "layers.2.0.norm2") if "norm_cross" in k: ip_image_projection_state_dict[k.replace("norm_cross", "norm1")] = v elif "layer_norm" in k: ip_image_projection_state_dict[k.replace("layer_norm", "norm2")] = v elif "to_k" in k: ip_image_projection_state_dict[k.replace("to_k", "to_kv")] = torch.cat([v, v], dim=0) elif "to_v" in k: continue elif "to_out.0" in k: ip_image_projection_state_dict[k.replace("to_out.0", "to_out")] = v else: ip_image_projection_state_dict[k] = v ip_state_dict = {} ip_state_dict.update({"image_proj": ip_image_projection_state_dict, "ip_adapter": ip_cross_attn_state_dict}) return ip_state_dict def create_ip_adapter_faceid_state_dict(model): # "ip_adapter" (cross-attention weights) # no LoRA weights ip_cross_attn_state_dict = {} key_id = 1 for name in model.attn_processors.keys(): cross_attention_dim = ( None if name.endswith("attn1.processor") or "motion_module" in name else model.config.cross_attention_dim ) if name.startswith("mid_block"): hidden_size = model.config.block_out_channels[-1] elif name.startswith("up_blocks"): block_id = int(name[len("up_blocks.")]) hidden_size = list(reversed(model.config.block_out_channels))[block_id] elif name.startswith("down_blocks"): block_id = int(name[len("down_blocks.")]) hidden_size = model.config.block_out_channels[block_id] if cross_attention_dim is not None: sd = IPAdapterAttnProcessor( hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, scale=1.0 ).state_dict() ip_cross_attn_state_dict.update( { f"{key_id}.to_k_ip.weight": sd["to_k_ip.0.weight"], f"{key_id}.to_v_ip.weight": sd["to_v_ip.0.weight"], } ) key_id += 2 # "image_proj" (ImageProjection layer weights) cross_attention_dim = model.config["cross_attention_dim"] image_projection = 
IPAdapterFaceIDImageProjection( cross_attention_dim=cross_attention_dim, image_embed_dim=cross_attention_dim, mult=2, num_tokens=4 ) ip_image_projection_state_dict = {} sd = image_projection.state_dict() ip_image_projection_state_dict.update( { "proj.0.weight": sd["ff.net.0.proj.weight"], "proj.0.bias": sd["ff.net.0.proj.bias"], "proj.2.weight": sd["ff.net.2.weight"], "proj.2.bias": sd["ff.net.2.bias"], "norm.weight": sd["norm.weight"], "norm.bias": sd["norm.bias"], } ) del sd ip_state_dict = {} ip_state_dict.update({"image_proj": ip_image_projection_state_dict, "ip_adapter": ip_cross_attn_state_dict}) return ip_state_dict def create_custom_diffusion_layers(model, mock_weights: bool = True): train_kv = True train_q_out = True custom_diffusion_attn_procs = {} st = model.state_dict() for name, _ in model.attn_processors.items(): cross_attention_dim = None if name.endswith("attn1.processor") else model.config.cross_attention_dim if name.startswith("mid_block"): hidden_size = model.config.block_out_channels[-1] elif name.startswith("up_blocks"): block_id = int(name[len("up_blocks.")]) hidden_size = list(reversed(model.config.block_out_channels))[block_id] elif name.startswith("down_blocks"): block_id = int(name[len("down_blocks.")]) hidden_size = model.config.block_out_channels[block_id] layer_name = name.split(".processor")[0] weights = { "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], } if train_q_out: weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] if cross_attention_dim is not None: custom_diffusion_attn_procs[name] = CustomDiffusionAttnProcessor( train_kv=train_kv, train_q_out=train_q_out, hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, ).to(model.device) custom_diffusion_attn_procs[name].load_state_dict(weights) if mock_weights: # add 1 to weights to mock trained weights with torch.no_grad(): custom_diffusion_attn_procs[name].to_k_custom_diffusion.weight += 1 custom_diffusion_attn_procs[name].to_v_custom_diffusion.weight += 1 else: custom_diffusion_attn_procs[name] = CustomDiffusionAttnProcessor( train_kv=False, train_q_out=False, hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, ) del st return custom_diffusion_attn_procs class UNet2DConditionModelTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase): model_class = UNet2DConditionModel main_input_name = "sample" # We override the items here because the unet under consideration is small. 
model_split_percents = [0.5, 0.3, 0.4] @property def dummy_input(self): batch_size = 4 num_channels = 4 sizes = (16, 16) noise = floats_tensor((batch_size, num_channels) + sizes).to(torch_device) time_step = torch.tensor([10]).to(torch_device) encoder_hidden_states = floats_tensor((batch_size, 4, 8)).to(torch_device) return {"sample": noise, "timestep": time_step, "encoder_hidden_states": encoder_hidden_states} @property def input_shape(self): return (4, 16, 16) @property def output_shape(self): return (4, 16, 16) def prepare_init_args_and_inputs_for_common(self): init_dict = { "block_out_channels": (4, 8), "norm_num_groups": 4, "down_block_types": ("CrossAttnDownBlock2D", "DownBlock2D"), "up_block_types": ("UpBlock2D", "CrossAttnUpBlock2D"), "cross_attention_dim": 8, "attention_head_dim": 2, "out_channels": 4, "in_channels": 4, "layers_per_block": 1, "sample_size": 16, } inputs_dict = self.dummy_input return init_dict, inputs_dict @unittest.skipIf( torch_device != "cuda" or not is_xformers_available(), reason="XFormers attention is only available with CUDA and `xformers` installed", ) def test_xformers_enable_works(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() model = self.model_class(**init_dict) model.enable_xformers_memory_efficient_attention() assert ( model.mid_block.attentions[0].transformer_blocks[0].attn1.processor.__class__.__name__ == "XFormersAttnProcessor" ), "xformers is not enabled" @require_torch_accelerator_with_training def test_gradient_checkpointing(self): # enable deterministic behavior for gradient checkpointing init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() model = self.model_class(**init_dict) model.to(torch_device) assert not model.is_gradient_checkpointing and model.training out = model(**inputs_dict).sample # run the backwards pass on the model. For backwards pass, for simplicity purpose, # we won't calculate the loss and rather backprop on out.sum() model.zero_grad() labels = torch.randn_like(out) loss = (out - labels).mean() loss.backward() # re-instantiate the model now enabling gradient checkpointing model_2 = self.model_class(**init_dict) # clone model model_2.load_state_dict(model.state_dict()) model_2.to(torch_device) model_2.enable_gradient_checkpointing() assert model_2.is_gradient_checkpointing and model_2.training out_2 = model_2(**inputs_dict).sample # run the backwards pass on the model. 
For backwards pass, for simplicity purpose, # we won't calculate the loss and rather backprop on out.sum() model_2.zero_grad() loss_2 = (out_2 - labels).mean() loss_2.backward() # compare the output and parameters gradients self.assertTrue((loss - loss_2).abs() < 1e-5) named_params = dict(model.named_parameters()) named_params_2 = dict(model_2.named_parameters()) for name, param in named_params.items(): self.assertTrue(torch_all_close(param.grad.data, named_params_2[name].grad.data, atol=5e-5)) def test_model_with_attention_head_dim_tuple(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) model = self.model_class(**init_dict) model.to(torch_device) model.eval() with torch.no_grad(): output = model(**inputs_dict) if isinstance(output, dict): output = output.sample self.assertIsNotNone(output) expected_shape = inputs_dict["sample"].shape self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") def test_model_with_use_linear_projection(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["use_linear_projection"] = True model = self.model_class(**init_dict) model.to(torch_device) model.eval() with torch.no_grad(): output = model(**inputs_dict) if isinstance(output, dict): output = output.sample self.assertIsNotNone(output) expected_shape = inputs_dict["sample"].shape self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") def test_model_with_cross_attention_dim_tuple(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["cross_attention_dim"] = (8, 8) model = self.model_class(**init_dict) model.to(torch_device) model.eval() with torch.no_grad(): output = model(**inputs_dict) if isinstance(output, dict): output = output.sample self.assertIsNotNone(output) expected_shape = inputs_dict["sample"].shape self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") def test_model_with_simple_projection(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() batch_size, _, _, sample_size = inputs_dict["sample"].shape init_dict["class_embed_type"] = "simple_projection" init_dict["projection_class_embeddings_input_dim"] = sample_size inputs_dict["class_labels"] = floats_tensor((batch_size, sample_size)).to(torch_device) model = self.model_class(**init_dict) model.to(torch_device) model.eval() with torch.no_grad(): output = model(**inputs_dict) if isinstance(output, dict): output = output.sample self.assertIsNotNone(output) expected_shape = inputs_dict["sample"].shape self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") def test_model_with_class_embeddings_concat(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() batch_size, _, _, sample_size = inputs_dict["sample"].shape init_dict["class_embed_type"] = "simple_projection" init_dict["projection_class_embeddings_input_dim"] = sample_size init_dict["class_embeddings_concat"] = True inputs_dict["class_labels"] = floats_tensor((batch_size, sample_size)).to(torch_device) model = self.model_class(**init_dict) model.to(torch_device) model.eval() with torch.no_grad(): output = model(**inputs_dict) if isinstance(output, dict): output = output.sample self.assertIsNotNone(output) expected_shape = inputs_dict["sample"].shape self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") def 
test_model_attention_slicing(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) model = self.model_class(**init_dict) model.to(torch_device) model.eval() model.set_attention_slice("auto") with torch.no_grad(): output = model(**inputs_dict) assert output is not None model.set_attention_slice("max") with torch.no_grad(): output = model(**inputs_dict) assert output is not None model.set_attention_slice(2) with torch.no_grad(): output = model(**inputs_dict) assert output is not None def test_model_sliceable_head_dim(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) model = self.model_class(**init_dict) def check_sliceable_dim_attr(module: torch.nn.Module): if hasattr(module, "set_attention_slice"): assert isinstance(module.sliceable_head_dim, int) for child in module.children(): check_sliceable_dim_attr(child) # retrieve number of attention layers for module in model.children(): check_sliceable_dim_attr(module) def test_gradient_checkpointing_is_applied(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) model_class_copy = copy.copy(self.model_class) modules_with_gc_enabled = {} # now monkey patch the following function: # def _set_gradient_checkpointing(self, module, value=False): # if hasattr(module, "gradient_checkpointing"): # module.gradient_checkpointing = value def _set_gradient_checkpointing_new(self, module, value=False): if hasattr(module, "gradient_checkpointing"): module.gradient_checkpointing = value modules_with_gc_enabled[module.__class__.__name__] = True model_class_copy._set_gradient_checkpointing = _set_gradient_checkpointing_new model = model_class_copy(**init_dict) model.enable_gradient_checkpointing() EXPECTED_SET = { "CrossAttnUpBlock2D", "CrossAttnDownBlock2D", "UNetMidBlock2DCrossAttn", "UpBlock2D", "Transformer2DModel", "DownBlock2D", } assert set(modules_with_gc_enabled.keys()) == EXPECTED_SET assert all(modules_with_gc_enabled.values()), "All modules should be enabled" def test_special_attn_proc(self): class AttnEasyProc(torch.nn.Module): def __init__(self, num): super().__init__() self.weight = torch.nn.Parameter(torch.tensor(num)) self.is_run = False self.number = 0 self.counter = 0 def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None, number=None): batch_size, sequence_length, _ = hidden_states.shape attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) query = attn.to_q(hidden_states) encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states key = attn.to_k(encoder_hidden_states) value = attn.to_v(encoder_hidden_states) query = attn.head_to_batch_dim(query) key = attn.head_to_batch_dim(key) value = attn.head_to_batch_dim(value) attention_probs = attn.get_attention_scores(query, key, attention_mask) hidden_states = torch.bmm(attention_probs, value) hidden_states = attn.batch_to_head_dim(hidden_states) # linear proj hidden_states = attn.to_out[0](hidden_states) # dropout hidden_states = attn.to_out[1](hidden_states) hidden_states += self.weight self.is_run = True self.counter += 1 self.number = number return hidden_states # enable deterministic behavior for gradient checkpointing init_dict, inputs_dict = 
self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) model = self.model_class(**init_dict) model.to(torch_device) processor = AttnEasyProc(5.0) model.set_attn_processor(processor) model(**inputs_dict, cross_attention_kwargs={"number": 123}).sample assert processor.counter == 8 assert processor.is_run assert processor.number == 123 @parameterized.expand( [ # fmt: off [torch.bool], [torch.long], [torch.float], # fmt: on ] ) def test_model_xattn_mask(self, mask_dtype): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() model = self.model_class(**{**init_dict, "attention_head_dim": (8, 16), "block_out_channels": (16, 32)}) model.to(torch_device) model.eval() cond = inputs_dict["encoder_hidden_states"] with torch.no_grad(): full_cond_out = model(**inputs_dict).sample assert full_cond_out is not None keepall_mask = torch.ones(*cond.shape[:-1], device=cond.device, dtype=mask_dtype) full_cond_keepallmask_out = model(**{**inputs_dict, "encoder_attention_mask": keepall_mask}).sample assert full_cond_keepallmask_out.allclose( full_cond_out, rtol=1e-05, atol=1e-05 ), "a 'keep all' mask should give the same result as no mask" trunc_cond = cond[:, :-1, :] trunc_cond_out = model(**{**inputs_dict, "encoder_hidden_states": trunc_cond}).sample assert not trunc_cond_out.allclose( full_cond_out, rtol=1e-05, atol=1e-05 ), "discarding the last token from our cond should change the result" batch, tokens, _ = cond.shape mask_last = (torch.arange(tokens) < tokens - 1).expand(batch, -1).to(cond.device, mask_dtype) masked_cond_out = model(**{**inputs_dict, "encoder_attention_mask": mask_last}).sample assert masked_cond_out.allclose( trunc_cond_out, rtol=1e-05, atol=1e-05 ), "masking the last token from our cond should be equivalent to truncating that token out of the condition" # see diffusers.models.attention_processor::Attention#prepare_attention_mask # note: we may not need to fix mask padding to work for stable-diffusion cross-attn masks. # since the use-case (somebody passes in a too-short cross-attn mask) is pretty esoteric. # maybe it's fine that this only works for the unclip use-case. @mark.skip( reason="we currently pad mask by target_length tokens (what unclip needs), whereas stable-diffusion's cross-attn needs to instead pad by remaining_length." ) def test_model_xattn_padding(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() model = self.model_class(**{**init_dict, "attention_head_dim": (8, 16)}) model.to(torch_device) model.eval() cond = inputs_dict["encoder_hidden_states"] with torch.no_grad(): full_cond_out = model(**inputs_dict).sample assert full_cond_out is not None batch, tokens, _ = cond.shape keeplast_mask = (torch.arange(tokens) == tokens - 1).expand(batch, -1).to(cond.device, torch.bool) keeplast_out = model(**{**inputs_dict, "encoder_attention_mask": keeplast_mask}).sample assert not keeplast_out.allclose(full_cond_out), "a 'keep last token' mask should change the result" trunc_mask = torch.zeros(batch, tokens - 1, device=cond.device, dtype=torch.bool) trunc_mask_out = model(**{**inputs_dict, "encoder_attention_mask": trunc_mask}).sample assert trunc_mask_out.allclose( keeplast_out ), "a mask with fewer tokens than condition, will be padded with 'keep' tokens. a 'discard-all' mask missing the final token is thus equivalent to a 'keep last' mask." 
def test_custom_diffusion_processors(self): # enable deterministic behavior for gradient checkpointing init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) model = self.model_class(**init_dict) model.to(torch_device) with torch.no_grad(): sample1 = model(**inputs_dict).sample custom_diffusion_attn_procs = create_custom_diffusion_layers(model, mock_weights=False) # make sure we can set a list of attention processors model.set_attn_processor(custom_diffusion_attn_procs) model.to(torch_device) # test that attn processors can be set to itself model.set_attn_processor(model.attn_processors) with torch.no_grad(): sample2 = model(**inputs_dict).sample assert (sample1 - sample2).abs().max() < 3e-3 def test_custom_diffusion_save_load(self): # enable deterministic behavior for gradient checkpointing init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) torch.manual_seed(0) model = self.model_class(**init_dict) model.to(torch_device) with torch.no_grad(): old_sample = model(**inputs_dict).sample custom_diffusion_attn_procs = create_custom_diffusion_layers(model, mock_weights=False) model.set_attn_processor(custom_diffusion_attn_procs) with torch.no_grad(): sample = model(**inputs_dict).sample with tempfile.TemporaryDirectory() as tmpdirname: model.save_attn_procs(tmpdirname, safe_serialization=False) self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_custom_diffusion_weights.bin"))) torch.manual_seed(0) new_model = self.model_class(**init_dict) new_model.load_attn_procs(tmpdirname, weight_name="pytorch_custom_diffusion_weights.bin") new_model.to(torch_device) with torch.no_grad(): new_sample = new_model(**inputs_dict).sample assert (sample - new_sample).abs().max() < 1e-4 # custom diffusion and no custom diffusion should be the same assert (sample - old_sample).abs().max() < 3e-3 @unittest.skipIf( torch_device != "cuda" or not is_xformers_available(), reason="XFormers attention is only available with CUDA and `xformers` installed", ) def test_custom_diffusion_xformers_on_off(self): # enable deterministic behavior for gradient checkpointing init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) torch.manual_seed(0) model = self.model_class(**init_dict) model.to(torch_device) custom_diffusion_attn_procs = create_custom_diffusion_layers(model, mock_weights=False) model.set_attn_processor(custom_diffusion_attn_procs) # default with torch.no_grad(): sample = model(**inputs_dict).sample model.enable_xformers_memory_efficient_attention() on_sample = model(**inputs_dict).sample model.disable_xformers_memory_efficient_attention() off_sample = model(**inputs_dict).sample assert (sample - on_sample).abs().max() < 1e-4 assert (sample - off_sample).abs().max() < 1e-4 def test_pickle(self): # enable deterministic behavior for gradient checkpointing init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) model = self.model_class(**init_dict) model.to(torch_device) with torch.no_grad(): sample = model(**inputs_dict).sample sample_copy = copy.copy(sample) assert (sample - sample_copy).abs().max() < 1e-4 def test_asymmetrical_unet(self): init_dict, inputs_dict = 
self.prepare_init_args_and_inputs_for_common() # Add asymmetry to configs init_dict["transformer_layers_per_block"] = [[3, 2], 1] init_dict["reverse_transformer_layers_per_block"] = [[3, 4], 1] torch.manual_seed(0) model = self.model_class(**init_dict) model.to(torch_device) output = model(**inputs_dict).sample expected_shape = inputs_dict["sample"].shape # Check if input and output shapes are the same self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") def test_ip_adapter(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 32) init_dict["attention_head_dim"] = (8, 16) model = self.model_class(**init_dict) model.to(torch_device) # forward pass without ip-adapter with torch.no_grad(): sample1 = model(**inputs_dict).sample # update inputs_dict for ip-adapter batch_size = inputs_dict["encoder_hidden_states"].shape[0] # for ip-adapter image_embeds has shape [batch_size, num_image, embed_dim] image_embeds = floats_tensor((batch_size, 1, model.config.cross_attention_dim)).to(torch_device) inputs_dict["added_cond_kwargs"] = {"image_embeds": [image_embeds]} # make ip_adapter_1 and ip_adapter_2 ip_adapter_1 = create_ip_adapter_state_dict(model) image_proj_state_dict_2 = {k: w + 1.0 for k, w in ip_adapter_1["image_proj"].items()} cross_attn_state_dict_2 = {k: w + 1.0 for k, w in ip_adapter_1["ip_adapter"].items()} ip_adapter_2 = {} ip_adapter_2.update({"image_proj": image_proj_state_dict_2, "ip_adapter": cross_attn_state_dict_2}) # forward pass ip_adapter_1 model._load_ip_adapter_weights([ip_adapter_1]) assert model.config.encoder_hid_dim_type == "ip_image_proj" assert model.encoder_hid_proj is not None assert model.down_blocks[0].attentions[0].transformer_blocks[0].attn2.processor.__class__.__name__ in ( "IPAdapterAttnProcessor", "IPAdapterAttnProcessor2_0", ) with torch.no_grad(): sample2 = model(**inputs_dict).sample # forward pass with ip_adapter_2 model._load_ip_adapter_weights([ip_adapter_2]) with torch.no_grad(): sample3 = model(**inputs_dict).sample # forward pass with ip_adapter_1 again model._load_ip_adapter_weights([ip_adapter_1]) with torch.no_grad(): sample4 = model(**inputs_dict).sample # forward pass with multiple ip-adapters and multiple images model._load_ip_adapter_weights([ip_adapter_1, ip_adapter_2]) # set the scale for ip_adapter_2 to 0 so that result should be same as only load ip_adapter_1 for attn_processor in model.attn_processors.values(): if isinstance(attn_processor, (IPAdapterAttnProcessor, IPAdapterAttnProcessor2_0)): attn_processor.scale = [1, 0] image_embeds_multi = image_embeds.repeat(1, 2, 1) inputs_dict["added_cond_kwargs"] = {"image_embeds": [image_embeds_multi, image_embeds_multi]} with torch.no_grad(): sample5 = model(**inputs_dict).sample # forward pass with single ip-adapter & single image when image_embeds is not a list and a 2-d tensor image_embeds = image_embeds.squeeze(1) inputs_dict["added_cond_kwargs"] = {"image_embeds": image_embeds} model._load_ip_adapter_weights(ip_adapter_1) with torch.no_grad(): sample6 = model(**inputs_dict).sample assert not sample1.allclose(sample2, atol=1e-4, rtol=1e-4) assert not sample2.allclose(sample3, atol=1e-4, rtol=1e-4) assert sample2.allclose(sample4, atol=1e-4, rtol=1e-4) assert sample2.allclose(sample5, atol=1e-4, rtol=1e-4) assert sample2.allclose(sample6, atol=1e-4, rtol=1e-4) def test_ip_adapter_plus(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["block_out_channels"] = (16, 
32) init_dict["attention_head_dim"] = (8, 16) model = self.model_class(**init_dict) model.to(torch_device) # forward pass without ip-adapter with torch.no_grad(): sample1 = model(**inputs_dict).sample # update inputs_dict for ip-adapter batch_size = inputs_dict["encoder_hidden_states"].shape[0] # for ip-adapter-plus image_embeds has shape [batch_size, num_image, sequence_length, embed_dim] image_embeds = floats_tensor((batch_size, 1, 1, model.config.cross_attention_dim)).to(torch_device) inputs_dict["added_cond_kwargs"] = {"image_embeds": [image_embeds]} # make ip_adapter_1 and ip_adapter_2 ip_adapter_1 = create_ip_adapter_plus_state_dict(model) image_proj_state_dict_2 = {k: w + 1.0 for k, w in ip_adapter_1["image_proj"].items()} cross_attn_state_dict_2 = {k: w + 1.0 for k, w in ip_adapter_1["ip_adapter"].items()} ip_adapter_2 = {} ip_adapter_2.update({"image_proj": image_proj_state_dict_2, "ip_adapter": cross_attn_state_dict_2}) # forward pass ip_adapter_1 model._load_ip_adapter_weights([ip_adapter_1]) assert model.config.encoder_hid_dim_type == "ip_image_proj" assert model.encoder_hid_proj is not None assert model.down_blocks[0].attentions[0].transformer_blocks[0].attn2.processor.__class__.__name__ in ( "IPAdapterAttnProcessor", "IPAdapterAttnProcessor2_0", ) with torch.no_grad(): sample2 = model(**inputs_dict).sample # forward pass with ip_adapter_2 model._load_ip_adapter_weights([ip_adapter_2]) with torch.no_grad(): sample3 = model(**inputs_dict).sample # forward pass with ip_adapter_1 again model._load_ip_adapter_weights([ip_adapter_1]) with torch.no_grad(): sample4 = model(**inputs_dict).sample # forward pass with multiple ip-adapters and multiple images model._load_ip_adapter_weights([ip_adapter_1, ip_adapter_2]) # set the scale for ip_adapter_2 to 0 so that result should be same as only load ip_adapter_1 for attn_processor in model.attn_processors.values(): if isinstance(attn_processor, (IPAdapterAttnProcessor, IPAdapterAttnProcessor2_0)): attn_processor.scale = [1, 0] image_embeds_multi = image_embeds.repeat(1, 2, 1, 1) inputs_dict["added_cond_kwargs"] = {"image_embeds": [image_embeds_multi, image_embeds_multi]} with torch.no_grad(): sample5 = model(**inputs_dict).sample # forward pass with single ip-adapter & single image when image_embeds is a 3-d tensor image_embeds = image_embeds[:,].squeeze(1) inputs_dict["added_cond_kwargs"] = {"image_embeds": image_embeds} model._load_ip_adapter_weights(ip_adapter_1) with torch.no_grad(): sample6 = model(**inputs_dict).sample assert not sample1.allclose(sample2, atol=1e-4, rtol=1e-4) assert not sample2.allclose(sample3, atol=1e-4, rtol=1e-4) assert sample2.allclose(sample4, atol=1e-4, rtol=1e-4) assert sample2.allclose(sample5, atol=1e-4, rtol=1e-4) assert sample2.allclose(sample6, atol=1e-4, rtol=1e-4) @slow class UNet2DConditionModelIntegrationTests(unittest.TestCase): def get_file_format(self, seed, shape): return f"gaussian_noise_s={seed}_shape={'_'.join([str(s) for s in shape])}.npy" def tearDown(self): # clean up the VRAM after each test super().tearDown() gc.collect() backend_empty_cache(torch_device) def get_latents(self, seed=0, shape=(4, 4, 64, 64), fp16=False): dtype = torch.float16 if fp16 else torch.float32 image = torch.from_numpy(load_hf_numpy(self.get_file_format(seed, shape))).to(torch_device).to(dtype) return image def get_unet_model(self, fp16=False, model_id="CompVis/stable-diffusion-v1-4"): revision = "fp16" if fp16 else None torch_dtype = torch.float16 if fp16 else torch.float32 model = 
UNet2DConditionModel.from_pretrained( model_id, subfolder="unet", torch_dtype=torch_dtype, revision=revision ) model.to(torch_device).eval() return model @require_torch_gpu def test_set_attention_slice_auto(self): torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() torch.cuda.reset_peak_memory_stats() unet = self.get_unet_model() unet.set_attention_slice("auto") latents = self.get_latents(33) encoder_hidden_states = self.get_encoder_hidden_states(33) timestep = 1 with torch.no_grad(): _ = unet(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample mem_bytes = torch.cuda.max_memory_allocated() assert mem_bytes < 5 * 10**9 @require_torch_gpu def test_set_attention_slice_max(self): torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() torch.cuda.reset_peak_memory_stats() unet = self.get_unet_model() unet.set_attention_slice("max") latents = self.get_latents(33) encoder_hidden_states = self.get_encoder_hidden_states(33) timestep = 1 with torch.no_grad(): _ = unet(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample mem_bytes = torch.cuda.max_memory_allocated() assert mem_bytes < 5 * 10**9 @require_torch_gpu def test_set_attention_slice_int(self): torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() torch.cuda.reset_peak_memory_stats() unet = self.get_unet_model() unet.set_attention_slice(2) latents = self.get_latents(33) encoder_hidden_states = self.get_encoder_hidden_states(33) timestep = 1 with torch.no_grad(): _ = unet(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample mem_bytes = torch.cuda.max_memory_allocated() assert mem_bytes < 5 * 10**9 @require_torch_gpu def test_set_attention_slice_list(self): torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() torch.cuda.reset_peak_memory_stats() # there are 32 sliceable layers slice_list = 16 * [2, 3] unet = self.get_unet_model() unet.set_attention_slice(slice_list) latents = self.get_latents(33) encoder_hidden_states = self.get_encoder_hidden_states(33) timestep = 1 with torch.no_grad(): _ = unet(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample mem_bytes = torch.cuda.max_memory_allocated() assert mem_bytes < 5 * 10**9 def get_encoder_hidden_states(self, seed=0, shape=(4, 77, 768), fp16=False): dtype = torch.float16 if fp16 else torch.float32 hidden_states = torch.from_numpy(load_hf_numpy(self.get_file_format(seed, shape))).to(torch_device).to(dtype) return hidden_states @parameterized.expand( [ # fmt: off [33, 4, [-0.4424, 0.1510, -0.1937, 0.2118, 0.3746, -0.3957, 0.0160, -0.0435]], [47, 0.55, [-0.1508, 0.0379, -0.3075, 0.2540, 0.3633, -0.0821, 0.1719, -0.0207]], [21, 0.89, [-0.6479, 0.6364, -0.3464, 0.8697, 0.4443, -0.6289, -0.0091, 0.1778]], [9, 1000, [0.8888, -0.5659, 0.5834, -0.7469, 1.1912, -0.3923, 1.1241, -0.4424]], # fmt: on ] ) @require_torch_accelerator_with_fp16 def test_compvis_sd_v1_4(self, seed, timestep, expected_slice): model = self.get_unet_model(model_id="CompVis/stable-diffusion-v1-4") latents = self.get_latents(seed) encoder_hidden_states = self.get_encoder_hidden_states(seed) timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device) with torch.no_grad(): sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample assert sample.shape == latents.shape output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu() expected_output_slice = torch.tensor(expected_slice) assert torch_all_close(output_slice, 
expected_output_slice, atol=1e-3) @parameterized.expand( [ # fmt: off [83, 4, [-0.2323, -0.1304, 0.0813, -0.3093, -0.0919, -0.1571, -0.1125, -0.5806]], [17, 0.55, [-0.0831, -0.2443, 0.0901, -0.0919, 0.3396, 0.0103, -0.3743, 0.0701]], [8, 0.89, [-0.4863, 0.0859, 0.0875, -0.1658, 0.9199, -0.0114, 0.4839, 0.4639]], [3, 1000, [-0.5649, 0.2402, -0.5518, 0.1248, 1.1328, -0.2443, -0.0325, -1.0078]], # fmt: on ] ) @require_torch_accelerator_with_fp16 def test_compvis_sd_v1_4_fp16(self, seed, timestep, expected_slice): model = self.get_unet_model(model_id="CompVis/stable-diffusion-v1-4", fp16=True) latents = self.get_latents(seed, fp16=True) encoder_hidden_states = self.get_encoder_hidden_states(seed, fp16=True) timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device) with torch.no_grad(): sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample assert sample.shape == latents.shape output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu() expected_output_slice = torch.tensor(expected_slice) assert torch_all_close(output_slice, expected_output_slice, atol=5e-3) @parameterized.expand( [ # fmt: off [33, 4, [-0.4430, 0.1570, -0.1867, 0.2376, 0.3205, -0.3681, 0.0525, -0.0722]], [47, 0.55, [-0.1415, 0.0129, -0.3136, 0.2257, 0.3430, -0.0536, 0.2114, -0.0436]], [21, 0.89, [-0.7091, 0.6664, -0.3643, 0.9032, 0.4499, -0.6541, 0.0139, 0.1750]], [9, 1000, [0.8878, -0.5659, 0.5844, -0.7442, 1.1883, -0.3927, 1.1192, -0.4423]], # fmt: on ] ) @require_torch_accelerator @skip_mps def test_compvis_sd_v1_5(self, seed, timestep, expected_slice): model = self.get_unet_model(model_id="runwayml/stable-diffusion-v1-5") latents = self.get_latents(seed) encoder_hidden_states = self.get_encoder_hidden_states(seed) timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device) with torch.no_grad(): sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample assert sample.shape == latents.shape output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu() expected_output_slice = torch.tensor(expected_slice) assert torch_all_close(output_slice, expected_output_slice, atol=1e-3) @parameterized.expand( [ # fmt: off [83, 4, [-0.2695, -0.1669, 0.0073, -0.3181, -0.1187, -0.1676, -0.1395, -0.5972]], [17, 0.55, [-0.1290, -0.2588, 0.0551, -0.0916, 0.3286, 0.0238, -0.3669, 0.0322]], [8, 0.89, [-0.5283, 0.1198, 0.0870, -0.1141, 0.9189, -0.0150, 0.5474, 0.4319]], [3, 1000, [-0.5601, 0.2411, -0.5435, 0.1268, 1.1338, -0.2427, -0.0280, -1.0020]], # fmt: on ] ) @require_torch_accelerator_with_fp16 def test_compvis_sd_v1_5_fp16(self, seed, timestep, expected_slice): model = self.get_unet_model(model_id="runwayml/stable-diffusion-v1-5", fp16=True) latents = self.get_latents(seed, fp16=True) encoder_hidden_states = self.get_encoder_hidden_states(seed, fp16=True) timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device) with torch.no_grad(): sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample assert sample.shape == latents.shape output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu() expected_output_slice = torch.tensor(expected_slice) assert torch_all_close(output_slice, expected_output_slice, atol=5e-3) @parameterized.expand( [ # fmt: off [33, 4, [-0.7639, 0.0106, -0.1615, -0.3487, -0.0423, -0.7972, 0.0085, -0.4858]], [47, 0.55, [-0.6564, 0.0795, -1.9026, -0.6258, 1.8235, 1.2056, 1.2169, 0.9073]], [21, 0.89, [0.0327, 0.4399, -0.6358, 0.3417, 0.4120, -0.5621, -0.0397, 
-1.0430]], [9, 1000, [0.1600, 0.7303, -1.0556, -0.3515, -0.7440, -1.2037, -1.8149, -1.8931]], # fmt: on ] ) @require_torch_accelerator @skip_mps def test_compvis_sd_inpaint(self, seed, timestep, expected_slice): model = self.get_unet_model(model_id="runwayml/stable-diffusion-inpainting") latents = self.get_latents(seed, shape=(4, 9, 64, 64)) encoder_hidden_states = self.get_encoder_hidden_states(seed) timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device) with torch.no_grad(): sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample assert sample.shape == (4, 4, 64, 64) output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu() expected_output_slice = torch.tensor(expected_slice) assert torch_all_close(output_slice, expected_output_slice, atol=3e-3) @parameterized.expand( [ # fmt: off [83, 4, [-0.1047, -1.7227, 0.1067, 0.0164, -0.5698, -0.4172, -0.1388, 1.1387]], [17, 0.55, [0.0975, -0.2856, -0.3508, -0.4600, 0.3376, 0.2930, -0.2747, -0.7026]], [8, 0.89, [-0.0952, 0.0183, -0.5825, -0.1981, 0.1131, 0.4668, -0.0395, -0.3486]], [3, 1000, [0.4790, 0.4949, -1.0732, -0.7158, 0.7959, -0.9478, 0.1105, -0.9741]], # fmt: on ] ) @require_torch_accelerator_with_fp16 def test_compvis_sd_inpaint_fp16(self, seed, timestep, expected_slice): model = self.get_unet_model(model_id="runwayml/stable-diffusion-inpainting", fp16=True) latents = self.get_latents(seed, shape=(4, 9, 64, 64), fp16=True) encoder_hidden_states = self.get_encoder_hidden_states(seed, fp16=True) timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device) with torch.no_grad(): sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample assert sample.shape == (4, 4, 64, 64) output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu() expected_output_slice = torch.tensor(expected_slice) assert torch_all_close(output_slice, expected_output_slice, atol=5e-3) @parameterized.expand( [ # fmt: off [83, 4, [0.1514, 0.0807, 0.1624, 0.1016, -0.1896, 0.0263, 0.0677, 0.2310]], [17, 0.55, [0.1164, -0.0216, 0.0170, 0.1589, -0.3120, 0.1005, -0.0581, -0.1458]], [8, 0.89, [-0.1758, -0.0169, 0.1004, -0.1411, 0.1312, 0.1103, -0.1996, 0.2139]], [3, 1000, [0.1214, 0.0352, -0.0731, -0.1562, -0.0994, -0.0906, -0.2340, -0.0539]], # fmt: on ] ) @require_torch_accelerator_with_fp16 def test_stabilityai_sd_v2_fp16(self, seed, timestep, expected_slice): model = self.get_unet_model(model_id="stabilityai/stable-diffusion-2", fp16=True) latents = self.get_latents(seed, shape=(4, 4, 96, 96), fp16=True) encoder_hidden_states = self.get_encoder_hidden_states(seed, shape=(4, 77, 1024), fp16=True) timestep = torch.tensor([timestep], dtype=torch.long, device=torch_device) with torch.no_grad(): sample = model(latents, timestep=timestep, encoder_hidden_states=encoder_hidden_states).sample assert sample.shape == latents.shape output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu() expected_output_slice = torch.tensor(expected_slice) assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
Source file: diffusers/tests/models/unets/test_models_unet_2d_condition.py
{ "file_path": "diffusers/tests/models/unets/test_models_unet_2d_condition.py", "repo_id": "diffusers", "token_count": 24516 }
244
import pickle as pkl
import unittest
from dataclasses import dataclass
from typing import List, Union

import numpy as np
import PIL.Image

from diffusers.utils.outputs import BaseOutput
from diffusers.utils.testing_utils import require_torch


@dataclass
class CustomOutput(BaseOutput):
    images: Union[List[PIL.Image.Image], np.ndarray]


class ConfigTester(unittest.TestCase):
    def test_outputs_single_attribute(self):
        outputs = CustomOutput(images=np.random.rand(1, 3, 4, 4))

        # check every way of getting the attribute
        assert isinstance(outputs.images, np.ndarray)
        assert outputs.images.shape == (1, 3, 4, 4)
        assert isinstance(outputs["images"], np.ndarray)
        assert outputs["images"].shape == (1, 3, 4, 4)
        assert isinstance(outputs[0], np.ndarray)
        assert outputs[0].shape == (1, 3, 4, 4)

        # test with a non-tensor attribute
        outputs = CustomOutput(images=[PIL.Image.new("RGB", (4, 4))])

        # check every way of getting the attribute
        assert isinstance(outputs.images, list)
        assert isinstance(outputs.images[0], PIL.Image.Image)
        assert isinstance(outputs["images"], list)
        assert isinstance(outputs["images"][0], PIL.Image.Image)
        assert isinstance(outputs[0], list)
        assert isinstance(outputs[0][0], PIL.Image.Image)

    def test_outputs_dict_init(self):
        # test output reinitialization with a `dict` for compatibility with `accelerate`
        outputs = CustomOutput({"images": np.random.rand(1, 3, 4, 4)})

        # check every way of getting the attribute
        assert isinstance(outputs.images, np.ndarray)
        assert outputs.images.shape == (1, 3, 4, 4)
        assert isinstance(outputs["images"], np.ndarray)
        assert outputs["images"].shape == (1, 3, 4, 4)
        assert isinstance(outputs[0], np.ndarray)
        assert outputs[0].shape == (1, 3, 4, 4)

        # test with a non-tensor attribute
        outputs = CustomOutput({"images": [PIL.Image.new("RGB", (4, 4))]})

        # check every way of getting the attribute
        assert isinstance(outputs.images, list)
        assert isinstance(outputs.images[0], PIL.Image.Image)
        assert isinstance(outputs["images"], list)
        assert isinstance(outputs["images"][0], PIL.Image.Image)
        assert isinstance(outputs[0], list)
        assert isinstance(outputs[0][0], PIL.Image.Image)

    def test_outputs_serialization(self):
        outputs_orig = CustomOutput(images=[PIL.Image.new("RGB", (4, 4))])
        serialized = pkl.dumps(outputs_orig)
        outputs_copy = pkl.loads(serialized)

        # Check original and copy are equal
        assert dir(outputs_orig) == dir(outputs_copy)
        assert dict(outputs_orig) == dict(outputs_copy)
        assert vars(outputs_orig) == vars(outputs_copy)

    @require_torch
    def test_torch_pytree(self):
        # ensure torch.utils._pytree treats ModelOutput subclasses as nodes (and not leaves)
        # this is important for DistributedDataParallel gradient synchronization with static_graph=True
        import torch
        import torch.utils._pytree

        data = np.random.rand(1, 3, 4, 4)
        x = CustomOutput(images=data)
        self.assertFalse(torch.utils._pytree._is_leaf(x))

        expected_flat_outs = [data]
        expected_tree_spec = torch.utils._pytree.TreeSpec(CustomOutput, ["images"], [torch.utils._pytree.LeafSpec()])

        actual_flat_outs, actual_tree_spec = torch.utils._pytree.tree_flatten(x)
        self.assertEqual(expected_flat_outs, actual_flat_outs)
        self.assertEqual(expected_tree_spec, actual_tree_spec)

        unflattened_x = torch.utils._pytree.tree_unflatten(actual_flat_outs, actual_tree_spec)
        self.assertEqual(x, unflattened_x)
diffusers/tests/others/test_outputs.py/0
{ "file_path": "diffusers/tests/others/test_outputs.py", "repo_id": "diffusers", "token_count": 1506 }
245
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import unittest import numpy as np import torch from transformers import ( ClapAudioConfig, ClapConfig, ClapFeatureExtractor, ClapModel, ClapTextConfig, GPT2Config, GPT2Model, RobertaTokenizer, SpeechT5HifiGan, SpeechT5HifiGanConfig, T5Config, T5EncoderModel, T5Tokenizer, ) from diffusers import ( AudioLDM2Pipeline, AudioLDM2ProjectionModel, AudioLDM2UNet2DConditionModel, AutoencoderKL, DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler, ) from diffusers.utils.testing_utils import enable_full_determinism, nightly, torch_device from ..pipeline_params import TEXT_TO_AUDIO_BATCH_PARAMS, TEXT_TO_AUDIO_PARAMS from ..test_pipelines_common import PipelineTesterMixin enable_full_determinism() class AudioLDM2PipelineFastTests(PipelineTesterMixin, unittest.TestCase): pipeline_class = AudioLDM2Pipeline params = TEXT_TO_AUDIO_PARAMS batch_params = TEXT_TO_AUDIO_BATCH_PARAMS required_optional_params = frozenset( [ "num_inference_steps", "num_waveforms_per_prompt", "generator", "latents", "output_type", "return_dict", "callback", "callback_steps", ] ) def get_dummy_components(self): torch.manual_seed(0) unet = AudioLDM2UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), cross_attention_dim=([None, 16, 32], [None, 16, 32]), ) scheduler = DDIMScheduler( beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False, ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=1, out_channels=1, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, ) torch.manual_seed(0) text_branch_config = ClapTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=16, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=2, num_hidden_layers=2, pad_token_id=1, vocab_size=1000, projection_dim=16, ) audio_branch_config = ClapAudioConfig( spec_size=64, window_size=4, num_mel_bins=64, intermediate_size=37, layer_norm_eps=1e-05, depths=[2, 2], num_attention_heads=[2, 2], num_hidden_layers=2, hidden_size=192, projection_dim=16, patch_size=2, patch_stride=2, patch_embed_input_channels=4, ) text_encoder_config = ClapConfig.from_text_audio_configs( text_config=text_branch_config, audio_config=audio_branch_config, projection_dim=16 ) text_encoder = ClapModel(text_encoder_config) tokenizer = RobertaTokenizer.from_pretrained("hf-internal-testing/tiny-random-roberta", model_max_length=77) feature_extractor = ClapFeatureExtractor.from_pretrained( "hf-internal-testing/tiny-random-ClapModel", hop_length=7900 ) torch.manual_seed(0) text_encoder_2_config = T5Config( vocab_size=32100, d_model=32, d_ff=37, d_kv=8, num_heads=2, num_layers=2, ) text_encoder_2 = T5EncoderModel(text_encoder_2_config) tokenizer_2 = 
T5Tokenizer.from_pretrained("hf-internal-testing/tiny-random-T5Model", model_max_length=77) torch.manual_seed(0) language_model_config = GPT2Config( n_embd=16, n_head=2, n_layer=2, vocab_size=1000, n_ctx=99, n_positions=99, ) language_model = GPT2Model(language_model_config) language_model.config.max_new_tokens = 8 torch.manual_seed(0) projection_model = AudioLDM2ProjectionModel(text_encoder_dim=16, text_encoder_1_dim=32, langauge_model_dim=16) vocoder_config = SpeechT5HifiGanConfig( model_in_dim=8, sampling_rate=16000, upsample_initial_channel=16, upsample_rates=[2, 2], upsample_kernel_sizes=[4, 4], resblock_kernel_sizes=[3, 7], resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5]], normalize_before=False, ) vocoder = SpeechT5HifiGan(vocoder_config) components = { "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "text_encoder_2": text_encoder_2, "tokenizer": tokenizer, "tokenizer_2": tokenizer_2, "feature_extractor": feature_extractor, "language_model": language_model, "projection_model": projection_model, "vocoder": vocoder, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "A hammer hitting a wooden surface", "generator": generator, "num_inference_steps": 2, "guidance_scale": 6.0, } return inputs def test_audioldm2_ddim(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() audioldm_pipe = AudioLDM2Pipeline(**components) audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) output = audioldm_pipe(**inputs) audio = output.audios[0] assert audio.ndim == 1 assert len(audio) == 256 audio_slice = audio[:10] expected_slice = np.array( [0.0025, 0.0018, 0.0018, -0.0023, -0.0026, -0.0020, -0.0026, -0.0021, -0.0027, -0.0020] ) assert np.abs(audio_slice - expected_slice).max() < 1e-4 def test_audioldm2_prompt_embeds(self): components = self.get_dummy_components() audioldm_pipe = AudioLDM2Pipeline(**components) audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) inputs["prompt"] = 3 * [inputs["prompt"]] # forward output = audioldm_pipe(**inputs) audio_1 = output.audios[0] inputs = self.get_dummy_inputs(torch_device) prompt = 3 * [inputs.pop("prompt")] text_inputs = audioldm_pipe.tokenizer( prompt, padding="max_length", max_length=audioldm_pipe.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_inputs = text_inputs["input_ids"].to(torch_device) clap_prompt_embeds = audioldm_pipe.text_encoder.get_text_features(text_inputs) clap_prompt_embeds = clap_prompt_embeds[:, None, :] text_inputs = audioldm_pipe.tokenizer_2( prompt, padding="max_length", max_length=True, truncation=True, return_tensors="pt", ) text_inputs = text_inputs["input_ids"].to(torch_device) t5_prompt_embeds = audioldm_pipe.text_encoder_2( text_inputs, ) t5_prompt_embeds = t5_prompt_embeds[0] projection_embeds = audioldm_pipe.projection_model(clap_prompt_embeds, t5_prompt_embeds)[0] generated_prompt_embeds = audioldm_pipe.generate_language_model(projection_embeds, max_new_tokens=8) inputs["prompt_embeds"] = t5_prompt_embeds inputs["generated_prompt_embeds"] = generated_prompt_embeds # forward output = audioldm_pipe(**inputs) audio_2 = 
output.audios[0] assert np.abs(audio_1 - audio_2).max() < 1e-2 def test_audioldm2_negative_prompt_embeds(self): components = self.get_dummy_components() audioldm_pipe = AudioLDM2Pipeline(**components) audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) negative_prompt = 3 * ["this is a negative prompt"] inputs["negative_prompt"] = negative_prompt inputs["prompt"] = 3 * [inputs["prompt"]] # forward output = audioldm_pipe(**inputs) audio_1 = output.audios[0] inputs = self.get_dummy_inputs(torch_device) prompt = 3 * [inputs.pop("prompt")] embeds = [] generated_embeds = [] for p in [prompt, negative_prompt]: text_inputs = audioldm_pipe.tokenizer( p, padding="max_length", max_length=audioldm_pipe.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_inputs = text_inputs["input_ids"].to(torch_device) clap_prompt_embeds = audioldm_pipe.text_encoder.get_text_features(text_inputs) clap_prompt_embeds = clap_prompt_embeds[:, None, :] text_inputs = audioldm_pipe.tokenizer_2( prompt, padding="max_length", max_length=True if len(embeds) == 0 else embeds[0].shape[1], truncation=True, return_tensors="pt", ) text_inputs = text_inputs["input_ids"].to(torch_device) t5_prompt_embeds = audioldm_pipe.text_encoder_2( text_inputs, ) t5_prompt_embeds = t5_prompt_embeds[0] projection_embeds = audioldm_pipe.projection_model(clap_prompt_embeds, t5_prompt_embeds)[0] generated_prompt_embeds = audioldm_pipe.generate_language_model(projection_embeds, max_new_tokens=8) embeds.append(t5_prompt_embeds) generated_embeds.append(generated_prompt_embeds) inputs["prompt_embeds"], inputs["negative_prompt_embeds"] = embeds inputs["generated_prompt_embeds"], inputs["negative_generated_prompt_embeds"] = generated_embeds # forward output = audioldm_pipe(**inputs) audio_2 = output.audios[0] assert np.abs(audio_1 - audio_2).max() < 1e-2 def test_audioldm2_negative_prompt(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() components["scheduler"] = PNDMScheduler(skip_prk_steps=True) audioldm_pipe = AudioLDM2Pipeline(**components) audioldm_pipe = audioldm_pipe.to(device) audioldm_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) negative_prompt = "egg cracking" output = audioldm_pipe(**inputs, negative_prompt=negative_prompt) audio = output.audios[0] assert audio.ndim == 1 assert len(audio) == 256 audio_slice = audio[:10] expected_slice = np.array( [0.0025, 0.0018, 0.0018, -0.0023, -0.0026, -0.0020, -0.0026, -0.0021, -0.0027, -0.0020] ) assert np.abs(audio_slice - expected_slice).max() < 1e-4 def test_audioldm2_num_waveforms_per_prompt(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() components["scheduler"] = PNDMScheduler(skip_prk_steps=True) audioldm_pipe = AudioLDM2Pipeline(**components) audioldm_pipe = audioldm_pipe.to(device) audioldm_pipe.set_progress_bar_config(disable=None) prompt = "A hammer hitting a wooden surface" # test num_waveforms_per_prompt=1 (default) audios = audioldm_pipe(prompt, num_inference_steps=2).audios assert audios.shape == (1, 256) # test num_waveforms_per_prompt=1 (default) for batch of prompts batch_size = 2 audios = audioldm_pipe([prompt] * batch_size, num_inference_steps=2).audios assert audios.shape == (batch_size, 256) # test num_waveforms_per_prompt for single prompt 
num_waveforms_per_prompt = 2 audios = audioldm_pipe(prompt, num_inference_steps=2, num_waveforms_per_prompt=num_waveforms_per_prompt).audios assert audios.shape == (num_waveforms_per_prompt, 256) # test num_waveforms_per_prompt for batch of prompts batch_size = 2 audios = audioldm_pipe( [prompt] * batch_size, num_inference_steps=2, num_waveforms_per_prompt=num_waveforms_per_prompt ).audios assert audios.shape == (batch_size * num_waveforms_per_prompt, 256) def test_audioldm2_audio_length_in_s(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() audioldm_pipe = AudioLDM2Pipeline(**components) audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe.set_progress_bar_config(disable=None) vocoder_sampling_rate = audioldm_pipe.vocoder.config.sampling_rate inputs = self.get_dummy_inputs(device) output = audioldm_pipe(audio_length_in_s=0.016, **inputs) audio = output.audios[0] assert audio.ndim == 1 assert len(audio) / vocoder_sampling_rate == 0.016 output = audioldm_pipe(audio_length_in_s=0.032, **inputs) audio = output.audios[0] assert audio.ndim == 1 assert len(audio) / vocoder_sampling_rate == 0.032 def test_audioldm2_vocoder_model_in_dim(self): components = self.get_dummy_components() audioldm_pipe = AudioLDM2Pipeline(**components) audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe.set_progress_bar_config(disable=None) prompt = ["hey"] output = audioldm_pipe(prompt, num_inference_steps=1) audio_shape = output.audios.shape assert audio_shape == (1, 256) config = audioldm_pipe.vocoder.config config.model_in_dim *= 2 audioldm_pipe.vocoder = SpeechT5HifiGan(config).to(torch_device) output = audioldm_pipe(prompt, num_inference_steps=1) audio_shape = output.audios.shape # waveform shape is unchanged, we just have 2x the number of mel channels in the spectrogram assert audio_shape == (1, 256) def test_attention_slicing_forward_pass(self): self._test_attention_slicing_forward_pass(test_mean_pixel_difference=False) @unittest.skip("Raises a not implemented error in AudioLDM2") def test_xformers_attention_forwardGenerator_pass(self): pass def test_dict_tuple_outputs_equivalent(self): # increase tolerance from 1e-4 -> 2e-4 to account for large composite model super().test_dict_tuple_outputs_equivalent(expected_max_difference=2e-4) def test_inference_batch_single_identical(self): # increase tolerance from 1e-4 -> 2e-4 to account for large composite model self._test_inference_batch_single_identical(expected_max_diff=2e-4) def test_save_load_local(self): # increase tolerance from 1e-4 -> 2e-4 to account for large composite model super().test_save_load_local(expected_max_difference=2e-4) def test_save_load_optional_components(self): # increase tolerance from 1e-4 -> 2e-4 to account for large composite model super().test_save_load_optional_components(expected_max_difference=2e-4) def test_to_dtype(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.set_progress_bar_config(disable=None) # The method component.dtype returns the dtype of the first parameter registered in the model, not the # dtype of the entire model. 
In the case of CLAP, the first parameter is a float64 constant (logit scale) model_dtypes = {key: component.dtype for key, component in components.items() if hasattr(component, "dtype")} # Without the logit scale parameters, everything is float32 model_dtypes.pop("text_encoder") self.assertTrue(all(dtype == torch.float32 for dtype in model_dtypes.values())) # the CLAP sub-models are float32 model_dtypes["clap_text_branch"] = components["text_encoder"].text_model.dtype self.assertTrue(all(dtype == torch.float32 for dtype in model_dtypes.values())) # Once we send to fp16, all params are in half-precision, including the logit scale pipe.to(dtype=torch.float16) model_dtypes = {key: component.dtype for key, component in components.items() if hasattr(component, "dtype")} self.assertTrue(all(dtype == torch.float16 for dtype in model_dtypes.values())) def test_sequential_cpu_offload_forward_pass(self): pass @nightly class AudioLDM2PipelineSlowTests(unittest.TestCase): def setUp(self): super().setUp() gc.collect() torch.cuda.empty_cache() def tearDown(self): super().tearDown() gc.collect() torch.cuda.empty_cache() def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0): generator = torch.Generator(device=generator_device).manual_seed(seed) latents = np.random.RandomState(seed).standard_normal((1, 8, 128, 16)) latents = torch.from_numpy(latents).to(device=device, dtype=dtype) inputs = { "prompt": "A hammer hitting a wooden surface", "latents": latents, "generator": generator, "num_inference_steps": 3, "guidance_scale": 2.5, } return inputs def get_inputs_tts(self, device, generator_device="cpu", dtype=torch.float32, seed=0): generator = torch.Generator(device=generator_device).manual_seed(seed) latents = np.random.RandomState(seed).standard_normal((1, 8, 128, 16)) latents = torch.from_numpy(latents).to(device=device, dtype=dtype) inputs = { "prompt": "A men saying", "transcription": "hello my name is John", "latents": latents, "generator": generator, "num_inference_steps": 3, "guidance_scale": 2.5, } return inputs def test_audioldm2(self): audioldm_pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2") audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs(torch_device) inputs["num_inference_steps"] = 25 audio = audioldm_pipe(**inputs).audios[0] assert audio.ndim == 1 assert len(audio) == 81952 # check the portion of the generated audio with the largest dynamic range (reduces flakiness) audio_slice = audio[17275:17285] expected_slice = np.array([0.0791, 0.0666, 0.1158, 0.1227, 0.1171, -0.2880, -0.1940, -0.0283, -0.0126, 0.1127]) max_diff = np.abs(expected_slice - audio_slice).max() assert max_diff < 1e-3 def test_audioldm2_lms(self): audioldm_pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2") audioldm_pipe.scheduler = LMSDiscreteScheduler.from_config(audioldm_pipe.scheduler.config) audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs(torch_device) audio = audioldm_pipe(**inputs).audios[0] assert audio.ndim == 1 assert len(audio) == 81952 # check the portion of the generated audio with the largest dynamic range (reduces flakiness) audio_slice = audio[31390:31400] expected_slice = np.array( [-0.1318, -0.0577, 0.0446, -0.0573, 0.0659, 0.1074, -0.2600, 0.0080, -0.2190, -0.4301] ) max_diff = np.abs(expected_slice - audio_slice).max() assert max_diff < 1e-3 def test_audioldm2_large(self): audioldm_pipe = 
AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2-large") audioldm_pipe = audioldm_pipe.to(torch_device) audioldm_pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs(torch_device) audio = audioldm_pipe(**inputs).audios[0] assert audio.ndim == 1 assert len(audio) == 81952 # check the portion of the generated audio with the largest dynamic range (reduces flakiness) audio_slice = audio[8825:8835] expected_slice = np.array( [-0.1829, -0.1461, 0.0759, -0.1493, -0.1396, 0.5783, 0.3001, -0.3038, -0.0639, -0.2244] ) max_diff = np.abs(expected_slice - audio_slice).max() assert max_diff < 1e-3 def test_audioldm2_tts(self): audioldm_tts_pipe = AudioLDM2Pipeline.from_pretrained("anhnct/audioldm2_gigaspeech") audioldm_tts_pipe = audioldm_tts_pipe.to(torch_device) audioldm_tts_pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs_tts(torch_device) audio = audioldm_tts_pipe(**inputs).audios[0] assert audio.ndim == 1 assert len(audio) == 81952 # check the portion of the generated audio with the largest dynamic range (reduces flakiness) audio_slice = audio[8825:8835] expected_slice = np.array( [-0.1829, -0.1461, 0.0759, -0.1493, -0.1396, 0.5783, 0.3001, -0.3038, -0.0639, -0.2244] ) max_diff = np.abs(expected_slice - audio_slice).max() assert max_diff < 1e-3
diffusers/tests/pipelines/audioldm2/test_audioldm2.py/0
{ "file_path": "diffusers/tests/pipelines/audioldm2/test_audioldm2.py", "repo_id": "diffusers", "token_count": 10821 }
246
# coding=utf-8 # Copyright 2023 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import unittest import numpy as np import torch from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer from diffusers import ( AsymmetricAutoencoderKL, AutoencoderKL, AutoencoderTiny, ConsistencyDecoderVAE, ControlNetXSAdapter, EulerDiscreteScheduler, StableDiffusionXLControlNetXSPipeline, UNet2DConditionModel, ) from diffusers.utils.import_utils import is_xformers_available from diffusers.utils.testing_utils import enable_full_determinism, load_image, require_torch_gpu, slow, torch_device from diffusers.utils.torch_utils import randn_tensor from ...models.autoencoders.test_models_vae import ( get_asym_autoencoder_kl_config, get_autoencoder_kl_config, get_autoencoder_tiny_config, get_consistency_vae_config, ) from ..pipeline_params import ( IMAGE_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS, ) from ..test_pipelines_common import ( PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin, SDXLOptionalComponentsTesterMixin, ) enable_full_determinism() class StableDiffusionXLControlNetXSPipelineFastTests( PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, SDXLOptionalComponentsTesterMixin, unittest.TestCase, ): pipeline_class = StableDiffusionXLControlNetXSPipeline params = TEXT_TO_IMAGE_PARAMS batch_params = TEXT_TO_IMAGE_BATCH_PARAMS image_params = IMAGE_TO_IMAGE_IMAGE_PARAMS image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS test_attention_slicing = False def get_dummy_components(self): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(4, 8), layers_per_block=2, sample_size=16, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), use_linear_projection=True, norm_num_groups=4, # SD2-specific config below attention_head_dim=(2, 4), addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=56, # 6 * 8 (addition_time_embed_dim) + 8 (cross_attention_dim) cross_attention_dim=8, ) torch.manual_seed(0) controlnet = ControlNetXSAdapter.from_unet( unet=unet, size_ratio=0.5, learn_time_embedding=True, conditioning_embedding_out_channels=(2, 2), ) torch.manual_seed(0) scheduler = EulerDiscreteScheduler( beta_start=0.00085, beta_end=0.012, steps_offset=1, beta_schedule="scaled_linear", timestep_spacing="leading", ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[4, 8], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, norm_num_groups=2, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=4, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, 
vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=8, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") components = { "unet": unet, "controlnet": controlnet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, "feature_extractor": None, } return components # copied from test_controlnet_sdxl.py def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) controlnet_embedder_scale_factor = 2 image = randn_tensor( (1, 3, 8 * controlnet_embedder_scale_factor, 8 * controlnet_embedder_scale_factor), generator=generator, device=torch.device(device), ) inputs = { "prompt": "A painting of a squirrel eating a burger", "generator": generator, "num_inference_steps": 2, "guidance_scale": 6.0, "output_type": "np", "image": image, } return inputs # copied from test_controlnet_sdxl.py def test_attention_slicing_forward_pass(self): return self._test_attention_slicing_forward_pass(expected_max_diff=2e-3) # copied from test_controlnet_sdxl.py @unittest.skipIf( torch_device != "cuda" or not is_xformers_available(), reason="XFormers attention is only available with CUDA and `xformers` installed", ) def test_xformers_attention_forwardGenerator_pass(self): self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=2e-3) # copied from test_controlnet_sdxl.py def test_inference_batch_single_identical(self): self._test_inference_batch_single_identical(expected_max_diff=2e-3) # copied from test_controlnet_sdxl.py @require_torch_gpu def test_stable_diffusion_xl_offloads(self): pipes = [] components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components).to(torch_device) pipes.append(sd_pipe) components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe.enable_model_cpu_offload() pipes.append(sd_pipe) components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe.enable_sequential_cpu_offload() pipes.append(sd_pipe) image_slices = [] for pipe in pipes: pipe.unet.set_default_attn_processor() inputs = self.get_dummy_inputs(torch_device) image = pipe(**inputs).images image_slices.append(image[0, -3:, -3:, -1].flatten()) assert np.abs(image_slices[0] - image_slices[1]).max() < 1e-3 assert np.abs(image_slices[0] - image_slices[2]).max() < 1e-3 # copied from test_controlnet_sdxl.py def test_stable_diffusion_xl_multi_prompts(self): components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components).to(torch_device) # forward with single prompt inputs = self.get_dummy_inputs(torch_device) output = sd_pipe(**inputs) image_slice_1 = output.images[0, -3:, -3:, -1] # forward with same prompt duplicated inputs = self.get_dummy_inputs(torch_device) inputs["prompt_2"] = inputs["prompt"] output = sd_pipe(**inputs) image_slice_2 = output.images[0, -3:, -3:, -1] # ensure the results are equal assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 # forward with different prompt inputs = self.get_dummy_inputs(torch_device) inputs["prompt_2"] = "different prompt" output = sd_pipe(**inputs) image_slice_3 = output.images[0, -3:, -3:, -1] 
# ensure the results are not equal assert np.abs(image_slice_1.flatten() - image_slice_3.flatten()).max() > 1e-4 # manually set a negative_prompt inputs = self.get_dummy_inputs(torch_device) inputs["negative_prompt"] = "negative prompt" output = sd_pipe(**inputs) image_slice_1 = output.images[0, -3:, -3:, -1] # forward with same negative_prompt duplicated inputs = self.get_dummy_inputs(torch_device) inputs["negative_prompt"] = "negative prompt" inputs["negative_prompt_2"] = inputs["negative_prompt"] output = sd_pipe(**inputs) image_slice_2 = output.images[0, -3:, -3:, -1] # ensure the results are equal assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 # forward with different negative_prompt inputs = self.get_dummy_inputs(torch_device) inputs["negative_prompt"] = "negative prompt" inputs["negative_prompt_2"] = "different negative prompt" output = sd_pipe(**inputs) image_slice_3 = output.images[0, -3:, -3:, -1] # ensure the results are not equal assert np.abs(image_slice_1.flatten() - image_slice_3.flatten()).max() > 1e-4 # copied from test_stable_diffusion_xl.py def test_stable_diffusion_xl_prompt_embeds(self): components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe = sd_pipe.to(torch_device) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) # forward without prompt embeds inputs = self.get_dummy_inputs(torch_device) inputs["prompt"] = 2 * [inputs["prompt"]] inputs["num_images_per_prompt"] = 2 output = sd_pipe(**inputs) image_slice_1 = output.images[0, -3:, -3:, -1] # forward with prompt embeds inputs = self.get_dummy_inputs(torch_device) prompt = 2 * [inputs.pop("prompt")] ( prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, ) = sd_pipe.encode_prompt(prompt) output = sd_pipe( **inputs, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, pooled_prompt_embeds=pooled_prompt_embeds, negative_pooled_prompt_embeds=negative_pooled_prompt_embeds, ) image_slice_2 = output.images[0, -3:, -3:, -1] # make sure that it's equal assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1.1e-4 # copied from test_stable_diffusion_xl.py def test_save_load_optional_components(self): self._test_save_load_optional_components() # copied from test_controlnetxs.py def test_to_dtype(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.set_progress_bar_config(disable=None) # pipeline creates a new UNetControlNetXSModel under the hood. 
So we need to check the dtype from pipe.components model_dtypes = [component.dtype for component in pipe.components.values() if hasattr(component, "dtype")] self.assertTrue(all(dtype == torch.float32 for dtype in model_dtypes)) pipe.to(dtype=torch.float16) model_dtypes = [component.dtype for component in pipe.components.values() if hasattr(component, "dtype")] self.assertTrue(all(dtype == torch.float16 for dtype in model_dtypes)) def test_multi_vae(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) block_out_channels = pipe.vae.config.block_out_channels norm_num_groups = pipe.vae.config.norm_num_groups vae_classes = [AutoencoderKL, AsymmetricAutoencoderKL, ConsistencyDecoderVAE, AutoencoderTiny] configs = [ get_autoencoder_kl_config(block_out_channels, norm_num_groups), get_asym_autoencoder_kl_config(block_out_channels, norm_num_groups), get_consistency_vae_config(block_out_channels, norm_num_groups), get_autoencoder_tiny_config(block_out_channels), ] out_np = pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="np"))[0] for vae_cls, config in zip(vae_classes, configs): vae = vae_cls(**config) vae = vae.to(torch_device) components["vae"] = vae vae_pipe = self.pipeline_class(**components) # pipeline creates a new UNetControlNetXSModel under the hood, which aren't on device. # So we need to move the new pipe to device. vae_pipe.to(torch_device) vae_pipe.set_progress_bar_config(disable=None) out_vae_np = vae_pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="np"))[0] assert out_vae_np.shape == out_np.shape @slow @require_torch_gpu class StableDiffusionXLControlNetXSPipelineSlowTests(unittest.TestCase): def tearDown(self): super().tearDown() gc.collect() torch.cuda.empty_cache() def test_canny(self): controlnet = ControlNetXSAdapter.from_pretrained( "UmerHA/Testing-ConrolNetXS-SDXL-canny", torch_dtype=torch.float16 ) pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 ) pipe.enable_sequential_cpu_offload() pipe.set_progress_bar_config(disable=None) generator = torch.Generator(device="cpu").manual_seed(0) prompt = "bird" image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png" ) images = pipe(prompt, image=image, generator=generator, output_type="np", num_inference_steps=3).images assert images[0].shape == (768, 512, 3) original_image = images[0, -3:, -3:, -1].flatten() expected_image = np.array([0.3202, 0.3151, 0.3328, 0.3172, 0.337, 0.3381, 0.3378, 0.3389, 0.3224]) assert np.allclose(original_image, expected_image, atol=1e-04) def test_depth(self): controlnet = ControlNetXSAdapter.from_pretrained( "UmerHA/Testing-ConrolNetXS-SDXL-depth", torch_dtype=torch.float16 ) pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 ) pipe.enable_sequential_cpu_offload() pipe.set_progress_bar_config(disable=None) generator = torch.Generator(device="cpu").manual_seed(0) prompt = "Stormtrooper's lecture" image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/stormtrooper_depth.png" ) images = pipe(prompt, image=image, generator=generator, output_type="np", num_inference_steps=3).images assert images[0].shape == (512, 512, 3) 
original_image = images[0, -3:, -3:, -1].flatten() expected_image = np.array([0.5448, 0.5437, 0.5426, 0.5543, 0.553, 0.5475, 0.5595, 0.5602, 0.5529]) assert np.allclose(original_image, expected_image, atol=1e-04)
diffusers/tests/pipelines/controlnet_xs/test_controlnetxs_sdxl.py/0
{ "file_path": "diffusers/tests/pipelines/controlnet_xs/test_controlnetxs_sdxl.py", "repo_id": "diffusers", "token_count": 7417 }
247
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import unittest import numpy as np import torch from torch import nn from transformers import ( CLIPImageProcessor, CLIPTextConfig, CLIPTextModelWithProjection, CLIPTokenizer, CLIPVisionConfig, CLIPVisionModelWithProjection, ) from diffusers import KandinskyV22PriorPipeline, PriorTransformer, UnCLIPScheduler from diffusers.utils.testing_utils import enable_full_determinism, skip_mps, torch_device from ..test_pipelines_common import PipelineTesterMixin enable_full_determinism() class Dummies: @property def text_embedder_hidden_size(self): return 32 @property def time_input_dim(self): return 32 @property def block_out_channels_0(self): return self.time_input_dim @property def time_embed_dim(self): return self.time_input_dim * 4 @property def cross_attention_dim(self): return 100 @property def dummy_tokenizer(self): tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") return tokenizer @property def dummy_text_encoder(self): torch.manual_seed(0) config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=self.text_embedder_hidden_size, projection_dim=self.text_embedder_hidden_size, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, ) return CLIPTextModelWithProjection(config) @property def dummy_prior(self): torch.manual_seed(0) model_kwargs = { "num_attention_heads": 2, "attention_head_dim": 12, "embedding_dim": self.text_embedder_hidden_size, "num_layers": 1, } model = PriorTransformer(**model_kwargs) # clip_std and clip_mean is initialized to be 0 so PriorTransformer.post_process_latents will always return 0 - set clip_std to be 1 so it won't return 0 model.clip_std = nn.Parameter(torch.ones(model.clip_std.shape)) return model @property def dummy_image_encoder(self): torch.manual_seed(0) config = CLIPVisionConfig( hidden_size=self.text_embedder_hidden_size, image_size=224, projection_dim=self.text_embedder_hidden_size, intermediate_size=37, num_attention_heads=4, num_channels=3, num_hidden_layers=5, patch_size=14, ) model = CLIPVisionModelWithProjection(config) return model @property def dummy_image_processor(self): image_processor = CLIPImageProcessor( crop_size=224, do_center_crop=True, do_normalize=True, do_resize=True, image_mean=[0.48145466, 0.4578275, 0.40821073], image_std=[0.26862954, 0.26130258, 0.27577711], resample=3, size=224, ) return image_processor def get_dummy_components(self): prior = self.dummy_prior image_encoder = self.dummy_image_encoder text_encoder = self.dummy_text_encoder tokenizer = self.dummy_tokenizer image_processor = self.dummy_image_processor scheduler = UnCLIPScheduler( variance_type="fixed_small_log", prediction_type="sample", num_train_timesteps=1000, clip_sample=True, clip_sample_range=10.0, ) components = { "prior": prior, "image_encoder": image_encoder, "text_encoder": text_encoder, "tokenizer": tokenizer, "scheduler": scheduler, "image_processor": 
image_processor, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "horse", "generator": generator, "guidance_scale": 4.0, "num_inference_steps": 2, "output_type": "np", } return inputs class KandinskyV22PriorPipelineFastTests(PipelineTesterMixin, unittest.TestCase): pipeline_class = KandinskyV22PriorPipeline params = ["prompt"] batch_params = ["prompt", "negative_prompt"] required_optional_params = [ "num_images_per_prompt", "generator", "num_inference_steps", "latents", "negative_prompt", "guidance_scale", "output_type", "return_dict", ] callback_cfg_params = ["prompt_embeds", "text_encoder_hidden_states", "text_mask"] test_xformers_attention = False def get_dummy_components(self): dummies = Dummies() return dummies.get_dummy_components() def get_dummy_inputs(self, device, seed=0): dummies = Dummies() return dummies.get_dummy_inputs(device=device, seed=seed) def test_kandinsky_prior(self): device = "cpu" components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) output = pipe(**self.get_dummy_inputs(device)) image = output.image_embeds image_from_tuple = pipe( **self.get_dummy_inputs(device), return_dict=False, )[0] image_slice = image[0, -10:] image_from_tuple_slice = image_from_tuple[0, -10:] assert image.shape == (1, 32) expected_slice = np.array( [-0.0532, 1.7120, 0.3656, -1.0852, -0.8946, -1.1756, 0.4348, 0.2482, 0.5146, -0.1156] ) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 @skip_mps def test_inference_batch_single_identical(self): self._test_inference_batch_single_identical(expected_max_diff=1e-3) @skip_mps def test_attention_slicing_forward_pass(self): test_max_difference = torch_device == "cpu" test_mean_pixel_difference = False self._test_attention_slicing_forward_pass( test_max_difference=test_max_difference, test_mean_pixel_difference=test_mean_pixel_difference, ) # override default test because no output_type "latent", use "pt" instead def test_callback_inputs(self): sig = inspect.signature(self.pipeline_class.__call__) if not ("callback_on_step_end_tensor_inputs" in sig.parameters and "callback_on_step_end" in sig.parameters): return components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) self.assertTrue( hasattr(pipe, "_callback_tensor_inputs"), f" {self.pipeline_class} should have `_callback_tensor_inputs` that defines a list of tensor variables its callback function can use as inputs", ) def callback_inputs_test(pipe, i, t, callback_kwargs): missing_callback_inputs = set() for v in pipe._callback_tensor_inputs: if v not in callback_kwargs: missing_callback_inputs.add(v) self.assertTrue( len(missing_callback_inputs) == 0, f"Missing callback tensor inputs: {missing_callback_inputs}" ) last_i = pipe.num_timesteps - 1 if i == last_i: callback_kwargs["latents"] = torch.zeros_like(callback_kwargs["latents"]) return callback_kwargs inputs = self.get_dummy_inputs(torch_device) inputs["callback_on_step_end"] = callback_inputs_test inputs["callback_on_step_end_tensor_inputs"] = pipe._callback_tensor_inputs inputs["num_inference_steps"] = 2 inputs["output_type"] = "pt" output = pipe(**inputs)[0] assert output.abs().sum() == 0
diffusers/tests/pipelines/kandinsky2_2/test_kandinsky_prior.py/0
{ "file_path": "diffusers/tests/pipelines/kandinsky2_2/test_kandinsky_prior.py", "repo_id": "diffusers", "token_count": 4049 }
248
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import random import tempfile import unittest import numpy as np import torch from PIL import Image from transformers import ( CLIPTextConfig, CLIPTextModel, CLIPTokenizer, DPTConfig, DPTFeatureExtractor, DPTForDepthEstimation, ) from diffusers import ( AutoencoderKL, DDIMScheduler, DPMSolverMultistepScheduler, LMSDiscreteScheduler, PNDMScheduler, StableDiffusionDepth2ImgPipeline, UNet2DConditionModel, ) from diffusers.utils import is_accelerate_available, is_accelerate_version from diffusers.utils.testing_utils import ( enable_full_determinism, floats_tensor, load_image, load_numpy, nightly, require_torch_gpu, skip_mps, slow, torch_device, ) from ..pipeline_params import ( IMAGE_TO_IMAGE_IMAGE_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS, TEXT_TO_IMAGE_CALLBACK_CFG_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, ) from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin enable_full_determinism() @skip_mps class StableDiffusionDepth2ImgPipelineFastTests( PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase ): pipeline_class = StableDiffusionDepth2ImgPipeline test_save_load_optional_components = False params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"height", "width"} required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"} batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS image_params = IMAGE_TO_IMAGE_IMAGE_PARAMS image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS callback_cfg_params = TEXT_TO_IMAGE_CALLBACK_CFG_PARAMS.union({"depth_mask"}) def get_dummy_components(self): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=5, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), cross_attention_dim=32, attention_head_dim=(2, 4), use_linear_projection=True, ) scheduler = PNDMScheduler(skip_prk_steps=True) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") backbone_config = { "global_padding": "same", "layer_type": "bottleneck", "depths": [3, 4, 9], "out_features": ["stage1", "stage2", "stage3"], "embedding_dynamic_padding": True, "hidden_sizes": [96, 192, 384, 768], "num_groups": 2, } depth_estimator_config = DPTConfig( image_size=32, 
patch_size=16, num_channels=3, hidden_size=32, num_hidden_layers=4, backbone_out_indices=(0, 1, 2, 3), num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, is_decoder=False, initializer_range=0.02, is_hybrid=True, backbone_config=backbone_config, backbone_featmap_shape=[1, 384, 24, 24], ) depth_estimator = DPTForDepthEstimation(depth_estimator_config).eval() feature_extractor = DPTFeatureExtractor.from_pretrained( "hf-internal-testing/tiny-random-DPTForDepthEstimation" ) components = { "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "depth_estimator": depth_estimator, "feature_extractor": feature_extractor, } return components def get_dummy_inputs(self, device, seed=0): image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)) image = image.cpu().permute(0, 2, 3, 1)[0] image = Image.fromarray(np.uint8(image)).convert("RGB").resize((32, 32)) if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "A painting of a squirrel eating a burger", "image": image, "generator": generator, "num_inference_steps": 2, "guidance_scale": 6.0, "output_type": "np", } return inputs def test_save_load_local(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) output = pipe(**inputs)[0] with tempfile.TemporaryDirectory() as tmpdir: pipe.save_pretrained(tmpdir) pipe_loaded = self.pipeline_class.from_pretrained(tmpdir) pipe_loaded.to(torch_device) pipe_loaded.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) output_loaded = pipe_loaded(**inputs)[0] max_diff = np.abs(output - output_loaded).max() self.assertLess(max_diff, 1e-4) @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA") def test_save_load_float16(self): components = self.get_dummy_components() for name, module in components.items(): if hasattr(module, "half"): components[name] = module.to(torch_device).half() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) output = pipe(**inputs)[0] with tempfile.TemporaryDirectory() as tmpdir: pipe.save_pretrained(tmpdir) pipe_loaded = self.pipeline_class.from_pretrained(tmpdir, torch_dtype=torch.float16) pipe_loaded.to(torch_device) pipe_loaded.set_progress_bar_config(disable=None) for name, component in pipe_loaded.components.items(): if hasattr(component, "dtype"): self.assertTrue( component.dtype == torch.float16, f"`{name}.dtype` switched from `float16` to {component.dtype} after loading.", ) inputs = self.get_dummy_inputs(torch_device) output_loaded = pipe_loaded(**inputs)[0] max_diff = np.abs(output - output_loaded).max() self.assertLess(max_diff, 2e-2, "The output of the fp16 pipeline changed after saving and loading.") @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA") def test_float16_inference(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) for name, module in components.items(): if hasattr(module, "half"): components[name] = module.half() pipe_fp16 = self.pipeline_class(**components) pipe_fp16.to(torch_device) pipe_fp16.set_progress_bar_config(disable=None) 
output = pipe(**self.get_dummy_inputs(torch_device))[0] output_fp16 = pipe_fp16(**self.get_dummy_inputs(torch_device))[0] max_diff = np.abs(output - output_fp16).max() self.assertLess(max_diff, 1.3e-2, "The outputs of the fp16 and fp32 pipelines are too different.") @unittest.skipIf( torch_device != "cuda" or not is_accelerate_available() or is_accelerate_version("<", "0.14.0"), reason="CPU offload is only available with CUDA and `accelerate v0.14.0` or higher", ) def test_cpu_offload_forward_pass(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) output_without_offload = pipe(**inputs)[0] pipe.enable_sequential_cpu_offload() inputs = self.get_dummy_inputs(torch_device) output_with_offload = pipe(**inputs)[0] max_diff = np.abs(output_with_offload - output_without_offload).max() self.assertLess(max_diff, 1e-4, "CPU offloading should not affect the inference results") def test_dict_tuple_outputs_equivalent(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) output = pipe(**self.get_dummy_inputs(torch_device))[0] output_tuple = pipe(**self.get_dummy_inputs(torch_device), return_dict=False)[0] max_diff = np.abs(output - output_tuple).max() self.assertLess(max_diff, 1e-4) def test_progress_bar(self): super().test_progress_bar() def test_stable_diffusion_depth2img_default_case(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = StableDiffusionDepth2ImgPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = pipe(**inputs).images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 32, 32, 3) if torch_device == "mps": expected_slice = np.array([0.6071, 0.5035, 0.4378, 0.5776, 0.5753, 0.4316, 0.4513, 0.5263, 0.4546]) else: expected_slice = np.array([0.5435, 0.4992, 0.3783, 0.4411, 0.5842, 0.4654, 0.3786, 0.5077, 0.4655]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 def test_stable_diffusion_depth2img_negative_prompt(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = StableDiffusionDepth2ImgPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) negative_prompt = "french fries" output = pipe(**inputs, negative_prompt=negative_prompt) image = output.images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 32, 32, 3) if torch_device == "mps": expected_slice = np.array([0.6296, 0.5125, 0.3890, 0.4456, 0.5955, 0.4621, 0.3810, 0.5310, 0.4626]) else: expected_slice = np.array([0.6012, 0.4507, 0.3769, 0.4121, 0.5566, 0.4585, 0.3803, 0.5045, 0.4631]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 def test_stable_diffusion_depth2img_multiple_init_images(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = StableDiffusionDepth2ImgPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) inputs["prompt"] = [inputs["prompt"]] * 2 inputs["image"] = 2 * [inputs["image"]] image = pipe(**inputs).images image_slice = image[-1, -3:, -3:, -1] assert 
image.shape == (2, 32, 32, 3) if torch_device == "mps": expected_slice = np.array([0.6501, 0.5150, 0.4939, 0.6688, 0.5437, 0.5758, 0.5115, 0.4406, 0.4551]) else: expected_slice = np.array([0.6557, 0.6214, 0.6254, 0.5775, 0.4785, 0.5949, 0.5904, 0.4785, 0.4730]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 def test_stable_diffusion_depth2img_pil(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = StableDiffusionDepth2ImgPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = pipe(**inputs).images image_slice = image[0, -3:, -3:, -1] if torch_device == "mps": expected_slice = np.array([0.53232, 0.47015, 0.40868, 0.45651, 0.4891, 0.4668, 0.4287, 0.48822, 0.47439]) else: expected_slice = np.array([0.5435, 0.4992, 0.3783, 0.4411, 0.5842, 0.4654, 0.3786, 0.5077, 0.4655]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 @skip_mps def test_attention_slicing_forward_pass(self): return super().test_attention_slicing_forward_pass() def test_inference_batch_single_identical(self): super().test_inference_batch_single_identical(expected_max_diff=7e-3) @slow @require_torch_gpu class StableDiffusionDepth2ImgPipelineSlowTests(unittest.TestCase): def setUp(self): super().setUp() gc.collect() torch.cuda.empty_cache() def tearDown(self): super().tearDown() gc.collect() torch.cuda.empty_cache() def get_inputs(self, device="cpu", dtype=torch.float32, seed=0): generator = torch.Generator(device=device).manual_seed(seed) init_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/depth2img/two_cats.png" ) inputs = { "prompt": "two tigers", "image": init_image, "generator": generator, "num_inference_steps": 3, "strength": 0.75, "guidance_scale": 7.5, "output_type": "np", } return inputs def test_stable_diffusion_depth2img_pipeline_default(self): pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-2-depth", safety_checker=None ) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) pipe.enable_attention_slicing() inputs = self.get_inputs() image = pipe(**inputs).images image_slice = image[0, 253:256, 253:256, -1].flatten() assert image.shape == (1, 480, 640, 3) expected_slice = np.array([0.5435, 0.4992, 0.3783, 0.4411, 0.5842, 0.4654, 0.3786, 0.5077, 0.4655]) assert np.abs(expected_slice - image_slice).max() < 6e-1 def test_stable_diffusion_depth2img_pipeline_k_lms(self): pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-2-depth", safety_checker=None ) pipe.unet.set_default_attn_processor() pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) pipe.enable_attention_slicing() inputs = self.get_inputs() image = pipe(**inputs).images image_slice = image[0, 253:256, 253:256, -1].flatten() assert image.shape == (1, 480, 640, 3) expected_slice = np.array([0.6363, 0.6274, 0.6309, 0.6370, 0.6226, 0.6286, 0.6213, 0.6453, 0.6306]) assert np.abs(expected_slice - image_slice).max() < 8e-4 def test_stable_diffusion_depth2img_pipeline_ddim(self): pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-2-depth", safety_checker=None ) pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) 
pipe.enable_attention_slicing() inputs = self.get_inputs() image = pipe(**inputs).images image_slice = image[0, 253:256, 253:256, -1].flatten() assert image.shape == (1, 480, 640, 3) expected_slice = np.array([0.6424, 0.6524, 0.6249, 0.6041, 0.6634, 0.6420, 0.6522, 0.6555, 0.6436]) assert np.abs(expected_slice - image_slice).max() < 5e-4 def test_stable_diffusion_depth2img_intermediate_state(self): number_of_steps = 0 def callback_fn(step: int, timestep: int, latents: torch.Tensor) -> None: callback_fn.has_been_called = True nonlocal number_of_steps number_of_steps += 1 if step == 1: latents = latents.detach().cpu().numpy() assert latents.shape == (1, 4, 60, 80) latents_slice = latents[0, -3:, -3:, -1] expected_slice = np.array( [-0.7168, -1.5137, -0.1418, -2.9219, -2.7266, -2.4414, -2.1035, -3.0078, -1.7051] ) assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 elif step == 2: latents = latents.detach().cpu().numpy() assert latents.shape == (1, 4, 60, 80) latents_slice = latents[0, -3:, -3:, -1] expected_slice = np.array( [-0.7109, -1.5068, -0.1403, -2.9160, -2.7207, -2.4414, -2.1035, -3.0059, -1.7090] ) assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 callback_fn.has_been_called = False pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-2-depth", safety_checker=None, torch_dtype=torch.float16 ) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) pipe.enable_attention_slicing() inputs = self.get_inputs(dtype=torch.float16) pipe(**inputs, callback=callback_fn, callback_steps=1) assert callback_fn.has_been_called assert number_of_steps == 2 def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self): torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() torch.cuda.reset_peak_memory_stats() pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-2-depth", safety_checker=None, torch_dtype=torch.float16 ) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) pipe.enable_attention_slicing(1) pipe.enable_sequential_cpu_offload() inputs = self.get_inputs(dtype=torch.float16) _ = pipe(**inputs) mem_bytes = torch.cuda.max_memory_allocated() # make sure that less than 2.9 GB is allocated assert mem_bytes < 2.9 * 10**9 @nightly @require_torch_gpu class StableDiffusionImg2ImgPipelineNightlyTests(unittest.TestCase): def setUp(self): super().setUp() gc.collect() torch.cuda.empty_cache() def tearDown(self): super().tearDown() gc.collect() torch.cuda.empty_cache() def get_inputs(self, device="cpu", dtype=torch.float32, seed=0): generator = torch.Generator(device=device).manual_seed(seed) init_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/depth2img/two_cats.png" ) inputs = { "prompt": "two tigers", "image": init_image, "generator": generator, "num_inference_steps": 3, "strength": 0.75, "guidance_scale": 7.5, "output_type": "np", } return inputs def test_depth2img_pndm(self): pipe = StableDiffusionDepth2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-depth") pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs() image = pipe(**inputs).images[0] expected_image = load_numpy( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" "/stable_diffusion_depth2img/stable_diffusion_2_0_pndm.npy" ) max_diff = np.abs(expected_image - image).max() assert max_diff < 1e-3 def test_depth2img_ddim(self): pipe = 
StableDiffusionDepth2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-depth") pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs() image = pipe(**inputs).images[0] expected_image = load_numpy( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" "/stable_diffusion_depth2img/stable_diffusion_2_0_ddim.npy" ) max_diff = np.abs(expected_image - image).max() assert max_diff < 1e-3 def test_img2img_lms(self): pipe = StableDiffusionDepth2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-depth") pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs() image = pipe(**inputs).images[0] expected_image = load_numpy( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" "/stable_diffusion_depth2img/stable_diffusion_2_0_lms.npy" ) max_diff = np.abs(expected_image - image).max() assert max_diff < 1e-3 def test_img2img_dpm(self): pipe = StableDiffusionDepth2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-depth") pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs() inputs["num_inference_steps"] = 30 image = pipe(**inputs).images[0] expected_image = load_numpy( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" "/stable_diffusion_depth2img/stable_diffusion_2_0_dpm_multi.npy" ) max_diff = np.abs(expected_image - image).max() assert max_diff < 1e-3
diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_depth.py/0
{ "file_path": "diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_depth.py", "repo_id": "diffusers", "token_count": 11126 }
249
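The file above exercises `StableDiffusionDepth2ImgPipeline` end to end, but it is stored here as one flattened text field, so the intent is easy to miss. Below is a minimal usage sketch of the pipeline the slow tests target; it assumes a CUDA device with enough memory and reuses the checkpoint, prompt, and conditioning image named in the tests' `get_inputs`.

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

# Same checkpoint and conditioning image the slow tests above download.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/depth2img/two_cats.png"
)

# `strength` controls how far the depth-conditioned denoising may drift from the input image.
image = pipe(
    prompt="two tigers",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
    num_inference_steps=30,
    generator=torch.Generator(device="cpu").manual_seed(0),
).images[0]
image.save("two_tigers_depth2img.png")
```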
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib import gc import inspect import io import re import tempfile import unittest import numpy as np import torch from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer from diffusers import AutoencoderKL, DDIMScheduler, TextToVideoZeroSDXLPipeline, UNet2DConditionModel from diffusers.utils.import_utils import is_accelerate_available, is_accelerate_version from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu, torch_device from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS from ..test_pipelines_common import PipelineFromPipeTesterMixin, PipelineTesterMixin enable_full_determinism() def to_np(tensor): if isinstance(tensor, torch.Tensor): tensor = tensor.detach().cpu().numpy() return tensor class TextToVideoZeroSDXLPipelineFastTests(PipelineTesterMixin, PipelineFromPipeTesterMixin, unittest.TestCase): pipeline_class = TextToVideoZeroSDXLPipeline params = TEXT_TO_IMAGE_PARAMS batch_params = TEXT_TO_IMAGE_BATCH_PARAMS image_params = TEXT_TO_IMAGE_IMAGE_PARAMS image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS generator_device = "cpu" def get_dummy_components(self, seed=0): torch.manual_seed(seed) unet = UNet2DConditionModel( block_out_channels=(2, 4), layers_per_block=2, sample_size=2, norm_num_groups=2, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) scheduler = DDIMScheduler( num_train_timesteps=1000, beta_start=0.0001, beta_end=0.02, beta_schedule="linear", trained_betas=None, clip_sample=True, set_alpha_to_one=True, steps_offset=0, prediction_type="epsilon", thresholding=False, dynamic_thresholding_ratio=0.995, clip_sample_range=1.0, sample_max_value=1.0, timestep_spacing="leading", rescale_betas_zero_snr=False, ) torch.manual_seed(seed) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, sample_size=128, ) torch.manual_seed(seed) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=32, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) tokenizer_2 = 
CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") components = { "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, "image_encoder": None, "feature_extractor": None, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "A panda dancing in Antarctica", "generator": generator, "num_inference_steps": 5, "t0": 1, "t1": 3, "height": 64, "width": 64, "video_length": 3, "output_type": "np", } return inputs def get_generator(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) return generator def test_text_to_video_zero_sdxl(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) inputs = self.get_dummy_inputs(self.generator_device) result = pipe(**inputs).images first_frame_slice = result[0, -3:, -3:, -1] last_frame_slice = result[-1, -3:, -3:, 0] expected_slice1 = np.array([0.48, 0.58, 0.53, 0.59, 0.50, 0.44, 0.60, 0.65, 0.52]) expected_slice2 = np.array([0.66, 0.49, 0.40, 0.70, 0.47, 0.51, 0.73, 0.65, 0.52]) assert np.abs(first_frame_slice.flatten() - expected_slice1).max() < 1e-2 assert np.abs(last_frame_slice.flatten() - expected_slice2).max() < 1e-2 @unittest.skip( reason="Cannot call `set_default_attn_processor` as this pipeline uses a specific attention processor." ) def test_attention_slicing_forward_pass(self): pass def test_cfg(self): sig = inspect.signature(self.pipeline_class.__call__) if "guidance_scale" not in sig.parameters: return components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(self.generator_device) inputs["guidance_scale"] = 1.0 out_no_cfg = pipe(**inputs)[0] inputs["guidance_scale"] = 7.5 out_cfg = pipe(**inputs)[0] assert out_cfg.shape == out_no_cfg.shape def test_dict_tuple_outputs_equivalent(self, expected_max_difference=1e-4): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) output = pipe(**self.get_dummy_inputs(self.generator_device))[0] output_tuple = pipe(**self.get_dummy_inputs(self.generator_device), return_dict=False)[0] max_diff = np.abs(to_np(output) - to_np(output_tuple)).max() self.assertLess(max_diff, expected_max_difference) @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA") def test_float16_inference(self, expected_max_diff=5e-2): components = self.get_dummy_components() for name, module in components.items(): if hasattr(module, "half"): components[name] = module.to(torch_device).half() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) components = self.get_dummy_components() pipe_fp16 = self.pipeline_class(**components) pipe_fp16.to(torch_device, torch.float16) pipe_fp16.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(self.generator_device) # # Reset generator in case it is used inside dummy inputs if "generator" in inputs: inputs["generator"] = self.get_generator(self.generator_device) output = pipe(**inputs)[0] fp16_inputs = 
self.get_dummy_inputs(self.generator_device) # Reset generator in case it is used inside dummy inputs if "generator" in fp16_inputs: fp16_inputs["generator"] = self.get_generator(self.generator_device) output_fp16 = pipe_fp16(**fp16_inputs)[0] max_diff = np.abs(to_np(output) - to_np(output_fp16)).max() self.assertLess(max_diff, expected_max_diff, "The outputs of the fp16 and fp32 pipelines are too different.") @unittest.skip(reason="Batching needs to be properly figured out first for this pipeline.") def test_inference_batch_consistent(self): pass @unittest.skip( reason="Cannot call `set_default_attn_processor` as this pipeline uses a specific attention processor." ) def test_inference_batch_single_identical(self): pass @unittest.skipIf( torch_device != "cuda" or not is_accelerate_available() or is_accelerate_version("<", "0.17.0"), reason="CPU offload is only available with CUDA and `accelerate v0.17.0` or higher", ) def test_model_cpu_offload_forward_pass(self, expected_max_diff=2e-4): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(self.generator_device) output_without_offload = pipe(**inputs)[0] pipe.enable_model_cpu_offload() inputs = self.get_dummy_inputs(self.generator_device) output_with_offload = pipe(**inputs)[0] max_diff = np.abs(to_np(output_with_offload) - to_np(output_without_offload)).max() self.assertLess(max_diff, expected_max_diff, "CPU offloading should not affect the inference results") @unittest.skip(reason="`num_images_per_prompt` argument is not supported for this pipeline.") def test_pipeline_call_signature(self): pass def test_progress_bar(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) inputs = self.get_dummy_inputs(self.generator_device) with io.StringIO() as stderr, contextlib.redirect_stderr(stderr): _ = pipe(**inputs) stderr = stderr.getvalue() # we can't calculate the number of progress steps beforehand e.g. for strength-dependent img2img, # so we just match "5" in "#####| 1/5 [00:01<00:00]" max_steps = re.search("/(.*?) 
", stderr).group(1) self.assertTrue(max_steps is not None and len(max_steps) > 0) self.assertTrue( f"{max_steps}/{max_steps}" in stderr, "Progress bar should be enabled and stopped at the max step" ) pipe.set_progress_bar_config(disable=True) with io.StringIO() as stderr, contextlib.redirect_stderr(stderr): _ = pipe(**inputs) self.assertTrue(stderr.getvalue() == "", "Progress bar should be disabled") @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA") def test_save_load_float16(self, expected_max_diff=1e-2): components = self.get_dummy_components() for name, module in components.items(): if hasattr(module, "half"): components[name] = module.to(torch_device).half() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(self.generator_device) output = pipe(**inputs)[0] with tempfile.TemporaryDirectory() as tmpdir: pipe.save_pretrained(tmpdir) pipe_loaded = self.pipeline_class.from_pretrained(tmpdir, torch_dtype=torch.float16) pipe_loaded.to(torch_device) pipe_loaded.set_progress_bar_config(disable=None) for name, component in pipe_loaded.components.items(): if hasattr(component, "dtype"): self.assertTrue( component.dtype == torch.float16, f"`{name}.dtype` switched from `float16` to {component.dtype} after loading.", ) inputs = self.get_dummy_inputs(self.generator_device) output_loaded = pipe_loaded(**inputs)[0] max_diff = np.abs(to_np(output) - to_np(output_loaded)).max() self.assertLess( max_diff, expected_max_diff, "The output of the fp16 pipeline changed after saving and loading." ) @unittest.skip( reason="Cannot call `set_default_attn_processor` as this pipeline uses a specific attention processor." ) def test_save_load_local(self): pass @unittest.skip( reason="Cannot call `set_default_attn_processor` as this pipeline uses a specific attention processor." ) def test_save_load_optional_components(self): pass @unittest.skip( reason="Cannot call `set_default_attn_processor` as this pipeline uses a specific attention processor." ) def test_sequential_cpu_offload_forward_pass(self): pass @unittest.skipIf(torch_device != "cuda", reason="CUDA and CPU are required to switch devices") def test_to_device(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.set_progress_bar_config(disable=None) pipe.to("cpu") model_devices = [component.device.type for component in components.values() if hasattr(component, "device")] self.assertTrue(all(device == "cpu" for device in model_devices)) output_cpu = pipe(**self.get_dummy_inputs("cpu"))[0] # generator set to cpu self.assertTrue(np.isnan(output_cpu).sum() == 0) pipe.to("cuda") model_devices = [component.device.type for component in components.values() if hasattr(component, "device")] self.assertTrue(all(device == "cuda" for device in model_devices)) output_cuda = pipe(**self.get_dummy_inputs("cpu"))[0] # generator set to cpu self.assertTrue(np.isnan(to_np(output_cuda)).sum() == 0) @unittest.skip( reason="Cannot call `set_default_attn_processor` as this pipeline uses a specific attention processor." 
) def test_xformers_attention_forwardGenerator_pass(self): pass @nightly @require_torch_gpu class TextToVideoZeroSDXLPipelineSlowTests(unittest.TestCase): def setUp(self): # clean up the VRAM before each test super().setUp() gc.collect() torch.cuda.empty_cache() def tearDown(self): # clean up the VRAM after each test super().tearDown() gc.collect() torch.cuda.empty_cache() def test_full_model(self): model_id = "stabilityai/stable-diffusion-xl-base-1.0" pipe = TextToVideoZeroSDXLPipeline.from_pretrained( model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True ) pipe.enable_model_cpu_offload() pipe.enable_vae_slicing() pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) generator = torch.Generator(device="cpu").manual_seed(0) prompt = "A panda dancing in Antarctica" result = pipe(prompt=prompt, generator=generator).images first_frame_slice = result[0, -3:, -3:, -1] last_frame_slice = result[-1, -3:, -3:, 0] expected_slice1 = np.array([0.57, 0.57, 0.57, 0.57, 0.57, 0.56, 0.55, 0.56, 0.56]) expected_slice2 = np.array([0.54, 0.53, 0.53, 0.53, 0.53, 0.52, 0.53, 0.53, 0.53]) assert np.abs(first_frame_slice.flatten() - expected_slice1).max() < 1e-2 assert np.abs(last_frame_slice.flatten() - expected_slice2).max() < 1e-2
diffusers/tests/pipelines/text_to_video_synthesis/test_text_to_video_zero_sdxl.py/0
{ "file_path": "diffusers/tests/pipelines/text_to_video_synthesis/test_text_to_video_zero_sdxl.py", "repo_id": "diffusers", "token_count": 7350 }
250
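For context on what `test_full_model` in the file above checks, here is the same call path as a standalone sketch. It mirrors the slow test's setup (SDXL base weights, DDIM scheduler, CPU offload, VAE slicing) and assumes a GPU with enough memory for fp16 SDXL.

```python
import torch
from diffusers import DDIMScheduler, TextToVideoZeroSDXLPipeline

pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator(device="cpu").manual_seed(0)
# `images` holds the generated frames as numpy arrays; the tests above slice into them directly.
frames = pipe(prompt="A panda dancing in Antarctica", generator=generator).images
print(len(frames), frames[0].shape)
```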
import torch from diffusers import DDPMScheduler from .test_schedulers import SchedulerCommonTest class DDPMSchedulerTest(SchedulerCommonTest): scheduler_classes = (DDPMScheduler,) def get_scheduler_config(self, **kwargs): config = { "num_train_timesteps": 1000, "beta_start": 0.0001, "beta_end": 0.02, "beta_schedule": "linear", "variance_type": "fixed_small", "clip_sample": True, } config.update(**kwargs) return config def test_timesteps(self): for timesteps in [1, 5, 100, 1000]: self.check_over_configs(num_train_timesteps=timesteps) def test_betas(self): for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]): self.check_over_configs(beta_start=beta_start, beta_end=beta_end) def test_schedules(self): for schedule in ["linear", "squaredcos_cap_v2"]: self.check_over_configs(beta_schedule=schedule) def test_variance_type(self): for variance in ["fixed_small", "fixed_large", "other"]: self.check_over_configs(variance_type=variance) def test_clip_sample(self): for clip_sample in [True, False]: self.check_over_configs(clip_sample=clip_sample) def test_thresholding(self): self.check_over_configs(thresholding=False) for threshold in [0.5, 1.0, 2.0]: for prediction_type in ["epsilon", "sample", "v_prediction"]: self.check_over_configs( thresholding=True, prediction_type=prediction_type, sample_max_value=threshold, ) def test_prediction_type(self): for prediction_type in ["epsilon", "sample", "v_prediction"]: self.check_over_configs(prediction_type=prediction_type) def test_time_indices(self): for t in [0, 500, 999]: self.check_over_forward(time_step=t) def test_variance(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) assert torch.sum(torch.abs(scheduler._get_variance(0) - 0.0)) < 1e-5 assert torch.sum(torch.abs(scheduler._get_variance(487) - 0.00979)) < 1e-5 assert torch.sum(torch.abs(scheduler._get_variance(999) - 0.02)) < 1e-5 def test_rescale_betas_zero_snr(self): for rescale_betas_zero_snr in [True, False]: self.check_over_configs(rescale_betas_zero_snr=rescale_betas_zero_snr) def test_full_loop_no_noise(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) num_trained_timesteps = len(scheduler) model = self.dummy_model() sample = self.dummy_sample_deter generator = torch.manual_seed(0) for t in reversed(range(num_trained_timesteps)): # 1. predict noise residual residual = model(sample, t) # 2. predict previous mean of sample x_t-1 pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample # if t > 0: # noise = self.dummy_sample_deter # variance = scheduler.get_variance(t) ** (0.5) * noise # # sample = pred_prev_sample + variance sample = pred_prev_sample result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) assert abs(result_sum.item() - 258.9606) < 1e-2 assert abs(result_mean.item() - 0.3372) < 1e-3 def test_full_loop_with_v_prediction(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config(prediction_type="v_prediction") scheduler = scheduler_class(**scheduler_config) num_trained_timesteps = len(scheduler) model = self.dummy_model() sample = self.dummy_sample_deter generator = torch.manual_seed(0) for t in reversed(range(num_trained_timesteps)): # 1. predict noise residual residual = model(sample, t) # 2. 
predict previous mean of sample x_t-1 pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample # if t > 0: # noise = self.dummy_sample_deter # variance = scheduler.get_variance(t) ** (0.5) * noise # # sample = pred_prev_sample + variance sample = pred_prev_sample result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) assert abs(result_sum.item() - 202.0296) < 1e-2 assert abs(result_mean.item() - 0.2631) < 1e-3 def test_custom_timesteps(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [100, 87, 50, 1, 0] scheduler.set_timesteps(timesteps=timesteps) scheduler_timesteps = scheduler.timesteps for i, timestep in enumerate(scheduler_timesteps): if i == len(timesteps) - 1: expected_prev_t = -1 else: expected_prev_t = timesteps[i + 1] prev_t = scheduler.previous_timestep(timestep) prev_t = prev_t.item() self.assertEqual(prev_t, expected_prev_t) def test_custom_timesteps_increasing_order(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [100, 87, 50, 51, 0] with self.assertRaises(ValueError, msg="`custom_timesteps` must be in descending order."): scheduler.set_timesteps(timesteps=timesteps) def test_custom_timesteps_passing_both_num_inference_steps_and_timesteps(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [100, 87, 50, 1, 0] num_inference_steps = len(timesteps) with self.assertRaises(ValueError, msg="Can only pass one of `num_inference_steps` or `custom_timesteps`."): scheduler.set_timesteps(num_inference_steps=num_inference_steps, timesteps=timesteps) def test_custom_timesteps_too_large(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [scheduler.config.num_train_timesteps] with self.assertRaises( ValueError, msg="`timesteps` must start before `self.config.train_timesteps`: {scheduler.config.num_train_timesteps}}", ): scheduler.set_timesteps(timesteps=timesteps) def test_full_loop_with_noise(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) num_trained_timesteps = len(scheduler) t_start = num_trained_timesteps - 2 model = self.dummy_model() sample = self.dummy_sample_deter generator = torch.manual_seed(0) # add noise noise = self.dummy_noise_deter timesteps = scheduler.timesteps[t_start * scheduler.order :] sample = scheduler.add_noise(sample, noise, timesteps[:1]) for t in timesteps: # 1. predict noise residual residual = model(sample, t) # 2. predict previous mean of sample x_t-1 pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample sample = pred_prev_sample result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) assert abs(result_sum.item() - 387.9466) < 1e-2, f" expected result sum 387.9466, but get {result_sum}" assert abs(result_mean.item() - 0.5051) < 1e-3, f" expected result mean 0.5051, but get {result_mean}"
diffusers/tests/schedulers/test_scheduler_ddpm.py/0
{ "file_path": "diffusers/tests/schedulers/test_scheduler_ddpm.py", "repo_id": "diffusers", "token_count": 3860 }
251
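The full-loop tests above drive `DDPMScheduler` with a dummy model over the entire training schedule. The sketch below shows that loop in isolation; the zero-returning `model` is a placeholder for a trained UNet and exists only to keep the example self-contained.

```python
import torch
from diffusers import DDPMScheduler

# Same configuration as get_scheduler_config() in the test above.
scheduler = DDPMScheduler(
    num_train_timesteps=1000,
    beta_start=0.0001,
    beta_end=0.02,
    beta_schedule="linear",
    variance_type="fixed_small",
    clip_sample=True,
)


def model(sample, t):
    # Placeholder denoiser; a real setup would call a trained UNet here.
    return torch.zeros_like(sample)


generator = torch.manual_seed(0)
sample = torch.randn((1, 3, 8, 8), generator=generator)

# Ancestral sampling over every training timestep, as in test_full_loop_no_noise.
for t in reversed(range(scheduler.config.num_train_timesteps)):
    residual = model(sample, t)
    sample = scheduler.step(residual, t, sample, generator=generator).prev_sample
```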
import tempfile from typing import Dict, List, Tuple import torch from diffusers import LCMScheduler from diffusers.utils.testing_utils import torch_device from .test_schedulers import SchedulerCommonTest class LCMSchedulerTest(SchedulerCommonTest): scheduler_classes = (LCMScheduler,) forward_default_kwargs = (("num_inference_steps", 10),) def get_scheduler_config(self, **kwargs): config = { "num_train_timesteps": 1000, "beta_start": 0.00085, "beta_end": 0.0120, "beta_schedule": "scaled_linear", "prediction_type": "epsilon", } config.update(**kwargs) return config @property def default_valid_timestep(self): kwargs = dict(self.forward_default_kwargs) num_inference_steps = kwargs.pop("num_inference_steps", None) scheduler_config = self.get_scheduler_config() scheduler = self.scheduler_classes[0](**scheduler_config) scheduler.set_timesteps(num_inference_steps) timestep = scheduler.timesteps[-1] return timestep def test_timesteps(self): for timesteps in [100, 500, 1000]: # 0 is not guaranteed to be in the timestep schedule, but timesteps - 1 is self.check_over_configs(time_step=timesteps - 1, num_train_timesteps=timesteps) def test_betas(self): for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]): self.check_over_configs(time_step=self.default_valid_timestep, beta_start=beta_start, beta_end=beta_end) def test_schedules(self): for schedule in ["linear", "scaled_linear", "squaredcos_cap_v2"]: self.check_over_configs(time_step=self.default_valid_timestep, beta_schedule=schedule) def test_prediction_type(self): for prediction_type in ["epsilon", "v_prediction"]: self.check_over_configs(time_step=self.default_valid_timestep, prediction_type=prediction_type) def test_clip_sample(self): for clip_sample in [True, False]: self.check_over_configs(time_step=self.default_valid_timestep, clip_sample=clip_sample) def test_thresholding(self): self.check_over_configs(time_step=self.default_valid_timestep, thresholding=False) for threshold in [0.5, 1.0, 2.0]: for prediction_type in ["epsilon", "v_prediction"]: self.check_over_configs( time_step=self.default_valid_timestep, thresholding=True, prediction_type=prediction_type, sample_max_value=threshold, ) def test_time_indices(self): # Get default timestep schedule. 
kwargs = dict(self.forward_default_kwargs) num_inference_steps = kwargs.pop("num_inference_steps", None) scheduler_config = self.get_scheduler_config() scheduler = self.scheduler_classes[0](**scheduler_config) scheduler.set_timesteps(num_inference_steps) timesteps = scheduler.timesteps for t in timesteps: self.check_over_forward(time_step=t) def test_inference_steps(self): # Hardcoded for now for t, num_inference_steps in zip([99, 39, 39, 19], [10, 25, 26, 50]): self.check_over_forward(time_step=t, num_inference_steps=num_inference_steps) # Override test_add_noise_device because the hardcoded num_inference_steps of 100 doesn't work # for LCMScheduler under default settings def test_add_noise_device(self, num_inference_steps=10): for scheduler_class in self.scheduler_classes: scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) scheduler.set_timesteps(num_inference_steps) sample = self.dummy_sample.to(torch_device) scaled_sample = scheduler.scale_model_input(sample, 0.0) self.assertEqual(sample.shape, scaled_sample.shape) noise = torch.randn_like(scaled_sample).to(torch_device) t = scheduler.timesteps[5][None] noised = scheduler.add_noise(scaled_sample, noise, t) self.assertEqual(noised.shape, scaled_sample.shape) # Override test_from_save_pretrained because it hardcodes a timestep of 1 def test_from_save_pretrained(self): kwargs = dict(self.forward_default_kwargs) num_inference_steps = kwargs.pop("num_inference_steps", None) for scheduler_class in self.scheduler_classes: timestep = self.default_valid_timestep scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) sample = self.dummy_sample residual = 0.1 * sample with tempfile.TemporaryDirectory() as tmpdirname: scheduler.save_config(tmpdirname) new_scheduler = scheduler_class.from_pretrained(tmpdirname) scheduler.set_timesteps(num_inference_steps) new_scheduler.set_timesteps(num_inference_steps) kwargs["generator"] = torch.manual_seed(0) output = scheduler.step(residual, timestep, sample, **kwargs).prev_sample kwargs["generator"] = torch.manual_seed(0) new_output = new_scheduler.step(residual, timestep, sample, **kwargs).prev_sample assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" # Override test_step_shape because uses 0 and 1 as hardcoded timesteps def test_step_shape(self): kwargs = dict(self.forward_default_kwargs) num_inference_steps = kwargs.pop("num_inference_steps", None) for scheduler_class in self.scheduler_classes: scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) sample = self.dummy_sample residual = 0.1 * sample scheduler.set_timesteps(num_inference_steps) timestep_0 = scheduler.timesteps[-2] timestep_1 = scheduler.timesteps[-1] output_0 = scheduler.step(residual, timestep_0, sample, **kwargs).prev_sample output_1 = scheduler.step(residual, timestep_1, sample, **kwargs).prev_sample self.assertEqual(output_0.shape, sample.shape) self.assertEqual(output_0.shape, output_1.shape) # Override test_set_scheduler_outputs_equivalence since it uses 0 as a hardcoded timestep def test_scheduler_outputs_equivalence(self): def set_nan_tensor_to_zero(t): t[t != t] = 0 return t def recursive_check(tuple_object, dict_object): if isinstance(tuple_object, (List, Tuple)): for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object.values()): recursive_check(tuple_iterable_value, dict_iterable_value) elif isinstance(tuple_object, Dict): for 
tuple_iterable_value, dict_iterable_value in zip(tuple_object.values(), dict_object.values()): recursive_check(tuple_iterable_value, dict_iterable_value) elif tuple_object is None: return else: self.assertTrue( torch.allclose( set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5 ), msg=( "Tuple and dict output are not equal. Difference:" f" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:" f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has" f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}." ), ) kwargs = dict(self.forward_default_kwargs) num_inference_steps = kwargs.pop("num_inference_steps", 50) timestep = self.default_valid_timestep for scheduler_class in self.scheduler_classes: scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) sample = self.dummy_sample residual = 0.1 * sample scheduler.set_timesteps(num_inference_steps) kwargs["generator"] = torch.manual_seed(0) outputs_dict = scheduler.step(residual, timestep, sample, **kwargs) scheduler.set_timesteps(num_inference_steps) kwargs["generator"] = torch.manual_seed(0) outputs_tuple = scheduler.step(residual, timestep, sample, return_dict=False, **kwargs) recursive_check(outputs_tuple, outputs_dict) def full_loop(self, num_inference_steps=10, seed=0, **config): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config(**config) scheduler = scheduler_class(**scheduler_config) model = self.dummy_model() sample = self.dummy_sample_deter generator = torch.manual_seed(seed) scheduler.set_timesteps(num_inference_steps) for t in scheduler.timesteps: residual = model(sample, t) sample = scheduler.step(residual, t, sample, generator).prev_sample return sample def test_full_loop_onestep(self): sample = self.full_loop(num_inference_steps=1) result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) # TODO: get expected sum and mean assert abs(result_sum.item() - 18.7097) < 1e-3 assert abs(result_mean.item() - 0.0244) < 1e-3 def test_full_loop_multistep(self): sample = self.full_loop(num_inference_steps=10) result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) # TODO: get expected sum and mean assert abs(result_sum.item() - 197.7616) < 1e-3 assert abs(result_mean.item() - 0.2575) < 1e-3 def test_custom_timesteps(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [100, 87, 50, 1, 0] scheduler.set_timesteps(timesteps=timesteps) scheduler_timesteps = scheduler.timesteps for i, timestep in enumerate(scheduler_timesteps): if i == len(timesteps) - 1: expected_prev_t = -1 else: expected_prev_t = timesteps[i + 1] prev_t = scheduler.previous_timestep(timestep) prev_t = prev_t.item() self.assertEqual(prev_t, expected_prev_t) def test_custom_timesteps_increasing_order(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [100, 87, 50, 51, 0] with self.assertRaises(ValueError, msg="`custom_timesteps` must be in descending order."): scheduler.set_timesteps(timesteps=timesteps) def test_custom_timesteps_passing_both_num_inference_steps_and_timesteps(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = 
[100, 87, 50, 1, 0] num_inference_steps = len(timesteps) with self.assertRaises(ValueError, msg="Can only pass one of `num_inference_steps` or `custom_timesteps`."): scheduler.set_timesteps(num_inference_steps=num_inference_steps, timesteps=timesteps) def test_custom_timesteps_too_large(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [scheduler.config.num_train_timesteps] with self.assertRaises( ValueError, msg="`timesteps` must start before `self.config.train_timesteps`: {scheduler.config.num_train_timesteps}}", ): scheduler.set_timesteps(timesteps=timesteps)
diffusers/tests/schedulers/test_scheduler_lcm.py/0
{ "file_path": "diffusers/tests/schedulers/test_scheduler_lcm.py", "repo_id": "diffusers", "token_count": 5668 }
252
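The `full_loop` helper in the LCM test above is the clearest summary of how this scheduler is meant to be driven: a handful of explicitly set inference steps rather than the full training schedule. A minimal standalone version, again with a placeholder model, looks like this.

```python
import torch
from diffusers import LCMScheduler

# Same configuration as get_scheduler_config() in the test above.
scheduler = LCMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    prediction_type="epsilon",
)


def model(sample, t):
    # Placeholder for a latent-consistency UNet; only the call signature matters here.
    return torch.zeros_like(sample)


generator = torch.manual_seed(0)
sample = torch.randn((1, 4, 8, 8), generator=generator)

scheduler.set_timesteps(10)  # LCM runs on a short, explicitly set timestep schedule
for t in scheduler.timesteps:
    residual = model(sample, t)
    sample = scheduler.step(residual, t, sample, generator).prev_sample
```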
import gc
import tempfile
import unittest

import torch

from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image
from diffusers.utils.testing_utils import (
    enable_full_determinism,
    numpy_cosine_similarity_distance,
    require_torch_gpu,
    slow,
)

from .single_file_testing_utils import (
    SDSingleFileTesterMixin,
    download_diffusers_config,
    download_original_config,
    download_single_file_checkpoint,
)


enable_full_determinism()


@slow
@require_torch_gpu
class StableDiffusionControlNetInpaintPipelineSingleFileSlowTests(unittest.TestCase, SDSingleFileTesterMixin):
    pipeline_class = StableDiffusionControlNetInpaintPipeline
    ckpt_path = "https://huggingface.co/runwayml/stable-diffusion-inpainting/blob/main/sd-v1-5-inpainting.ckpt"
    original_config = "https://raw.githubusercontent.com/runwayml/stable-diffusion/main/configs/stable-diffusion/v1-inpainting-inference.yaml"
    repo_id = "runwayml/stable-diffusion-inpainting"

    def setUp(self):
        super().setUp()
        gc.collect()
        torch.cuda.empty_cache()

    def tearDown(self):
        super().tearDown()
        gc.collect()
        torch.cuda.empty_cache()

    def get_inputs(self):
        control_image = load_image(
            "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
        ).resize((512, 512))
        image = load_image(
            "https://huggingface.co/lllyasviel/sd-controlnet-canny/resolve/main/images/bird.png"
        ).resize((512, 512))
        mask_image = load_image(
            "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
            "/stable_diffusion_inpaint/input_bench_mask.png"
        ).resize((512, 512))

        inputs = {
            "prompt": "bird",
            "image": image,
            "control_image": control_image,
            "mask_image": mask_image,
            "generator": torch.Generator(device="cpu").manual_seed(0),
            "num_inference_steps": 3,
            "output_type": "np",
        }

        return inputs

    def test_single_file_format_inference_is_same_as_pretrained(self):
        controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny")
        pipe = self.pipeline_class.from_pretrained(self.repo_id, controlnet=controlnet, safety_checker=None)
        pipe.unet.set_default_attn_processor()
        pipe.enable_model_cpu_offload()

        pipe_sf = self.pipeline_class.from_single_file(self.ckpt_path, controlnet=controlnet, safety_checker=None)
        pipe_sf.unet.set_default_attn_processor()
        pipe_sf.enable_model_cpu_offload()

        inputs = self.get_inputs()
        output = pipe(**inputs).images[0]

        inputs = self.get_inputs()
        output_sf = pipe_sf(**inputs).images[0]

        max_diff = numpy_cosine_similarity_distance(output_sf.flatten(), output.flatten())
        assert max_diff < 1e-3

    def test_single_file_components(self):
        controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny")
        pipe = self.pipeline_class.from_pretrained(
            self.repo_id, variant="fp16", safety_checker=None, controlnet=controlnet
        )
        pipe_single_file = self.pipeline_class.from_single_file(
            self.ckpt_path,
            safety_checker=None,
            controlnet=controlnet,
        )
        super()._compare_component_configs(pipe, pipe_single_file)

    def test_single_file_components_local_files_only(self):
        controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny")
        pipe = self.pipeline_class.from_pretrained(self.repo_id, safety_checker=None, controlnet=controlnet)

        with tempfile.TemporaryDirectory() as tmpdir:
            ckpt_filename = self.ckpt_path.split("/")[-1]
            local_ckpt_path = download_single_file_checkpoint(self.repo_id, ckpt_filename, tmpdir)

            pipe_single_file = self.pipeline_class.from_single_file(
                local_ckpt_path, controlnet=controlnet, safety_checker=None, local_files_only=True
            )

        super()._compare_component_configs(pipe, pipe_single_file)

    def test_single_file_components_with_original_config(self):
        controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny", variant="fp16")
        pipe = self.pipeline_class.from_pretrained(self.repo_id, controlnet=controlnet)
        pipe_single_file = self.pipeline_class.from_single_file(
            self.ckpt_path, controlnet=controlnet, original_config=self.original_config
        )
        super()._compare_component_configs(pipe, pipe_single_file)

    def test_single_file_components_with_original_config_local_files_only(self):
        controlnet = ControlNetModel.from_pretrained(
            "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16, variant="fp16"
        )
        pipe = self.pipeline_class.from_pretrained(
            self.repo_id,
            controlnet=controlnet,
            safety_checker=None,
        )

        with tempfile.TemporaryDirectory() as tmpdir:
            ckpt_filename = self.ckpt_path.split("/")[-1]
            local_ckpt_path = download_single_file_checkpoint(self.repo_id, ckpt_filename, tmpdir)
            local_original_config = download_original_config(self.original_config, tmpdir)

            pipe_single_file = self.pipeline_class.from_single_file(
                local_ckpt_path,
                original_config=local_original_config,
                controlnet=controlnet,
                safety_checker=None,
                local_files_only=True,
            )

        super()._compare_component_configs(pipe, pipe_single_file)

    def test_single_file_components_with_diffusers_config(self):
        controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny", variant="fp16")
        pipe = self.pipeline_class.from_pretrained(self.repo_id, controlnet=controlnet)
        pipe_single_file = self.pipeline_class.from_single_file(
            self.ckpt_path,
            controlnet=controlnet,
            config=self.repo_id,
        )
        super()._compare_component_configs(pipe, pipe_single_file)

    def test_single_file_components_with_diffusers_config_local_files_only(self):
        controlnet = ControlNetModel.from_pretrained(
            "lllyasviel/control_v11p_sd15_canny",
            torch_dtype=torch.float16,
            variant="fp16",
        )
        pipe = self.pipeline_class.from_pretrained(
            self.repo_id,
            controlnet=controlnet,
            safety_checker=None,
        )

        with tempfile.TemporaryDirectory() as tmpdir:
            ckpt_filename = self.ckpt_path.split("/")[-1]
            local_ckpt_path = download_single_file_checkpoint(self.repo_id, ckpt_filename, tmpdir)
            local_diffusers_config = download_diffusers_config(self.repo_id, tmpdir)

            pipe_single_file = self.pipeline_class.from_single_file(
                local_ckpt_path,
                config=local_diffusers_config,
                controlnet=controlnet,
                safety_checker=None,
                local_files_only=True,
            )

        super()._compare_component_configs(pipe, pipe_single_file)
diffusers/tests/single_file/test_stable_diffusion_controlnet_inpaint_single_file.py/0
{ "file_path": "diffusers/tests/single_file/test_stable_diffusion_controlnet_inpaint_single_file.py", "repo_id": "diffusers", "token_count": 3338 }
253
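The single-file tests above all reduce to one comparison: a pipeline assembled from the multi-folder Hub repo versus one assembled from a single original-format checkpoint. As a quick reference, the two load paths look roughly like this; the checkpoint URL and repo id are the class attributes defined in the test, and fp16 weights are assumed only to keep memory use down.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16)

# Multi-folder Hub layout, as used by from_pretrained in the tests.
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)

# Single original-format checkpoint; the tests assert its outputs and configs match the pipeline above.
pipe_sf = StableDiffusionControlNetInpaintPipeline.from_single_file(
    "https://huggingface.co/runwayml/stable-diffusion-inpainting/blob/main/sd-v1-5-inpainting.ckpt",
    controlnet=controlnet,
    safety_checker=None,
    torch_dtype=torch.float16,
)
```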
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import importlib import inspect import os import re import warnings from collections import OrderedDict from difflib import get_close_matches from pathlib import Path from diffusers.models.auto import get_values from diffusers.utils import ENV_VARS_TRUE_VALUES, is_flax_available, is_torch_available # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_repo.py PATH_TO_DIFFUSERS = "src/diffusers" PATH_TO_TESTS = "tests" PATH_TO_DOC = "docs/source/en" # Update this list with models that are supposed to be private. PRIVATE_MODELS = [ "DPRSpanPredictor", "RealmBertModel", "T5Stack", "TFDPRSpanPredictor", ] # Update this list for models that are not tested with a comment explaining the reason it should not be. # Being in this list is an exception and should **not** be the rule. IGNORE_NON_TESTED = PRIVATE_MODELS.copy() + [ # models to ignore for not tested "OPTDecoder", # Building part of bigger (tested) model. "DecisionTransformerGPT2Model", # Building part of bigger (tested) model. "SegformerDecodeHead", # Building part of bigger (tested) model. "PLBartEncoder", # Building part of bigger (tested) model. "PLBartDecoder", # Building part of bigger (tested) model. "PLBartDecoderWrapper", # Building part of bigger (tested) model. "BigBirdPegasusEncoder", # Building part of bigger (tested) model. "BigBirdPegasusDecoder", # Building part of bigger (tested) model. "BigBirdPegasusDecoderWrapper", # Building part of bigger (tested) model. "DetrEncoder", # Building part of bigger (tested) model. "DetrDecoder", # Building part of bigger (tested) model. "DetrDecoderWrapper", # Building part of bigger (tested) model. "M2M100Encoder", # Building part of bigger (tested) model. "M2M100Decoder", # Building part of bigger (tested) model. "Speech2TextEncoder", # Building part of bigger (tested) model. "Speech2TextDecoder", # Building part of bigger (tested) model. "LEDEncoder", # Building part of bigger (tested) model. "LEDDecoder", # Building part of bigger (tested) model. "BartDecoderWrapper", # Building part of bigger (tested) model. "BartEncoder", # Building part of bigger (tested) model. "BertLMHeadModel", # Needs to be setup as decoder. "BlenderbotSmallEncoder", # Building part of bigger (tested) model. "BlenderbotSmallDecoderWrapper", # Building part of bigger (tested) model. "BlenderbotEncoder", # Building part of bigger (tested) model. "BlenderbotDecoderWrapper", # Building part of bigger (tested) model. "MBartEncoder", # Building part of bigger (tested) model. "MBartDecoderWrapper", # Building part of bigger (tested) model. "MegatronBertLMHeadModel", # Building part of bigger (tested) model. "MegatronBertEncoder", # Building part of bigger (tested) model. "MegatronBertDecoder", # Building part of bigger (tested) model. "MegatronBertDecoderWrapper", # Building part of bigger (tested) model. 
"PegasusEncoder", # Building part of bigger (tested) model. "PegasusDecoderWrapper", # Building part of bigger (tested) model. "DPREncoder", # Building part of bigger (tested) model. "ProphetNetDecoderWrapper", # Building part of bigger (tested) model. "RealmBertModel", # Building part of bigger (tested) model. "RealmReader", # Not regular model. "RealmScorer", # Not regular model. "RealmForOpenQA", # Not regular model. "ReformerForMaskedLM", # Needs to be setup as decoder. "Speech2Text2DecoderWrapper", # Building part of bigger (tested) model. "TFDPREncoder", # Building part of bigger (tested) model. "TFElectraMainLayer", # Building part of bigger (tested) model (should it be a TFModelMixin ?) "TFRobertaForMultipleChoice", # TODO: fix "TrOCRDecoderWrapper", # Building part of bigger (tested) model. "SeparableConv1D", # Building part of bigger (tested) model. "FlaxBartForCausalLM", # Building part of bigger (tested) model. "FlaxBertForCausalLM", # Building part of bigger (tested) model. Tested implicitly through FlaxRobertaForCausalLM. "OPTDecoderWrapper", ] # Update this list with test files that don't have a tester with a `all_model_classes` variable and which don't # trigger the common tests. TEST_FILES_WITH_NO_COMMON_TESTS = [ "models/decision_transformer/test_modeling_decision_transformer.py", "models/camembert/test_modeling_camembert.py", "models/mt5/test_modeling_flax_mt5.py", "models/mbart/test_modeling_mbart.py", "models/mt5/test_modeling_mt5.py", "models/pegasus/test_modeling_pegasus.py", "models/camembert/test_modeling_tf_camembert.py", "models/mt5/test_modeling_tf_mt5.py", "models/xlm_roberta/test_modeling_tf_xlm_roberta.py", "models/xlm_roberta/test_modeling_flax_xlm_roberta.py", "models/xlm_prophetnet/test_modeling_xlm_prophetnet.py", "models/xlm_roberta/test_modeling_xlm_roberta.py", "models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py", "models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py", "models/decision_transformer/test_modeling_decision_transformer.py", ] # Update this list for models that are not in any of the auto MODEL_XXX_MAPPING. Being in this list is an exception and # should **not** be the rule. 
IGNORE_NON_AUTO_CONFIGURED = PRIVATE_MODELS.copy() + [ # models to ignore for model xxx mapping "DPTForDepthEstimation", "DecisionTransformerGPT2Model", "GLPNForDepthEstimation", "ViltForQuestionAnswering", "ViltForImagesAndTextClassification", "ViltForImageAndTextRetrieval", "ViltForMaskedLM", "XGLMEncoder", "XGLMDecoder", "XGLMDecoderWrapper", "PerceiverForMultimodalAutoencoding", "PerceiverForOpticalFlow", "SegformerDecodeHead", "FlaxBeitForMaskedImageModeling", "PLBartEncoder", "PLBartDecoder", "PLBartDecoderWrapper", "BeitForMaskedImageModeling", "CLIPTextModel", "CLIPVisionModel", "TFCLIPTextModel", "TFCLIPVisionModel", "FlaxCLIPTextModel", "FlaxCLIPVisionModel", "FlaxWav2Vec2ForCTC", "DetrForSegmentation", "DPRReader", "FlaubertForQuestionAnswering", "FlavaImageCodebook", "FlavaTextModel", "FlavaImageModel", "FlavaMultimodalModel", "GPT2DoubleHeadsModel", "LukeForMaskedLM", "LukeForEntityClassification", "LukeForEntityPairClassification", "LukeForEntitySpanClassification", "OpenAIGPTDoubleHeadsModel", "RagModel", "RagSequenceForGeneration", "RagTokenForGeneration", "RealmEmbedder", "RealmForOpenQA", "RealmScorer", "RealmReader", "TFDPRReader", "TFGPT2DoubleHeadsModel", "TFOpenAIGPTDoubleHeadsModel", "TFRagModel", "TFRagSequenceForGeneration", "TFRagTokenForGeneration", "Wav2Vec2ForCTC", "HubertForCTC", "SEWForCTC", "SEWDForCTC", "XLMForQuestionAnswering", "XLNetForQuestionAnswering", "SeparableConv1D", "VisualBertForRegionToPhraseAlignment", "VisualBertForVisualReasoning", "VisualBertForQuestionAnswering", "VisualBertForMultipleChoice", "TFWav2Vec2ForCTC", "TFHubertForCTC", "MaskFormerForInstanceSegmentation", ] # Update this list for models that have multiple model types for the same # model doc MODEL_TYPE_TO_DOC_MAPPING = OrderedDict( [ ("data2vec-text", "data2vec"), ("data2vec-audio", "data2vec"), ("data2vec-vision", "data2vec"), ] ) # This is to make sure the transformers module imported is the one in the repo. spec = importlib.util.spec_from_file_location( "diffusers", os.path.join(PATH_TO_DIFFUSERS, "__init__.py"), submodule_search_locations=[PATH_TO_DIFFUSERS], ) diffusers = spec.loader.load_module() def check_model_list(): """Check the model list inside the transformers library.""" # Get the models from the directory structure of `src/diffusers/models/` models_dir = os.path.join(PATH_TO_DIFFUSERS, "models") _models = [] for model in os.listdir(models_dir): model_dir = os.path.join(models_dir, model) if os.path.isdir(model_dir) and "__init__.py" in os.listdir(model_dir): _models.append(model) # Get the models from the directory structure of `src/transformers/models/` models = [model for model in dir(diffusers.models) if not model.startswith("__")] missing_models = sorted(set(_models).difference(models)) if missing_models: raise Exception( f"The following models should be included in {models_dir}/__init__.py: {','.join(missing_models)}." ) # If some modeling modules should be ignored for all checks, they should be added in the nested list # _ignore_modules of this function. 
def get_model_modules(): """Get the model modules inside the transformers library.""" _ignore_modules = [ "modeling_auto", "modeling_encoder_decoder", "modeling_marian", "modeling_mmbt", "modeling_outputs", "modeling_retribert", "modeling_utils", "modeling_flax_auto", "modeling_flax_encoder_decoder", "modeling_flax_utils", "modeling_speech_encoder_decoder", "modeling_flax_speech_encoder_decoder", "modeling_flax_vision_encoder_decoder", "modeling_transfo_xl_utilities", "modeling_tf_auto", "modeling_tf_encoder_decoder", "modeling_tf_outputs", "modeling_tf_pytorch_utils", "modeling_tf_utils", "modeling_tf_transfo_xl_utilities", "modeling_tf_vision_encoder_decoder", "modeling_vision_encoder_decoder", ] modules = [] for model in dir(diffusers.models): # There are some magic dunder attributes in the dir, we ignore them if not model.startswith("__"): model_module = getattr(diffusers.models, model) for submodule in dir(model_module): if submodule.startswith("modeling") and submodule not in _ignore_modules: modeling_module = getattr(model_module, submodule) if inspect.ismodule(modeling_module): modules.append(modeling_module) return modules def get_models(module, include_pretrained=False): """Get the objects in module that are models.""" models = [] model_classes = (diffusers.ModelMixin, diffusers.TFModelMixin, diffusers.FlaxModelMixin) for attr_name in dir(module): if not include_pretrained and ("Pretrained" in attr_name or "PreTrained" in attr_name): continue attr = getattr(module, attr_name) if isinstance(attr, type) and issubclass(attr, model_classes) and attr.__module__ == module.__name__: models.append((attr_name, attr)) return models def is_a_private_model(model): """Returns True if the model should not be in the main init.""" if model in PRIVATE_MODELS: return True # Wrapper, Encoder and Decoder are all privates if model.endswith("Wrapper"): return True if model.endswith("Encoder"): return True if model.endswith("Decoder"): return True return False def check_models_are_in_init(): """Checks all models defined in the library are in the main init.""" models_not_in_init = [] dir_transformers = dir(diffusers) for module in get_model_modules(): models_not_in_init += [ model[0] for model in get_models(module, include_pretrained=True) if model[0] not in dir_transformers ] # Remove private models models_not_in_init = [model for model in models_not_in_init if not is_a_private_model(model)] if len(models_not_in_init) > 0: raise Exception(f"The following models should be in the main init: {','.join(models_not_in_init)}.") # If some test_modeling files should be ignored when checking models are all tested, they should be added in the # nested list _ignore_files of this function. def get_model_test_files(): """Get the model test files. The returned files should NOT contain the `tests` (i.e. `PATH_TO_TESTS` defined in this script). They will be considered as paths relative to `tests`. A caller has to use `os.path.join(PATH_TO_TESTS, ...)` to access the files. 
""" _ignore_files = [ "test_modeling_common", "test_modeling_encoder_decoder", "test_modeling_flax_encoder_decoder", "test_modeling_flax_speech_encoder_decoder", "test_modeling_marian", "test_modeling_tf_common", "test_modeling_tf_encoder_decoder", ] test_files = [] # Check both `PATH_TO_TESTS` and `PATH_TO_TESTS/models` model_test_root = os.path.join(PATH_TO_TESTS, "models") model_test_dirs = [] for x in os.listdir(model_test_root): x = os.path.join(model_test_root, x) if os.path.isdir(x): model_test_dirs.append(x) for target_dir in [PATH_TO_TESTS] + model_test_dirs: for file_or_dir in os.listdir(target_dir): path = os.path.join(target_dir, file_or_dir) if os.path.isfile(path): filename = os.path.split(path)[-1] if "test_modeling" in filename and os.path.splitext(filename)[0] not in _ignore_files: file = os.path.join(*path.split(os.sep)[1:]) test_files.append(file) return test_files # This is a bit hacky but I didn't find a way to import the test_file as a module and read inside the tester class # for the all_model_classes variable. def find_tested_models(test_file): """Parse the content of test_file to detect what's in all_model_classes""" # This is a bit hacky but I didn't find a way to import the test_file as a module and read inside the class with open(os.path.join(PATH_TO_TESTS, test_file), "r", encoding="utf-8", newline="\n") as f: content = f.read() all_models = re.findall(r"all_model_classes\s+=\s+\(\s*\(([^\)]*)\)", content) # Check with one less parenthesis as well all_models += re.findall(r"all_model_classes\s+=\s+\(([^\)]*)\)", content) if len(all_models) > 0: model_tested = [] for entry in all_models: for line in entry.split(","): name = line.strip() if len(name) > 0: model_tested.append(name) return model_tested def check_models_are_tested(module, test_file): """Check models defined in module are tested in test_file.""" # XxxModelMixin are not tested defined_models = get_models(module) tested_models = find_tested_models(test_file) if tested_models is None: if test_file.replace(os.path.sep, "/") in TEST_FILES_WITH_NO_COMMON_TESTS: return return [ f"{test_file} should define `all_model_classes` to apply common tests to the models it tests. " + "If this intentional, add the test filename to `TEST_FILES_WITH_NO_COMMON_TESTS` in the file " + "`utils/check_repo.py`." ] failures = [] for model_name, _ in defined_models: if model_name not in tested_models and model_name not in IGNORE_NON_TESTED: failures.append( f"{model_name} is defined in {module.__name__} but is not tested in " + f"{os.path.join(PATH_TO_TESTS, test_file)}. Add it to the all_model_classes in that file." + "If common tests should not applied to that model, add its name to `IGNORE_NON_TESTED`" + "in the file `utils/check_repo.py`." 
) return failures def check_all_models_are_tested(): """Check all models are properly tested.""" modules = get_model_modules() test_files = get_model_test_files() failures = [] for module in modules: test_file = [file for file in test_files if f"test_{module.__name__.split('.')[-1]}.py" in file] if len(test_file) == 0: failures.append(f"{module.__name__} does not have its corresponding test file {test_file}.") elif len(test_file) > 1: failures.append(f"{module.__name__} has several test files: {test_file}.") else: test_file = test_file[0] new_failures = check_models_are_tested(module, test_file) if new_failures is not None: failures += new_failures if len(failures) > 0: raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) def get_all_auto_configured_models(): """Return the list of all models in at least one auto class.""" result = set() # To avoid duplicates we concatenate all model classes in a set. if is_torch_available(): for attr_name in dir(diffusers.models.auto.modeling_auto): if attr_name.startswith("MODEL_") and attr_name.endswith("MAPPING_NAMES"): result = result | set(get_values(getattr(diffusers.models.auto.modeling_auto, attr_name))) if is_flax_available(): for attr_name in dir(diffusers.models.auto.modeling_flax_auto): if attr_name.startswith("FLAX_MODEL_") and attr_name.endswith("MAPPING_NAMES"): result = result | set(get_values(getattr(diffusers.models.auto.modeling_flax_auto, attr_name))) return list(result) def ignore_unautoclassed(model_name): """Rules to determine if `name` should be in an auto class.""" # Special white list if model_name in IGNORE_NON_AUTO_CONFIGURED: return True # Encoder and Decoder should be ignored if "Encoder" in model_name or "Decoder" in model_name: return True return False def check_models_are_auto_configured(module, all_auto_models): """Check models defined in module are each in an auto class.""" defined_models = get_models(module) failures = [] for model_name, _ in defined_models: if model_name not in all_auto_models and not ignore_unautoclassed(model_name): failures.append( f"{model_name} is defined in {module.__name__} but is not present in any of the auto mapping. " "If that is intended behavior, add its name to `IGNORE_NON_AUTO_CONFIGURED` in the file " "`utils/check_repo.py`." ) return failures def check_all_models_are_auto_configured(): """Check all models are each in an auto class.""" missing_backends = [] if not is_torch_available(): missing_backends.append("PyTorch") if not is_flax_available(): missing_backends.append("Flax") if len(missing_backends) > 0: missing = ", ".join(missing_backends) if os.getenv("TRANSFORMERS_IS_CI", "").upper() in ENV_VARS_TRUE_VALUES: raise Exception( "Full quality checks require all backends to be installed (with `pip install -e .[dev]` in the " f"Transformers repo, the following are missing: {missing}." ) else: warnings.warn( "Full quality checks require all backends to be installed (with `pip install -e .[dev]` in the " f"Transformers repo, the following are missing: {missing}. While it's probably fine as long as you " "didn't make any change in one of those backends modeling files, you should probably execute the " "command above to be on the safe side." 
) modules = get_model_modules() all_auto_models = get_all_auto_configured_models() failures = [] for module in modules: new_failures = check_models_are_auto_configured(module, all_auto_models) if new_failures is not None: failures += new_failures if len(failures) > 0: raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) _re_decorator = re.compile(r"^\s*@(\S+)\s+$") def check_decorator_order(filename): """Check that in the test file `filename` the slow decorator is always last.""" with open(filename, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() decorator_before = None errors = [] for i, line in enumerate(lines): search = _re_decorator.search(line) if search is not None: decorator_name = search.groups()[0] if decorator_before is not None and decorator_name.startswith("parameterized"): errors.append(i) decorator_before = decorator_name elif decorator_before is not None: decorator_before = None return errors def check_all_decorator_order(): """Check that in all test files, the slow decorator is always last.""" errors = [] for fname in os.listdir(PATH_TO_TESTS): if fname.endswith(".py"): filename = os.path.join(PATH_TO_TESTS, fname) new_errors = check_decorator_order(filename) errors += [f"- {filename}, line {i}" for i in new_errors] if len(errors) > 0: msg = "\n".join(errors) raise ValueError( "The parameterized decorator (and its variants) should always be first, but this is not the case in the" f" following files:\n{msg}" ) def find_all_documented_objects(): """Parse the content of all doc files to detect which classes and functions it documents""" documented_obj = [] for doc_file in Path(PATH_TO_DOC).glob("**/*.rst"): with open(doc_file, "r", encoding="utf-8", newline="\n") as f: content = f.read() raw_doc_objs = re.findall(r"(?:autoclass|autofunction):: transformers.(\S+)\s+", content) documented_obj += [obj.split(".")[-1] for obj in raw_doc_objs] for doc_file in Path(PATH_TO_DOC).glob("**/*.md"): with open(doc_file, "r", encoding="utf-8", newline="\n") as f: content = f.read() raw_doc_objs = re.findall(r"\[\[autodoc\]\]\s+(\S+)\s+", content) documented_obj += [obj.split(".")[-1] for obj in raw_doc_objs] return documented_obj # One good reason for not being documented is to be deprecated. Put in this list deprecated objects. DEPRECATED_OBJECTS = [ "AutoModelWithLMHead", "BartPretrainedModel", "DataCollator", "DataCollatorForSOP", "GlueDataset", "GlueDataTrainingArguments", "LineByLineTextDataset", "LineByLineWithRefDataset", "LineByLineWithSOPTextDataset", "PretrainedBartModel", "PretrainedFSMTModel", "SingleSentenceClassificationProcessor", "SquadDataTrainingArguments", "SquadDataset", "SquadExample", "SquadFeatures", "SquadV1Processor", "SquadV2Processor", "TFAutoModelWithLMHead", "TFBartPretrainedModel", "TextDataset", "TextDatasetForNextSentencePrediction", "Wav2Vec2ForMaskedLM", "Wav2Vec2Tokenizer", "glue_compute_metrics", "glue_convert_examples_to_features", "glue_output_modes", "glue_processors", "glue_tasks_num_labels", "squad_convert_examples_to_features", "xnli_compute_metrics", "xnli_output_modes", "xnli_processors", "xnli_tasks_num_labels", "TFTrainer", "TFTrainingArguments", ] # Exceptionally, some objects should not be documented after all rules passed. # ONLY PUT SOMETHING IN THIS LIST AS A LAST RESORT! UNDOCUMENTED_OBJECTS = [ "AddedToken", # This is a tokenizers class. "BasicTokenizer", # Internal, should never have been in the main init. "CharacterTokenizer", # Internal, should never have been in the main init. 
"DPRPretrainedReader", # Like an Encoder. "DummyObject", # Just picked by mistake sometimes. "MecabTokenizer", # Internal, should never have been in the main init. "ModelCard", # Internal type. "SqueezeBertModule", # Internal building block (should have been called SqueezeBertLayer) "TFDPRPretrainedReader", # Like an Encoder. "TransfoXLCorpus", # Internal type. "WordpieceTokenizer", # Internal, should never have been in the main init. "absl", # External module "add_end_docstrings", # Internal, should never have been in the main init. "add_start_docstrings", # Internal, should never have been in the main init. "cached_path", # Internal used for downloading models. "convert_tf_weight_name_to_pt_weight_name", # Internal used to convert model weights "logger", # Internal logger "logging", # External module "requires_backends", # Internal function ] # This list should be empty. Objects in it should get their own doc page. SHOULD_HAVE_THEIR_OWN_PAGE = [ # Benchmarks "PyTorchBenchmark", "PyTorchBenchmarkArguments", "TensorFlowBenchmark", "TensorFlowBenchmarkArguments", ] def ignore_undocumented(name): """Rules to determine if `name` should be undocumented.""" # NOT DOCUMENTED ON PURPOSE. # Constants uppercase are not documented. if name.isupper(): return True # ModelMixins / Encoders / Decoders / Layers / Embeddings / Attention are not documented. if ( name.endswith("ModelMixin") or name.endswith("Decoder") or name.endswith("Encoder") or name.endswith("Layer") or name.endswith("Embeddings") or name.endswith("Attention") ): return True # Submodules are not documented. if os.path.isdir(os.path.join(PATH_TO_DIFFUSERS, name)) or os.path.isfile( os.path.join(PATH_TO_DIFFUSERS, f"{name}.py") ): return True # All load functions are not documented. if name.startswith("load_tf") or name.startswith("load_pytorch"): return True # is_xxx_available functions are not documented. if name.startswith("is_") and name.endswith("_available"): return True # Deprecated objects are not documented. if name in DEPRECATED_OBJECTS or name in UNDOCUMENTED_OBJECTS: return True # MMBT model does not really work. if name.startswith("MMBT"): return True if name in SHOULD_HAVE_THEIR_OWN_PAGE: return True return False def check_all_objects_are_documented(): """Check all models are properly documented.""" documented_objs = find_all_documented_objects() modules = diffusers._modules objects = [c for c in dir(diffusers) if c not in modules and not c.startswith("_")] undocumented_objs = [c for c in objects if c not in documented_objs and not ignore_undocumented(c)] if len(undocumented_objs) > 0: raise Exception( "The following objects are in the public init so should be documented:\n - " + "\n - ".join(undocumented_objs) ) check_docstrings_are_in_md() check_model_type_doc_match() def check_model_type_doc_match(): """Check all doc pages have a corresponding model type.""" model_doc_folder = Path(PATH_TO_DOC) / "model_doc" model_docs = [m.stem for m in model_doc_folder.glob("*.md")] model_types = list(diffusers.models.auto.configuration_auto.MODEL_NAMES_MAPPING.keys()) model_types = [MODEL_TYPE_TO_DOC_MAPPING[m] if m in MODEL_TYPE_TO_DOC_MAPPING else m for m in model_types] errors = [] for m in model_docs: if m not in model_types and m != "auto": close_matches = get_close_matches(m, model_types) error_message = f"{m} is not a proper model identifier." if len(close_matches) > 0: close_matches = "/".join(close_matches) error_message += f" Did you mean {close_matches}?" 
errors.append(error_message) if len(errors) > 0: raise ValueError( "Some model doc pages do not match any existing model type:\n" + "\n".join(errors) + "\nYou can add any missing model type to the `MODEL_NAMES_MAPPING` constant in " "models/auto/configuration_auto.py." ) # Re pattern to catch :obj:`xx`, :class:`xx`, :func:`xx` or :meth:`xx`. _re_rst_special_words = re.compile(r":(?:obj|func|class|meth):`([^`]+)`") # Re pattern to catch things between double backquotes. _re_double_backquotes = re.compile(r"(^|[^`])``([^`]+)``([^`]|$)") # Re pattern to catch example introduction. _re_rst_example = re.compile(r"^\s*Example.*::\s*$", flags=re.MULTILINE) def is_rst_docstring(docstring): """ Returns `True` if `docstring` is written in rst. """ if _re_rst_special_words.search(docstring) is not None: return True if _re_double_backquotes.search(docstring) is not None: return True if _re_rst_example.search(docstring) is not None: return True return False def check_docstrings_are_in_md(): """Check all docstrings are in md""" files_with_rst = [] for file in Path(PATH_TO_DIFFUSERS).glob("**/*.py"): with open(file, "r") as f: code = f.read() docstrings = code.split('"""') for idx, docstring in enumerate(docstrings): if idx % 2 == 0 or not is_rst_docstring(docstring): continue files_with_rst.append(file) break if len(files_with_rst) > 0: raise ValueError( "The following files have docstrings written in rst:\n" + "\n".join([f"- {f}" for f in files_with_rst]) + "\nTo fix this run `doc-builder convert path_to_py_file` after installing `doc-builder`\n" "(`pip install git+https://github.com/huggingface/doc-builder`)" ) def check_repo_quality(): """Check all models are properly tested and documented.""" print("Checking all models are included.") check_model_list() print("Checking all models are public.") check_models_are_in_init() print("Checking all models are properly tested.") check_all_decorator_order() check_all_models_are_tested() print("Checking all objects are properly documented.") check_all_objects_are_documented() print("Checking all models are in at least one auto class.") check_all_models_are_auto_configured() if __name__ == "__main__": check_repo_quality()
diffusers/utils/check_repo.py/0
{ "file_path": "diffusers/utils/check_repo.py", "repo_id": "diffusers", "token_count": 12370 }
254
# Hugging Face Diffusion Models Course

In this free course, you will:
- 👩‍🎓 Study the theory behind diffusion models
- 🧨 Learn how to generate images and audio with the popular 🤗 Diffusers library
- 🏋️‍♂️ Train your own diffusion models from scratch
- 📻 Fine-tune existing diffusion models on new datasets
- 🗺 Explore conditional generation and guidance
- 🧑‍🔬 Create your own custom diffusion model pipelines

## Prerequisites

This course requires a good level of Python and a grounding in deep learning and PyTorch. If that's not the case yet, you can check out these free resources:
- Python: https://www.udacity.com/course/introduction-to-python--ud1110
- Intro to Deep Learning with PyTorch: https://www.udacity.com/course/deep-learning-pytorch--ud188
- PyTorch in 60min: https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html

To upload your models to the Hugging Face Hub, you'll need an account. You can create one for free at the following address: [https://huggingface.co/join](https://huggingface.co/join).

## What is the syllabus?

The course consists of four units. Each unit is made up of a theory section, which also lists resources/papers, and two *notebooks*. More specifically, we have:
- **Unit 1: Introduction to diffusion models.** An introduction to 🤗 Diffusers and an implementation from scratch
- **Unit 2: Finetuning and guidance.** Finetuning a diffusion model on new data and adding guidance
- **Unit 3: Stable Diffusion.** Exploring a powerful text-conditioned latent diffusion model
- **Unit 4: Doing more with diffusion.** Advanced techniques for going further with diffusion

## Who are we?

About the authors:

[**Jonathan Whitaker**](https://huggingface.co/johnowhitaker) is a Data Scientist/AI Researcher doing R&D with [answer.ai](https://www.answer.ai/). He likes teaching and making courses. His current focus is on generative AI, flitting between several modalities. You can find more info at: [johnowhitaker.dev](https://johnowhitaker.dev/).

[**Lewis Tunstall**](https://huggingface.co/lewtun) is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. He is also a co-author of the O’Reilly book [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098136789/).

## FAQ

Here are some answers to frequently asked questions:

- **Does taking this course lead to a certification?**
Currently we do not have any certification for this course. However, we are working on a certification program for the Hugging Face ecosystem -- stay tuned!

- **How much time should I spend on this course?**
Each chapter in this course is designed to be completed in 1 week, with approximately 6-8 hours of work per week. However, you can take as much time as you need to complete the course.

- **Where can I ask a question if I have one?**
If you have a question about any section of the course, just click on the "*Ask a question*" banner at the top of the page to be automatically redirected to the right section of the [Hugging Face Discord](https://discord.com/invite/JfAtkvEtRb) to ask your question in the channel `#diffusion-models-class`.
- **Where can I get the code for the course?** For each section, click on the banner at the top of the page to run the code: <img src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/notebook-buttons.png" alt="Link to the Hugging Face course notebooks" width="75%"> - **How can I contribute to the course?** There are many ways to contribute to the course! If you find a typo or a bug, please open an issue on the [`diffusion-models-class`](https://github.com/huggingface/diffusion-models-class) repo. If you would like to help translate the course into your native language, check out the instructions [here](https://github.com/huggingface/diffusion-models-class#translating-the-course-into-your-language). - **Can I reuse this course?** Of course! The course is released under the permissive [Apache 2 license](https://www.apache.org/licenses/LICENSE-2.0.html). This means that you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. If you would like to cite the course, please use the following BibTeX: ``` @misc{huggingfacecourse, author = {Hugging Face}, title = {The Hugging Face Diffusion Models Course, 2022}, howpublished = "\url{https://huggingface.co/course}", year = {2022}, note = "[Online; accessed <today>]" } ``` ## Let's get started! Are you ready to get started? Then go to the first unit to start the course.
diffusion-models-class/unit0/README.md/0
{ "file_path": "diffusion-models-class/unit0/README.md", "repo_id": "diffusion-models-class", "token_count": 1315 }
255
# Distil-Whisper [[Paper]](https://arxiv.org/abs/2311.00430) [[Models]](https://huggingface.co/collections/distil-whisper/distil-whisper-models-65411987e6727569748d2eb6) [[Colab]](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/Distil_Whisper_Benchmark.ipynb) [[Training Code]](training) Distil-Whisper is a distilled version of Whisper that is **6 times faster**, 49% smaller, and performs **within 1% word error rate (WER)** on out-of-distribution evaluation sets: | Model | Params / M | Rel. Latency ↑ | Short-Form WER ↓ | Long-Form WER ↓ | |----------------------------------------------------------------------------|------------|----------------|------------------|-----------------| | [large-v3](https://huggingface.co/openai/whisper-large-v3) | 1550 | 1.0 | **8.4** | 11.0 | | | | | | | | [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) | 756 | 6.3 | 9.7 | **10.8** | | [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) | 756 | 5.8 | 10.1 | 11.6 | | [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) | 394 | **6.8** | 11.1 | 12.4 | | [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) | **166** | 5.6 | 12.1 | 12.8 | For most applications, we recommend the latest [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) checkpoint, since it is the most performant distilled checkpoint and compatible across all Whisper libraries. The only exception is resource-constrained applications with very little memory, such as on-device or mobile applications, where the [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) is a great choice, since it is only 166M parameters and performs within 4% WER of Whisper large-v3. **Note:** Distil-Whisper is currently only available for English speech recognition. We are working with the community to distill Whisper on other languages. If you are interested in distilling Whisper in your language, check out the provided [training code](training). We will soon update the repository with multilingual checkpoints when ready! ## 1. Usage Distil-Whisper is supported in Hugging Face 🤗 Transformers from version 4.35 onwards. To run the model, first install the latest version of the Transformers library. For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub: ```bash pip install --upgrade pip pip install --upgrade transformers accelerate datasets[audio] ``` ### Short-Form Transcription Short-form transcription is the process of transcribing audio samples that are less than 30-seconds long, which is the maximum receptive field of the Whisper models. This means the entire audio clip can be processed in one go without the need for chunking. First, we load Distil-Whisper via the convenient [`AutoModelForSpeechSeq2Seq`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForSpeechSeq2Seq) and [`AutoProcessor`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoProcessor) classes. We load the model in `float16` precision and make sure that loading time takes as little time as possible by passing `low_cpu_mem_usage=True`. 
In addition, we want to make sure that the model is loaded in [`safetensors`](https://github.com/huggingface/safetensors) format by passing `use_safetensors=True`: ```python import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "distil-whisper/distil-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) ``` The model and processor can then be passed to the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline). Note that if you would like to have more control over the generation process, you can directly make use of model + processor API as shown below. ```python pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, torch_dtype=torch_dtype, device=device, ) ``` Next, we load an example short-form audio from the LibriSpeech corpus: ```python from datasets import load_dataset dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = dataset[0]["audio"] ``` Finally, we can call the pipeline to transcribe the audio: ```python result = pipe(sample) print(result["text"]) ``` To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline: ```python result = pipe("audio.mp3") print(result["text"]) ``` For more information on how to customize the automatic speech recognition pipeline, please refer to the ASR pipeline [docs](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline). We also provide an end-to-end [Google Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/Distil_Whisper_Benchmark.ipynb) that benchmarks Whisper against Distil-Whisper. <details> <summary> For more control over the generation parameters, use the model + processor API directly: </summary> Ad-hoc generation arguments can be passed to `model.generate`, including `num_beams` for beam-search, `return_timestamps` for segment-level timestamps, and `prompt_ids` for prompting. See the [docstrings](https://huggingface.co/docs/transformers/en/model_doc/whisper#transformers.WhisperForConditionalGeneration.generate) for more details. 
```python import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor from datasets import Audio, load_dataset device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "distil-whisper/distil-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate)) sample = dataset[0]["audio"] input_features = processor( sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt" ).input_features input_features = input_features.to(device, dtype=torch_dtype) gen_kwargs = { "max_new_tokens": 128, "num_beams": 1, "return_timestamps": False, } pred_ids = model.generate(input_features, **gen_kwargs) pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=gen_kwargs["return_timestamps"]) print(pred_text) ``` </details> ### Sequential Long-Form The latest [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) checkpoint is specifically designed to be compatible with OpenAI's sequential long-form transcription algorithm. This algorithm uses a sliding window for buffered inference of long audio files (> 30-seconds), and returns more accurate transcriptions compared to the [chunked long-form algorithm](#chunked-long-form). The sequential long-form algorithm should be used in either of the following scenarios: 1. Transcription accuracy is the most important factor, and latency is less of a consideration 2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate If you are transcribing single long audio files and latency is the most important factor, you should use the chunked algorithm described [below](#chunked-long-form). For a detailed explanation of the different algorithms, refer to Sections 5 of the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf). We start by loading the model and processor as before: ```python import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "distil-whisper/distil-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) ``` The model and processor can then be passed to the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline). Note that if you would like to have more control over the generation process, you can directly make use of `model.generate(...)` API as shown below. ```python pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, torch_dtype=torch_dtype, device=device, ) ``` Next, we load a long-form audio sample. 
Here, we use an example of concatenated samples from the LibriSpeech corpus: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation") sample = dataset[0]["audio"] ``` Finally, we can call the pipeline to transcribe the audio: ```python result = pipe(sample) print(result["text"]) ``` To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline: ```python result = pipe("audio.mp3") print(result["text"]) ``` <details> <summary> For more control over the generation parameters, use the model + processor API directly: </summary> ```python import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor from datasets import Audio, load_dataset device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "distil-whisper/distil-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate)) sample = dataset[0]["audio"] inputs = processor( sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt", truncation=False, padding="longest", return_attention_mask=True, ) inputs = inputs.to(device, dtype=torch_dtype) gen_kwargs = { "max_new_tokens": 448, "num_beams": 1, "condition_on_prev_tokens": False, "compression_ratio_threshold": 1.35, # zlib compression ratio threshold (in token space) "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0), "logprob_threshold": -1.0, "no_speech_threshold": 0.6, "return_timestamps": True, } pred_ids = model.generate(**inputs, **gen_kwargs) pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=False) print(pred_text) ``` </details> ### Chunked Long-Form distil-large-v3 remains compatible with the Transformers chunked long-form algorithm. This algorithm should be used when a single large audio file is being transcribed and the fastest possible inference is required. In such circumstances, the chunked algorithm is up to 9x faster than OpenAI's sequential long-form implementation (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf)). We can load the model and processor as before: ```python import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "distil-whisper/distil-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) ``` To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For distil-large-v3, a chunk length of 25-seconds is optimal. 
To activate batching, pass the argument `batch_size`: ```python pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, chunk_length_s=25, batch_size=16, torch_dtype=torch_dtype, device=device, ) ``` The argument `max_new_tokens` controls the maximum number of generated tokens *per-chunk*. In the typical speech setting, we have no more than 3 words spoken per-second. Therefore, for a 30-second input, we have at most 90 words (approx 128 tokens). We set the maximum number of generated tokens per-chunk to 128 to truncate any possible hallucinations that occur at the end of the segment. These tokens get removed at the chunk borders using the long-form chunking transcription algorithm, so it is more efficient to truncate them directly during generation to avoid redundant generation steps in the decoder. Now, let's load a long-form audio sample. Here, we use an example of concatenated samples from the LibriSpeech corpus: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation") sample = dataset[0]["audio"] ``` Finally, we can call the pipeline to transcribe the audio: ```python result = pipe(sample) print(result["text"]) ``` For more information on how to customize the automatic speech recognition pipeline, please refer to the ASR pipeline [docs](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline). ### Speculative Decoding Distil-Whisper can be used as an assistant model to Whisper for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding). Speculative decoding mathematically ensures the exact same outputs as Whisper are obtained while being 2 times faster. This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed. For speculative decoding, we need to load both the teacher: [`openai/whisper-large-v3`](https://huggingface.co/openai/whisper-large-v3). As well as the assistant (*a.k.a* student) [`distil-whisper/distil-large-v3`](https://huggingface.co/distil-whisper/distil-large-v3). Let's start by loading the teacher model and processor. We do this in much the same way we loaded the Distil-Whisper model in the previous examples: ```python from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor import torch device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "openai/whisper-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) ``` Now let's load the assistant. Since Distil-Whisper shares exactly same encoder as the teacher model, we only need to load the 2-layer decoder as a "Decoder-only" model: ```python from transformers import AutoModelForCausalLM assistant_model_id = "distil-whisper/distil-large-v2" assistant_model = AutoModelForCausalLM.from_pretrained( assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) assistant_model.to(device) ``` The assistant model shares the same processor as the teacher, so there's no need to load a student processor. We can now pass the assistant model to the pipeline to be used for speculative decoding. 
We pass it as a `generate_kwarg` with the key [`"assistant_model"`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationMixin.generate.assistant_model) so that speculative decoding is enabled: ```python pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, generate_kwargs={"assistant_model": assistant_model}, torch_dtype=torch_dtype, device=device, ) ``` As before, we can pass any sample to the pipeline to be transcribed: ```python from datasets import load_dataset dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = dataset[0]["audio"] result = pipe(sample) print(result["text"]) ``` **Note:** speculative decoding should be on average 2x faster than using "only" Whisper large-v2 at a mere 8% increase in VRAM memory usage while mathematically ensuring the same results. This makes it the perfect replacement for Whisper large-v2 in existing speech recognition pipelines. For more details on speculative decoding, refer to the following resources: * [Speculative decoding for 2x faster Whisper inference](https://huggingface.co/blog/whisper-speculative-decoding) blog post by Sanchit Gandhi * [Assisted Generation: a new direction toward low-latency text generation](https://huggingface.co/blog/assisted-generation) blog post by Joao Gante * [Fast Inference from Transformers via Speculative Decoding](https://arxiv.org/abs/2211.17192) paper by Leviathan et. al. ### Additional Speed & Memory Improvements You can apply additional speed and memory improvements to Distil-Whisper which we cover in the following. #### Flash Attention We recommend using [Flash Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU allows for it. To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention): ``` pip install flash-attn --no-build-isolation ``` You can then pass `use_flash_attention_2=True` to `from_pretrained` to enable Flash Attention 2: ```diff - model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True) + model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, use_flash_attention_2=True) ``` #### Torch Scale-Product-Attention (SDPA) If your GPU does not support Flash Attention, we recommend making use of [BetterTransformers](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#bettertransformer). To do so, you first need to install optimum: ``` pip install --upgrade optimum ``` And then convert your model to a "BetterTransformer" model before using it: ```diff model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True) + model = model.to_bettertransformer() ``` ### Exporting to Other Libraries Distil-Whisper has support in the following libraries with the original "sequential" long-form transcription algorithm. 
Click the links in the table to see the relevant code-snippets for each: | Library | distil-small.en | distil-medium.en | distil-large-v2 | |-----------------|-------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | OpenAI Whisper | [link](https://huggingface.co/distil-whisper/distil-small.en#running-whisper-in-openai-whisper) | [link](https://huggingface.co/distil-whisper/distil-medium.en#running-whisper-in-openai-whisper) | [link](https://huggingface.co/distil-whisper/distil-large-v2#running-whisper-in-openai-whisper) | | Whisper cpp | [link](https://huggingface.co/distil-whisper/distil-small.en#whispercpp) | [link](https://huggingface.co/distil-whisper/distil-medium.en#whispercpp) | [link](https://huggingface.co/distil-whisper/distil-large-v2#whispercpp) | | Transformers js | [link](https://huggingface.co/distil-whisper/distil-small.en#transformersjs) | [link](https://huggingface.co/distil-whisper/distil-medium.en#transformersjs) | [link](https://huggingface.co/distil-whisper/distil-large-v2#transformersjs) | | Candle (Rust) | [link](https://huggingface.co/distil-whisper/distil-small.en#candle) | [link](https://huggingface.co/distil-whisper/distil-medium.en#candle) | [link](https://huggingface.co/distil-whisper/distil-large-v2#candle) | Updates will be posted here with the integration of the "chunked" long-form transcription algorithm into the respective libraries. For the 🤗 Transformers code-examples, refer to the sections [Short-Form](#short-form-transcription) and [Long-Form](#long-form-transcription) Transcription. ## 2. Why use Distil-Whisper? ⁉️ Distil-Whisper is designed to be a drop-in replacement for Whisper on English speech recognition. Here are 5 reasons for making the switch to Distil-Whisper: 1. **Faster inference:** 6 times faster inference speed, while performing to within 1% WER of Whisper on out-of-distribution audio: <p align="center"> <img src="https://huggingface.co/datasets/distil-whisper/figures/resolve/main/main_table.png?raw=true" width="600"/> </p> 2. **Robustness to noise:** demonstrated by strong WER performance at low signal-to-noise ratios: <p align="center"> <img src="https://huggingface.co/datasets/distil-whisper/figures/resolve/main/noise.png?raw=true" width="600"/> </p> 3. **Robustness to hallucinations:** quantified by 1.3 times fewer repeated 5-gram word duplicates (5-Dup.) and 2.1% lower insertion error rate (IER) than Whisper: <p align="center"> <img src="https://huggingface.co/datasets/distil-whisper/figures/resolve/main/hallucination.png?raw=true" width="600"/> </p> 4. **Designed for speculative decoding:** Distil-Whisper can be used as an assistant model to Whisper, giving 2 times faster inference speed while mathematically ensuring the same outputs as the Whisper model. 5. **Permissive license:** Distil-Whisper is [MIT licensed](./LICENSE), meaning it can be used for commercial applications. ## 3. Approach ✍️ To distill Whisper, we copy the entire encoder module and freeze it during training. We copy only two decoder layers, which are initialised from the first and last decoder layers from Whisper. 
All other decoder layers from Whisper are discarded: <p align="center"> <img src="https://huggingface.co/datasets/distil-whisper/figures/resolve/main/architecture.png?raw=true" width="600"/> </p> Distil-Whisper is trained on a *knowledge distillation* objective. Specifically, it is trained to minimise the KL divergence between the distilled model and the Whisper model, as well as the cross-entropy loss on pseudo-labelled audio data. We train Distil-Whisper on a total of 22k hours of pseudo-labelled audio data, spanning 10 domains with over 18k speakers: <p align="center"> <img src="https://huggingface.co/datasets/distil-whisper/figures/resolve/main/datasets.png?raw=true" width="600"/> </p> This diverse audio dataset is paramount to ensuring robustness of Distil-Whisper to different datasets and domains. In addition, we use a WER filter to discard pseudo-labels where Whisper mis-transcribes or hallucinates. This greatly improves WER performance of the downstream distilled model. For full details on the distillation set-up and evaluation results, refer to the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430). ## 4. Training Code Training code to reproduce Distil-Whisper can be found in the directory [training](training). This code has been adapted be general enough to distill Whisper for multilingual speech recognition, facilitating anyone in the community to distill Whisper on their choice of language. ## 5. Acknowledgements * OpenAI for the Whisper [model](https://huggingface.co/openai/whisper-large-v3) and [original codebase](https://github.com/openai/whisper) * Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration * Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/) program for Cloud TPU v4s ## 6. Citation If you use this model, please consider citing the Distil-Whisper paper: ``` @misc{gandhi2023distilwhisper, title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling}, author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush}, year={2023}, eprint={2311.00430}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` And also the Whisper paper: ``` @misc{radford2022robust, title={Robust Speech Recognition via Large-Scale Weak Supervision}, author={Alec Radford and Jong Wook Kim and Tao Xu and Greg Brockman and Christine McLeavey and Ilya Sutskever}, year={2022}, eprint={2212.04356}, archivePrefix={arXiv}, primaryClass={eess.AS} } ```
distil-whisper/README.md/0
{ "file_path": "distil-whisper/README.md", "repo_id": "distil-whisper", "token_count": 9088 }
256
#!/bin/bash accelerate launch --multi_gpu --mixed_precision=bf16 --num_processes=2 run_distillation_pt.py \ --model_name_or_path distil-whisper/large-32-2 \ --teacher_model_name_or_path openai/whisper-large-v2 \ --train_dataset_config_name all+all+all+l \ --train_dataset_samples 2.9+10.4+14.9+226.6 \ --train_dataset_name librispeech_asr+librispeech_asr+librispeech_asr+gigaspeech-l \ --train_split_name train.clean.100+train.clean.360+train.other.500+train \ --eval_dataset_name librispeech_asr+librispeech_asr+gigaspeech-l \ --eval_dataset_config_name all+all+l \ --eval_split_name validation.clean+validation.other+validation \ --eval_text_column_name text+text+text \ --eval_steps 2500 \ --save_steps 2500 \ --warmup_steps 50 \ --learning_rate 0.0001 \ --lr_scheduler_type constant_with_warmup \ --logging_steps 25 \ --save_total_limit 1 \ --max_steps 10000 \ --wer_threshold 10 \ --per_device_train_batch_size 64 \ --gradient_accumulation_steps 2 \ --per_device_eval_batch_size 64 \ --dataloader_num_workers 16 \ --cache_dir /fsx/sanchit/cache \ --dataset_cache_dir /fsx/sanchit/cache \ --dtype bfloat16 \ --output_dir ./ \ --wandb_project distil-whisper-training \ --do_train \ --do_eval \ --gradient_checkpointing \ --overwrite_output_dir \ --predict_with_generate \ --freeze_encoder \ --streaming
distil-whisper/training/flax/distillation_scripts/run_32_2_pt.sh/0
{ "file_path": "distil-whisper/training/flax/distillation_scripts/run_32_2_pt.sh", "repo_id": "distil-whisper", "token_count": 633 }
257
#!/usr/bin/env bash TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD=10000000000 python run_distillation.py \ --model_name_or_path "distil-whisper/large-32-2" \ --teacher_model_name_or_path "openai/whisper-large-v2" \ --dataset_name "distil-whisper/librispeech_asr" \ --dataset_config_name "all" \ --train_split_name "train.clean.100+train.clean.360+train.other.500" \ --eval_split_name "validation.clean" \ --text_column_name "whisper_transcript" \ --cache_dir "/home/sanchitgandhi/cache" \ --dataset_cache_dir "/home/sanchitgandhi/cache" \ --output_dir "./" \ --wandb_name "large-32-2-ts-librispeech" \ --wandb_dir "/home/sanchitgandhi/.cache" \ --wandb_project "distil-whisper-librispeech" \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 16 \ --dtype "bfloat16" \ --learning_rate 1e-4 \ --warmup_steps 500 \ --temperature 2.0 \ --do_train \ --do_eval \ --num_train_epochs 10 \ --preprocessing_num_workers 16 \ --dataloader_num_workers 8 \ --logging_steps 25 \ --use_scan \ --gradient_checkpointing \ --overwrite_output_dir \ --predict_with_generate \ --push_to_hub
distil-whisper/training/flax/distillation_scripts/run_librispeech.sh/0
{ "file_path": "distil-whisper/training/flax/distillation_scripts/run_librispeech.sh", "repo_id": "distil-whisper", "token_count": 483 }
258
#!/usr/bin/env bash python run_eval.py \ --model_name_or_path "./" \ --dataset_name "distil-whisper/librispeech_asr" \ --dataset_config_name "all" \ --test_split_name "validation.clean+validation.other+test.clean+test.other" \ --text_column_name "text" \ --cache_dir "/home/sanchitgandhi/cache" \ --dataset_cache_dir "/home/sanchitgandhi/cache" \ --output_dir "./" \ --wandb_name "large-32-2-pl-freeze-librispeech-eval" \ --wandb_dir "/home/sanchitgandhi/.cache" \ --wandb_project "distil-whisper-librispeech" \ --per_device_eval_batch_size 128 \ --dtype "bfloat16" \ --do_predict \ --preprocessing_num_workers 16 \ --dataloader_num_workers 8 \ --load_with_scan \ --predict_with_generate \ --report_to "wandb"
distil-whisper/training/flax/finetuning_scripts/run_librispeech_eval.sh/0
{ "file_path": "distil-whisper/training/flax/finetuning_scripts/run_librispeech_eval.sh", "repo_id": "distil-whisper", "token_count": 322 }
259
#!/usr/bin/env bash python run_long_form_transcription.py \ --model_name_or_path "openai/whisper-tiny" \ --dataset_name "distil-whisper/tedlium-long-form" \ --dataset_config_name "all" \ --dataset_split_name "validation" \ --cache_dir "/home/sanchitgandhi/.cache" \ --dataset_cache_dir "/home/sanchitgandhi/.cache" \ --output_dir "./" \ --wandb_dir "/home/sanchitgandhi/.cache" \ --wandb_project "distil-whisper-debug" \ --wandb_name "whisper-tiny-tedlium-long-form" \ --per_device_eval_batch_size 64 \ --max_eval_samples 1 \ --dtype "bfloat16" \ --report_to "wandb" \ --streaming
distil-whisper/training/flax/long_form_transcription_scripts/run_tedlium_long_form_dummy.sh/0
{ "file_path": "distil-whisper/training/flax/long_form_transcription_scripts/run_tedlium_long_form_dummy.sh", "repo_id": "distil-whisper", "token_count": 267 }
260
import evaluate from evaluate.utils import launch_gradio_widget module = evaluate.load("exact_match", module_type="comparison") launch_gradio_widget(module)
evaluate/comparisons/exact_match/app.py/0
{ "file_path": "evaluate/comparisons/exact_match/app.py", "repo_id": "evaluate", "token_count": 46 }
261
# Considerations for model evaluation

Developing an ML model is rarely a one-shot deal: it often involves multiple stages of defining the model architecture and tuning hyper-parameters before converging on a final set. Responsible model evaluation is a key part of this process, and 🤗 Evaluate is here to help!

Here are some things to keep in mind when evaluating your model using the 🤗 Evaluate library:

## Properly splitting your data

Good evaluation generally requires three splits of your dataset:

- **train**: this is used for training your model.
- **validation**: this is used for validating the model hyperparameters.
- **test**: this is used for evaluating your model.

Many of the datasets on the 🤗 Hub are separated into 2 splits (`train` and `validation`), while others come with all three (`train`, `validation` and `test`) -- make sure to use the right split for the right purpose! There are also datasets that only provide a single `train` split. If the dataset you're using doesn't have a predefined train-test split, it is up to you to define which part of the dataset you want to use for training your model and which you want to use for hyperparameter tuning or final evaluation.

<Tip warning={true}>

Training and evaluating on the same split can misrepresent your results! If you overfit on your training data the evaluation results on that split will look great but the model will perform poorly on new data.

</Tip>

Depending on the size of the dataset, you can keep anywhere from 10-30% for evaluation and the rest for training, while aiming to set up the test set to reflect the production data as closely as possible. Check out [this thread](https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090) for a more in-depth discussion of dataset splitting!

## The impact of class imbalance

While many academic datasets, such as the [IMDb dataset](https://huggingface.co/datasets/imdb) of movie reviews, are perfectly balanced, most real-world datasets are not. In machine learning, a *balanced dataset* is a dataset where all labels are represented equally. In the case of the IMDb dataset, this means that there are as many positive as negative reviews in the dataset. In an imbalanced dataset this is not the case: in fraud detection, for example, there are usually many more non-fraud cases than fraud cases in the dataset.

Having an imbalanced dataset can skew the results of your metrics. Imagine a dataset with 99 "non-fraud" cases and 1 "fraud" case. A simple model that always predicts "non-fraud" would yield 99% accuracy, which might sound good at first until you realize that you will never catch a fraud case.

Often, using more than one metric can help get a better idea of your model's performance from different points of view. For instance, metrics like **[recall](https://huggingface.co/metrics/recall)** and **[precision](https://huggingface.co/metrics/precision)** can be used together, and the **[f1 score](https://huggingface.co/metrics/f1)** is actually the harmonic mean of the two.
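As a concrete illustration of this point, here is a minimal sketch (using toy labels rather than a real dataset) that scores the "always predict non-fraud" model described above with several metrics from 🤗 Evaluate:

```python
import evaluate

# Toy imbalanced data: 99 "non-fraud" (0) cases and 1 "fraud" (1) case,
# with a model that always predicts "non-fraud".
references = [0] * 99 + [1]
predictions = [0] * 100

accuracy = evaluate.load("accuracy")
recall = evaluate.load("recall")
f1 = evaluate.load("f1")

# Accuracy looks excellent (0.99), but recall and F1 reveal that
# the single fraud case is never caught (both come out to 0.0).
print(accuracy.compute(predictions=predictions, references=references))
print(recall.compute(predictions=predictions, references=references))
print(f1.compute(predictions=predictions, references=references))
```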
In cases where a dataset is balanced, using [accuracy](https://huggingface.co/metrics/accuracy) can reflect the overall model performance:

![Balanced Labels](https://huggingface.co/datasets/evaluate/media/resolve/main/balanced-classes.png)

In cases where there is an imbalance, using the [F1 score](https://huggingface.co/metrics/f1) can be a better representation of performance, given that it encompasses both precision and recall.

![Imbalanced Labels](https://huggingface.co/datasets/evaluate/media/resolve/main/imbalanced-classes.png)

Using accuracy in an imbalanced setting is less ideal, since it is not sensitive to minority classes and will not faithfully reflect model performance on them.

## Offline vs. online model evaluation

There are multiple ways to evaluate models, and an important distinction is offline versus online evaluation:

**Offline evaluation** is done before deploying a model or using insights generated from a model, using static datasets and metrics.

**Online evaluation** means evaluating how a model is performing after deployment and during its use in production.

These two types of evaluation can use different metrics and measure different aspects of model performance. For example, offline evaluation can compare a model to other models based on their performance on common benchmarks, whereas online evaluation will evaluate aspects such as latency and accuracy of the model based on production data (for example, the number of user queries that it was able to address).

## Trade-offs in model evaluation

When evaluating models in practice, there are often trade-offs that have to be made between different aspects of model performance: for instance, choosing a model that is slightly less accurate but has a faster inference time, over a more accurate model that has a higher memory footprint and requires access to more GPUs.

Here are other aspects of model performance to consider during evaluation:

### Interpretability

When evaluating models, **interpretability** (i.e. the ability to *interpret* results) can be very important, especially when deploying models in production.

For instance, metrics such as [exact match](https://huggingface.co/spaces/evaluate-metric/exact_match) have a set range (between 0 and 1, or 0% and 100%) and are easily understandable to users: for a pair of strings, the exact match score is 1 if the two strings are exactly the same, and 0 otherwise (a short sketch at the end of this section shows what this looks like in code).

Other metrics, such as [BLEU](https://huggingface.co/spaces/evaluate-metric/bleu), are harder to interpret: while they also range between 0 and 1, they can vary greatly depending on which parameters are used to generate the scores, especially when different tokenization and normalization techniques are used (see the [metric card](https://huggingface.co/spaces/evaluate-metric/bleu/blob/main/README.md) for more information about BLEU limitations). This means that it is difficult to interpret a BLEU score without having more information about the procedure used for obtaining it.

Interpretability can be more or less important depending on the evaluation use case, but it is a useful aspect of model evaluation to keep in mind, since communicating and comparing model evaluations is an important part of responsible machine learning.
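Here is a minimal sketch (with toy strings) of how an easily interpretable metric such as exact match reads in practice:

```python
import evaluate

exact_match = evaluate.load("exact_match")

# One of the two predictions matches its reference exactly, the other does not,
# so the score is simply the fraction of exact matches.
results = exact_match.compute(
    predictions=["the cat sat on the mat", "hello world"],
    references=["the cat sat on the mat", "hello there"],
)
print(results)
```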
### Inference speed and memory footprint

While recent years have seen increasingly large ML models achieve high performance on a large variety of tasks and benchmarks, deploying these multi-billion parameter models in practice can be a challenge in itself, and many organizations lack the resources for this. This is why considering the **inference speed** and **memory footprint** of models is important, especially when doing online model evaluation.

Inference speed refers to the time that it takes for a model to make a prediction -- this will vary depending on the hardware used and the way in which models are queried, e.g. in real time via an API or in batch jobs that run once a day.

Memory footprint refers to the size of the model weights and how much hardware memory they occupy. If a model is too large to fit on a single GPU or CPU, then it has to be split over multiple ones, which can be more or less difficult depending on the model architecture and the deployment method.

When doing online model evaluation, there is often a trade-off to be made between inference speed and accuracy or precision, whereas this is less the case for offline evaluation.
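As a rough, back-of-the-envelope sketch of how these two quantities can be estimated (the checkpoint and the numbers are purely illustrative, and real measurements should be taken on the actual production hardware), you can time a few forward passes and sum up the size of the weights:

```python
import time

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Example checkpoint, chosen only for illustration.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("This movie was great!", return_tensors="pt")

# Inference speed: average wall-clock time over a few forward passes.
with torch.no_grad():
    start = time.perf_counter()
    for _ in range(10):
        model(**inputs)
    latency_ms = (time.perf_counter() - start) / 10 * 1000

# Memory footprint: total size of the model weights.
footprint_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e6

print(f"~{latency_ms:.1f} ms per forward pass, ~{footprint_mb:.0f} MB of weights")
```

Keep in mind that such numbers depend heavily on the hardware, batch size and sequence length used, which is one reason why latency is usually measured on the deployment setup itself.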
evaluate/docs/source/considerations.mdx/0
{ "file_path": "evaluate/docs/source/considerations.mdx", "repo_id": "evaluate", "token_count": 2017 }
262
# Types of Evaluations in 🤗 Evaluate

The goal of the 🤗 Evaluate library is to support different types of evaluation, depending on different goals, datasets and models.

Here are the types of evaluations that are currently supported, with a few examples for each:

## Metrics

A metric measures the performance of a model on a given dataset. This is often based on an existing ground truth (i.e. a set of references), but there are also *referenceless metrics* which allow evaluating generated text by leveraging a pretrained model such as [GPT-2](https://huggingface.co/gpt2).

Examples of metrics include:
- [Accuracy](https://huggingface.co/metrics/accuracy): the proportion of correct predictions among the total number of cases processed.
- [Exact Match](https://huggingface.co/metrics/exact_match): the rate at which the input predicted strings exactly match their references.
- [Mean Intersection over Union (IoU)](https://huggingface.co/metrics/mean_iou): the area of overlap between the predicted segmentation of an image and the ground truth divided by the area of union between the predicted segmentation and the ground truth.

Metrics are often used to track model performance on benchmark datasets, and to report progress on tasks such as [machine translation](https://huggingface.co/tasks/translation) and [image classification](https://huggingface.co/tasks/image-classification).

## Comparisons

Comparisons can be useful to compare the performance of two or more models on a single test dataset. For instance, the [McNemar Test](https://github.com/huggingface/evaluate/tree/main/comparisons/mcnemar) is a paired nonparametric statistical hypothesis test that takes the predictions of two models and compares them, aiming to measure whether the models' predictions diverge or not. The p value it outputs, which ranges from `0.0` to `1.0`, indicates the difference between the two models' predictions, with a lower p value indicating a more significant difference.

Comparisons have yet to be systematically used when comparing and reporting model performance; however, they are useful tools for going beyond simply comparing leaderboard scores and for getting more information on the way model predictions differ.

## Measurements

In the 🤗 Evaluate library, measurements are tools for gaining more insights on datasets and model predictions. For instance, in the case of datasets, it can be useful to calculate the average [word length](https://github.com/huggingface/evaluate/tree/main/measurements/word_length) of a dataset's entries, and how it is distributed -- this can help when choosing the maximum input length for a [Tokenizer](https://huggingface.co/docs/transformers/main_classes/tokenizer). In the case of model predictions, it can help to calculate the average [perplexity](https://huggingface.co/metrics/perplexity) of model predictions using different models such as [GPT-2](https://huggingface.co/gpt2) and [BERT](https://huggingface.co/bert-base-uncased), which can indicate the quality of generated text when no reference is available.

All three types of evaluation supported by the 🤗 Evaluate library are meant to be mutually complementary, and help our community carry out more mindful and responsible evaluation.
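To make the three categories concrete, here is a minimal sketch (with toy inputs only) that loads one module of each type with `evaluate.load`:

```python
import evaluate

# A metric: accuracy of toy predictions against references.
accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=[0, 1, 1], references=[0, 1, 0]))

# A comparison: the McNemar test between the predictions of two models.
mcnemar = evaluate.load("mcnemar", module_type="comparison")
print(mcnemar.compute(predictions1=[0, 1, 1], predictions2=[1, 1, 1], references=[0, 1, 0]))

# A measurement: the average word length of a list of input strings.
word_length = evaluate.load("word_length", module_type="measurement")
print(word_length.compute(data=["hello world", "foo bar foobar"]))
```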
We will continue adding more types of metrics, measurements and comparisons in coming months, and are counting on community involvement (via [PRs](https://github.com/huggingface/evaluate/compare) and [issues](https://github.com/huggingface/evaluate/issues/new/choose)) to make the library as extensive and inclusive as possible!
evaluate/docs/source/types_of_evaluations.mdx/0
{ "file_path": "evaluate/docs/source/types_of_evaluations.mdx", "repo_id": "evaluate", "token_count": 867 }
263
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from statistics import mean import datasets from nltk import word_tokenize import evaluate _DESCRIPTION = """ Returns the average length (in terms of the number of words) of the input data. """ _KWARGS_DESCRIPTION = """ Args: `data`: a list of `str` for which the word length is calculated. `tokenizer` (`Callable`) : the approach used for tokenizing `data` (optional). The default tokenizer is `word_tokenize` from NLTK: https://www.nltk.org/api/nltk.tokenize.html This can be replaced by any function that takes a string as input and returns a list of tokens as output. Returns: `average_word_length` (`float`) : the average number of words in the input list of strings. Examples: >>> data = ["hello world"] >>> wordlength = evaluate.load("word_length", module_type="measurement") >>> results = wordlength.compute(data=data) >>> print(results) {'average_word_length': 2} """ # TODO: Add BibTeX citation _CITATION = """\ @InProceedings{huggingface:module, title = {A great new module}, authors={huggingface, Inc.}, year={2020} } """ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) class WordLength(evaluate.Measurement): """This measurement returns the average number of words in the input string(s).""" def _info(self): # TODO: Specifies the evaluate.MeasurementInfo object return evaluate.MeasurementInfo( # This is the description that will appear on the modules page. module_type="measurement", description=_DESCRIPTION, citation=_CITATION, inputs_description=_KWARGS_DESCRIPTION, # This defines the format of each prediction and reference features=datasets.Features( { "data": datasets.Value("string"), } ), ) def _download_and_prepare(self, dl_manager): import nltk nltk.download("punkt") def _compute(self, data, tokenizer=word_tokenize): """Returns the average word length of the input data""" lengths = [len(tokenizer(d)) for d in data] average_length = mean(lengths) return {"average_word_length": average_length}
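A small usage note on the module above (not part of the original file): the `tokenizer` argument of `_compute` is exposed through `compute`, so any callable mapping a string to a list of tokens can replace NLTK's `word_tokenize`. A hedged sketch, assuming extra keyword arguments are forwarded to `_compute` as in other evaluate modules:

```python
import evaluate

word_length = evaluate.load("word_length", module_type="measurement")

# Default NLTK tokenization.
print(word_length.compute(data=["hello world"]))  # {'average_word_length': 2}

# Any str -> list[str] callable can be supplied instead, e.g. plain whitespace splitting,
# which counts "hello," as a single token rather than splitting off the comma.
print(word_length.compute(data=["hello, world"], tokenizer=str.split))  # {'average_word_length': 2}
```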
evaluate/measurements/word_length/word_length.py/0
{ "file_path": "evaluate/measurements/word_length/word_length.py", "repo_id": "evaluate", "token_count": 1008 }
264
import evaluate from evaluate.utils import launch_gradio_widget module = evaluate.load("charcut_mt") launch_gradio_widget(module)
evaluate/metrics/charcut_mt/app.py/0
{ "file_path": "evaluate/metrics/charcut_mt/app.py", "repo_id": "evaluate", "token_count": 39 }
265
# coding=utf-8 # Copyright 2020 The HuggingFace Evaluate Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ MAUVE metric from https://github.com/krishnap25/mauve. """ import datasets import faiss # Here to have a nice missing dependency error message early on import numpy # Here to have a nice missing dependency error message early on import requests # Here to have a nice missing dependency error message early on import sklearn # Here to have a nice missing dependency error message early on import tqdm # Here to have a nice missing dependency error message early on from mauve import compute_mauve # From: mauve-text import evaluate _CITATION = """\ @inproceedings{pillutla-etal:mauve:neurips2021, title={{MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers}}, author={Pillutla, Krishna and Swayamdipta, Swabha and Zellers, Rowan and Thickstun, John and Welleck, Sean and Choi, Yejin and Harchaoui, Zaid}, booktitle = {NeurIPS}, year = {2021} } @article{pillutla-etal:mauve:arxiv2022, title={{MAUVE Scores for Generative Models: Theory and Practice}}, author={Pillutla, Krishna and Liu, Lang and Thickstun, John and Welleck, Sean and Swayamdipta, Swabha and Zellers, Rowan and Oh, Sewoong and Choi, Yejin and Harchaoui, Zaid}, journal={arXiv Preprint}, year={2022} } """ _DESCRIPTION = """\ MAUVE is a measure of the statistical gap between two text distributions, e.g., how far the text written by a model is the distribution of human text, using samples from both distributions. MAUVE is obtained by computing Kullback–Leibler (KL) divergences between the two distributions in a quantized embedding space of a large language model. It can quantify differences in the quality of generated text based on the size of the model, the decoding algorithm, and the length of the generated text. MAUVE was found to correlate the strongest with human evaluations over baseline metrics for open-ended text generation. This metrics is a wrapper around the official implementation of MAUVE: https://github.com/krishnap25/mauve """ _KWARGS_DESCRIPTION = """ Calculates MAUVE scores between two lists of generated text and reference text. Args: predictions: list of generated text to score. Each predictions should be a string with tokens separated by spaces. references: list of reference for each prediction. Each reference should be a string with tokens separated by spaces. Optional Args: num_buckets: the size of the histogram to quantize P and Q. Options: 'auto' (default) or an integer pca_max_data: the number data points to use for PCA dimensionality reduction prior to clustering. If -1, use all the data. Default -1 kmeans_explained_var: amount of variance of the data to keep in dimensionality reduction by PCA. Default 0.9 kmeans_num_redo: number of times to redo k-means clustering (the best objective is kept). Default 5 kmeans_max_iter: maximum number of k-means iterations. Default 500 featurize_model_name: name of the model from which features are obtained. 
Default 'gpt2-large' Use one of ['gpt2', 'gpt2-medium', 'gpt2-large', 'gpt2-xl']. device_id: Device for featurization. Supply a GPU id (e.g. 0 or 3) to use GPU. If no GPU with this id is found, use CPU max_text_length: maximum number of tokens to consider. Default 1024 divergence_curve_discretization_size: Number of points to consider on the divergence curve. Default 25 mauve_scaling_factor: "c" from the paper. Default 5. verbose: If True (default), print running time updates seed: random seed to initialize k-means cluster assignments. Returns: mauve: MAUVE score, a number between 0 and 1. Larger values indicate that P and Q are closer, frontier_integral: Frontier Integral, a number between 0 and 1. Smaller values indicate that P and Q are closer, divergence_curve: a numpy.ndarray of shape (m, 2); plot it with matplotlib to view the divergence curve, p_hist: a discrete distribution, which is a quantized version of the text distribution p_text, q_hist: same as above, but with q_text. Examples: >>> # faiss segfaults in doctest for some reason, so the .compute call is not tested with doctest >>> import evaluate >>> mauve = evaluate.load('mauve') >>> predictions = ["hello there", "general kenobi"] >>> references = ["hello there", "general kenobi"] >>> out = mauve.compute(predictions=predictions, references=references) # doctest: +SKIP >>> print(out.mauve) # doctest: +SKIP 1.0 """ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) class Mauve(evaluate.Metric): def _info(self): return evaluate.MetricInfo( description=_DESCRIPTION, citation=_CITATION, homepage="https://github.com/krishnap25/mauve", inputs_description=_KWARGS_DESCRIPTION, features=datasets.Features( { "predictions": datasets.Value("string", id="sequence"), "references": datasets.Value("string", id="sequence"), } ), codebase_urls=["https://github.com/krishnap25/mauve"], reference_urls=[ "https://arxiv.org/abs/2102.01454", "https://github.com/krishnap25/mauve", ], ) def _compute( self, predictions, references, p_features=None, q_features=None, p_tokens=None, q_tokens=None, num_buckets="auto", pca_max_data=-1, kmeans_explained_var=0.9, kmeans_num_redo=5, kmeans_max_iter=500, featurize_model_name="gpt2-large", device_id=-1, max_text_length=1024, divergence_curve_discretization_size=25, mauve_scaling_factor=5, verbose=True, seed=25, ): out = compute_mauve( p_text=predictions, q_text=references, p_features=p_features, q_features=q_features, p_tokens=p_tokens, q_tokens=q_tokens, num_buckets=num_buckets, pca_max_data=pca_max_data, kmeans_explained_var=kmeans_explained_var, kmeans_num_redo=kmeans_num_redo, kmeans_max_iter=kmeans_max_iter, featurize_model_name=featurize_model_name, device_id=device_id, max_text_length=max_text_length, divergence_curve_discretization_size=divergence_curve_discretization_size, mauve_scaling_factor=mauve_scaling_factor, verbose=verbose, seed=seed, ) return out
evaluate/metrics/mauve/mauve.py/0
{ "file_path": "evaluate/metrics/mauve/mauve.py", "repo_id": "evaluate", "token_count": 2712 }
266
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """NLTK's NIST implementation on both the sentence and corpus level""" from typing import Dict, Optional import datasets import nltk from datasets import Sequence, Value try: nltk.data.find("perluniprops") except LookupError: nltk.download("perluniprops", quiet=True) # NISTTokenizer requirement from nltk.tokenize.nist import NISTTokenizer from nltk.translate.nist_score import corpus_nist, sentence_nist import evaluate _CITATION = """\ @inproceedings{10.5555/1289189.1289273, author = {Doddington, George}, title = {Automatic Evaluation of Machine Translation Quality Using N-Gram Co-Occurrence Statistics}, year = {2002}, publisher = {Morgan Kaufmann Publishers Inc.}, address = {San Francisco, CA, USA}, booktitle = {Proceedings of the Second International Conference on Human Language Technology Research}, pages = {138–145}, numpages = {8}, location = {San Diego, California}, series = {HLT '02} } """ _DESCRIPTION = """\ DARPA commissioned NIST to develop an MT evaluation facility based on the BLEU score. The official script used by NIST to compute BLEU and NIST score is mteval-14.pl. The main differences are: - BLEU uses geometric mean of the ngram precisions, NIST uses arithmetic mean. - NIST has a different brevity penalty - NIST score from mteval-14.pl has a self-contained tokenizer (in the Hugging Face implementation we rely on NLTK's implementation of the NIST-specific tokenizer) """ _KWARGS_DESCRIPTION = """ Computes NIST score of translated segments against one or more references. 
Args: predictions: predictions to score (list of str) references: potentially multiple references for each prediction (list of str or list of list of str) n: highest n-gram order lowercase: whether to lowercase the data (only applicable if 'western_lang' is True) western_lang: whether the current language is a Western language, which will enable some specific tokenization rules with respect to, e.g., punctuation Returns: 'nist_mt': nist_mt score Examples: >>> nist_mt = evaluate.load("nist_mt") >>> hypothesis = "It is a guide to action which ensures that the military always obeys the commands of the party" >>> reference1 = "It is a guide to action that ensures that the military will forever heed Party commands" >>> reference2 = "It is the guiding principle which guarantees the military forces always being under the command of the Party" >>> reference3 = "It is the practical guide for the army always to heed the directions of the party" >>> nist_mt.compute(predictions=[hypothesis], references=[[reference1, reference2, reference3]]) {'nist_mt': 3.3709935957649324} >>> nist_mt.compute(predictions=[hypothesis], references=[reference1]) {'nist_mt': 2.4477124183006533} """ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) class NistMt(evaluate.Metric): """A wrapper around NLTK's NIST implementation.""" def _info(self): return evaluate.MetricInfo( module_type="metric", description=_DESCRIPTION, citation=_CITATION, inputs_description=_KWARGS_DESCRIPTION, features=[ datasets.Features( { "predictions": Value("string", id="prediction"), "references": Sequence(Value("string", id="reference"), id="references"), } ), datasets.Features( { "predictions": Value("string", id="prediction"), "references": Value("string", id="reference"), } ), ], homepage="https://www.nltk.org/api/nltk.translate.nist_score.html", codebase_urls=["https://github.com/nltk/nltk/blob/develop/nltk/translate/nist_score.py"], reference_urls=["https://en.wikipedia.org/wiki/NIST_(metric)"], ) def _compute(self, predictions, references, n: int = 5, lowercase=False, western_lang=True): tokenizer = NISTTokenizer() # Account for single reference cases: references always need to have one more dimension than predictions if isinstance(references[0], str): references = [[ref] for ref in references] predictions = [ tokenizer.tokenize(pred, return_str=False, lowercase=lowercase, western_lang=western_lang) for pred in predictions ] references = [ [ tokenizer.tokenize(ref, return_str=False, lowercase=lowercase, western_lang=western_lang) for ref in ref_sentences ] for ref_sentences in references ] return {"nist_mt": corpus_nist(list_of_references=references, hypotheses=predictions, n=n)}
evaluate/metrics/nist_mt/nist_mt.py/0
{ "file_path": "evaluate/metrics/nist_mt/nist_mt.py", "repo_id": "evaluate", "token_count": 2007 }
267
import evaluate from evaluate.utils import launch_gradio_widget module = evaluate.load("precision") launch_gradio_widget(module)
evaluate/metrics/precision/app.py/0
{ "file_path": "evaluate/metrics/precision/app.py", "repo_id": "evaluate", "token_count": 36 }
268
import evaluate from evaluate.utils import launch_gradio_widget module = evaluate.load("roc_auc") launch_gradio_widget(module)
evaluate/metrics/roc_auc/app.py/0
{ "file_path": "evaluate/metrics/roc_auc/app.py", "repo_id": "evaluate", "token_count": 37 }
269
import sys import evaluate from evaluate.utils import launch_gradio_widget sys.path = [p for p in sys.path if p != "/home/user/app"] module = evaluate.load("seqeval") sys.path = ["/home/user/app"] + sys.path launch_gradio_widget(module)
evaluate/metrics/seqeval/app.py/0
{ "file_path": "evaluate/metrics/seqeval/app.py", "repo_id": "evaluate", "token_count": 81 }
270
# Copyright 2022 The HuggingFace Evaluate Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from numbers import Number from typing import TYPE_CHECKING, Any, Callable, Dict, Optional, Tuple, Union from datasets import Dataset, load_dataset from typing_extensions import Literal from ..module import EvaluationModule from ..utils.file_utils import add_end_docstrings, add_start_docstrings from .base import EVALUATOR_COMPUTE_RETURN_DOCSTRING, EVALUTOR_COMPUTE_START_DOCSTRING, Evaluator from .utils import DatasetColumnPair if TYPE_CHECKING: from transformers import FeatureExtractionMixin, Pipeline, PreTrainedModel, PreTrainedTokenizer, TFPreTrainedModel TASK_DOCUMENTATION = r""" Examples: ```python >>> from evaluate import evaluator >>> from datasets import load_dataset >>> task_evaluator = evaluator("text-classification") >>> data = load_dataset("imdb", split="test[:2]") >>> results = task_evaluator.compute( >>> model_or_pipeline="huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli", >>> data=data, >>> metric="accuracy", >>> label_mapping={"LABEL_0": 0.0, "LABEL_1": 1.0}, >>> strategy="bootstrap", >>> n_resamples=10, >>> random_state=0 >>> ) ``` """ class TextClassificationEvaluator(Evaluator): """ Text classification evaluator. This text classification evaluator can currently be loaded from [`evaluator`] using the default task name `text-classification` or with a `"sentiment-analysis"` alias. Methods in this class assume a data format compatible with the [`~transformers.TextClassificationPipeline`] - a single textual feature as input and a categorical label as output. """ PIPELINE_KWARGS = {"truncation": True} def __init__(self, task="text-classification", default_metric_name=None): super().__init__(task, default_metric_name=default_metric_name) def prepare_data(self, data: Union[str, Dataset], input_column: str, second_input_column: str, label_column: str): if data is None: raise ValueError( "Please specify a valid `data` object - either a `str` with a name or a `Dataset` object." 
) self.check_required_columns(data, {"input_column": input_column, "label_column": label_column}) if second_input_column is not None: self.check_required_columns(data, {"second_input_column": second_input_column}) data = load_dataset(data) if isinstance(data, str) else data return {"references": data[label_column]}, DatasetColumnPair( data, input_column, second_input_column, "text", "text_pair" ) def predictions_processor(self, predictions, label_mapping): predictions = [ label_mapping[element["label"]] if label_mapping is not None else element["label"] for element in predictions ] return {"predictions": predictions} @add_start_docstrings(EVALUTOR_COMPUTE_START_DOCSTRING) @add_end_docstrings(EVALUATOR_COMPUTE_RETURN_DOCSTRING, TASK_DOCUMENTATION) def compute( self, model_or_pipeline: Union[ str, "Pipeline", Callable, "PreTrainedModel", "TFPreTrainedModel" # noqa: F821 ] = None, data: Union[str, Dataset] = None, subset: Optional[str] = None, split: Optional[str] = None, metric: Union[str, EvaluationModule] = None, tokenizer: Optional[Union[str, "PreTrainedTokenizer"]] = None, # noqa: F821 feature_extractor: Optional[Union[str, "FeatureExtractionMixin"]] = None, # noqa: F821 strategy: Literal["simple", "bootstrap"] = "simple", confidence_level: float = 0.95, n_resamples: int = 9999, device: int = None, random_state: Optional[int] = None, input_column: str = "text", second_input_column: Optional[str] = None, label_column: str = "label", label_mapping: Optional[Dict[str, Number]] = None, ) -> Tuple[Dict[str, float], Any]: """ input_column (`str`, *optional*, defaults to `"text"`): The name of the column containing the text feature in the dataset specified by `data`. second_input_column (`str`, *optional*, defaults to `None`): The name of the second column containing the text features. This may be useful for classification tasks as MNLI, where two columns are used. label_column (`str`, defaults to `"label"`): The name of the column containing the labels in the dataset specified by `data`. label_mapping (`Dict[str, Number]`, *optional*, defaults to `None`): We want to map class labels defined by the model in the pipeline to values consistent with those defined in the `label_column` of the `data` dataset. """ result = {} self.check_for_mismatch_in_device_setup(device, model_or_pipeline) # Prepare inputs data = self.load_data(data=data, subset=subset, split=split) metric_inputs, pipe_inputs = self.prepare_data( data=data, input_column=input_column, second_input_column=second_input_column, label_column=label_column ) pipe = self.prepare_pipeline( model_or_pipeline=model_or_pipeline, tokenizer=tokenizer, feature_extractor=feature_extractor, device=device, ) metric = self.prepare_metric(metric) # Compute predictions predictions, perf_results = self.call_pipeline(pipe, pipe_inputs) predictions = self.predictions_processor(predictions, label_mapping) metric_inputs.update(predictions) # Compute metrics from references and predictions metric_results = self.compute_metric( metric=metric, metric_inputs=metric_inputs, strategy=strategy, confidence_level=confidence_level, n_resamples=n_resamples, random_state=random_state, ) result.update(metric_results) result.update(perf_results) return result
evaluate/src/evaluate/evaluator/text_classification.py/0
{ "file_path": "evaluate/src/evaluate/evaluator/text_classification.py", "repo_id": "evaluate", "token_count": 2602 }
271
{ "module_name": "Awesome Module", "module_type": "module", "module_description": "This new module is designed to solve this great ML task and is crafted with a lot of care and love.", "module_slug": "{{ cookiecutter.module_name|lower|replace(' ', '_') }}", "module_class_name": "{{ cookiecutter.module_name|replace(' ', '') }}", "namespace": "", "dataset_name": "" }
evaluate/templates/cookiecutter.json/0
{ "file_path": "evaluate/templates/cookiecutter.json", "repo_id": "evaluate", "token_count": 140 }
272
import json import os import shutil import subprocess import tempfile import unittest import numpy as np import torch import transformers from datasets import load_dataset from transformers import AutoFeatureExtractor, AutoModelForImageClassification, Trainer, TrainingArguments, pipeline from evaluate import evaluator, load from .utils import slow class TestEvaluatorTrainerParity(unittest.TestCase): def setUp(self): self.dir_path = tempfile.mkdtemp("evaluator_trainer_parity_test") transformers_version = transformers.__version__ branch = "" if not transformers_version.endswith(".dev0"): branch = f"--branch v{transformers_version}" subprocess.run( f"git clone --depth 3 --filter=blob:none --sparse {branch} https://github.com/huggingface/transformers", shell=True, cwd=self.dir_path, ) def tearDown(self): shutil.rmtree(self.dir_path, ignore_errors=True) def test_text_classification_parity(self): model_name = "philschmid/tiny-bert-sst2-distilled" subprocess.run( "git sparse-checkout set examples/pytorch/text-classification", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) subprocess.run( f"python examples/pytorch/text-classification/run_glue.py" f" --model_name_or_path {model_name}" f" --task_name sst2" f" --do_eval" f" --max_seq_length 9999999999" # rely on tokenizer.model_max_length for max_length f" --output_dir {os.path.join(self.dir_path, 'textclassification_sst2_transformers')}" f" --max_eval_samples 80", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) with open( f"{os.path.join(self.dir_path, 'textclassification_sst2_transformers', 'eval_results.json')}", "r" ) as f: transformers_results = json.load(f) eval_dataset = load_dataset("glue", "sst2", split="validation[:80]") pipe = pipeline(task="text-classification", model=model_name, tokenizer=model_name) task_evaluator = evaluator(task="text-classification") evaluator_results = task_evaluator.compute( model_or_pipeline=pipe, data=eval_dataset, metric="accuracy", input_column="sentence", label_column="label", label_mapping={"negative": 0, "positive": 1}, strategy="simple", ) self.assertEqual(transformers_results["eval_accuracy"], evaluator_results["accuracy"]) @slow def test_text_classification_parity_two_columns(self): model_name = "prajjwal1/bert-tiny-mnli" max_eval_samples = 150 subprocess.run( "git sparse-checkout set examples/pytorch/text-classification", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) subprocess.run( f"python examples/pytorch/text-classification/run_glue.py" f" --model_name_or_path {model_name}" f" --task_name mnli" f" --do_eval" f" --max_seq_length 256" f" --output_dir {os.path.join(self.dir_path, 'textclassification_mnli_transformers')}" f" --max_eval_samples {max_eval_samples}", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) with open( f"{os.path.join(self.dir_path, 'textclassification_mnli_transformers', 'eval_results.json')}", "r" ) as f: transformers_results = json.load(f) eval_dataset = load_dataset("glue", "mnli", split=f"validation_matched[:{max_eval_samples}]") pipe = pipeline(task="text-classification", model=model_name, tokenizer=model_name, max_length=256) task_evaluator = evaluator(task="text-classification") evaluator_results = task_evaluator.compute( model_or_pipeline=pipe, data=eval_dataset, metric="accuracy", input_column="premise", second_input_column="hypothesis", label_column="label", label_mapping={"LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2}, ) self.assertEqual(transformers_results["eval_accuracy"], evaluator_results["accuracy"]) def 
test_image_classification_parity(self): # we can not compare to the Pytorch transformers example, that uses custom preprocessing on the images model_name = "douwekiela/resnet-18-finetuned-dogfood" dataset_name = "beans" max_eval_samples = 120 raw_dataset = load_dataset(dataset_name, split="validation") eval_dataset = raw_dataset.select(range(max_eval_samples)) feature_extractor = AutoFeatureExtractor.from_pretrained(model_name) model = AutoModelForImageClassification.from_pretrained(model_name) def collate_fn(examples): pixel_values = torch.stack( [torch.tensor(feature_extractor(example["image"])["pixel_values"][0]) for example in examples] ) labels = torch.tensor([example["labels"] for example in examples]) return {"pixel_values": pixel_values, "labels": labels} metric = load("accuracy") trainer = Trainer( model=model, args=TrainingArguments( output_dir=os.path.join(self.dir_path, "imageclassification_beans_transformers"), remove_unused_columns=False, ), train_dataset=None, eval_dataset=eval_dataset, compute_metrics=lambda p: metric.compute( predictions=np.argmax(p.predictions, axis=1), references=p.label_ids ), tokenizer=None, data_collator=collate_fn, ) metrics = trainer.evaluate() trainer.save_metrics("eval", metrics) with open( f"{os.path.join(self.dir_path, 'imageclassification_beans_transformers', 'eval_results.json')}", "r" ) as f: transformers_results = json.load(f) pipe = pipeline(task="image-classification", model=model_name, feature_extractor=model_name) task_evaluator = evaluator(task="image-classification") evaluator_results = task_evaluator.compute( model_or_pipeline=pipe, data=eval_dataset, metric="accuracy", input_column="image", label_column="labels", label_mapping=model.config.label2id, strategy="simple", ) self.assertEqual(transformers_results["eval_accuracy"], evaluator_results["accuracy"]) def test_question_answering_parity(self): model_name_v1 = "anas-awadalla/bert-tiny-finetuned-squad" model_name_v2 = "mrm8488/bert-tiny-finetuned-squadv2" subprocess.run( "git sparse-checkout set examples/pytorch/question-answering", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) # test squad_v1-like dataset subprocess.run( f"python examples/pytorch/question-answering/run_qa.py" f" --model_name_or_path {model_name_v1}" f" --dataset_name squad" f" --do_eval" f" --output_dir {os.path.join(self.dir_path, 'questionanswering_squad_transformers')}" f" --max_eval_samples 100" f" --max_seq_length 384", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) with open( f"{os.path.join(self.dir_path, 'questionanswering_squad_transformers', 'eval_results.json')}", "r" ) as f: transformers_results = json.load(f) eval_dataset = load_dataset("squad", split="validation[:100]") pipe = pipeline( task="question-answering", model=model_name_v1, tokenizer=model_name_v1, max_answer_len=30, padding="max_length", ) task_evaluator = evaluator(task="question-answering") evaluator_results = task_evaluator.compute( model_or_pipeline=pipe, data=eval_dataset, metric="squad", strategy="simple", ) self.assertEqual(transformers_results["eval_f1"], evaluator_results["f1"]) self.assertEqual(transformers_results["eval_exact_match"], evaluator_results["exact_match"]) # test squad_v2-like dataset subprocess.run( f"python examples/pytorch/question-answering/run_qa.py" f" --model_name_or_path {model_name_v2}" f" --dataset_name squad_v2" f" --version_2_with_negative" f" --do_eval" f" --output_dir {os.path.join(self.dir_path, 'questionanswering_squadv2_transformers')}" f" --max_eval_samples 100" f" 
--max_seq_length 384", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) with open( f"{os.path.join(self.dir_path, 'questionanswering_squadv2_transformers', 'eval_results.json')}", "r" ) as f: transformers_results = json.load(f) eval_dataset = load_dataset("squad_v2", split="validation[:100]") pipe = pipeline( task="question-answering", model=model_name_v2, tokenizer=model_name_v2, max_answer_len=30, ) task_evaluator = evaluator(task="question-answering") evaluator_results = task_evaluator.compute( model_or_pipeline=pipe, data=eval_dataset, metric="squad_v2", strategy="simple", squad_v2_format=True, ) self.assertEqual(transformers_results["eval_f1"], evaluator_results["f1"]) self.assertEqual(transformers_results["eval_HasAns_f1"], evaluator_results["HasAns_f1"]) self.assertEqual(transformers_results["eval_NoAns_f1"], evaluator_results["NoAns_f1"]) def test_token_classification_parity(self): model_name = "hf-internal-testing/tiny-bert-for-token-classification" n_samples = 500 subprocess.run( "git sparse-checkout set examples/pytorch/token-classification", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) subprocess.run( f"python examples/pytorch/token-classification/run_ner.py" f" --model_name_or_path {model_name}" f" --dataset_name conll2003" f" --do_eval" f" --output_dir {os.path.join(self.dir_path, 'tokenclassification_conll2003_transformers')}" f" --max_eval_samples {n_samples}", shell=True, cwd=os.path.join(self.dir_path, "transformers"), ) with open( os.path.join(self.dir_path, "tokenclassification_conll2003_transformers", "eval_results.json"), "r" ) as f: transformers_results = json.load(f) eval_dataset = load_dataset("conll2003", split=f"validation[:{n_samples}]") pipe = pipeline(task="token-classification", model=model_name) e = evaluator(task="token-classification") evaluator_results = e.compute( model_or_pipeline=pipe, data=eval_dataset, metric="seqeval", input_column="tokens", label_column="ner_tags", strategy="simple", ) self.assertEqual(transformers_results["eval_accuracy"], evaluator_results["overall_accuracy"]) self.assertEqual(transformers_results["eval_f1"], evaluator_results["overall_f1"])
evaluate/tests/test_trainer_evaluator_parity.py/0
{ "file_path": "evaluate/tests/test_trainer_evaluator_parity.py", "repo_id": "evaluate", "token_count": 5669 }
273
from knockknock.chime_sender import chime_sender from knockknock.discord_sender import discord_sender from knockknock.email_sender import email_sender from knockknock.slack_sender import slack_sender from knockknock.sms_sender import sms_sender from knockknock.telegram_sender import telegram_sender from knockknock.teams_sender import teams_sender from knockknock.desktop_sender import desktop_sender from knockknock.matrix_sender import matrix_sender from knockknock.dingtalk_sender import dingtalk_sender from knockknock.wechat_sender import wechat_sender from knockknock.rocketchat_sender import rocketchat_sender
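Each import above exposes a decorator factory that wraps a (typically long-running) function and sends a notification when it starts, finishes, or crashes. A hedged usage sketch follows; the argument names `token` and `chat_id` are recalled from the project README and should be treated as assumptions, and the credentials are placeholders:

```python
from knockknock import telegram_sender

# Placeholder credentials; argument names assumed from the README.
@telegram_sender(token="<telegram-bot-token>", chat_id=123456789)
def train():
    # ... run training here ...
    return {"final_loss": 0.1}  # the returned value is reported in the notification

train()
```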
knockknock/knockknock/__init__.py/0
{ "file_path": "knockknock/knockknock/__init__.py", "repo_id": "knockknock", "token_count": 198 }
274
from setuptools import setup, find_packages from io import open setup( name='knockknock', version='0.1.8.1', description='Be notified when your training is complete with only two additional lines of code', long_description=open('README.md', 'r', encoding='utf-8').read(), long_description_content_type='text/markdown', url='http://github.com/huggingface/knockknock', author='Victor SANH', author_email='victorsanh@gmail.com', license='MIT', packages=find_packages(), entry_points={ 'console_scripts': [ 'knockknock = knockknock.__main__:main' ] }, zip_safe=False, python_requires='>=3.6', install_requires=[ 'yagmail>=0.11.214', 'keyring', 'matrix_client', 'python-telegram-bot', 'requests', 'twilio', ], classifiers=[ 'Intended Audience :: Science/Research', 'Development Status :: 3 - Alpha', 'License :: OSI Approved :: MIT License', 'Programming Language :: Python :: 3.6', 'Topic :: Scientific/Engineering :: Artificial Intelligence', ] )
knockknock/setup.py/0
{ "file_path": "knockknock/setup.py", "repo_id": "knockknock", "token_count": 486 }
275
""" This file contains lists of available environments, dataset and policies to reflect the current state of LeRobot library. We do not want to import all the dependencies, but instead we keep it lightweight to ensure fast access to these variables. Example: ```python import lerobot print(lerobot.available_envs) print(lerobot.available_tasks_per_env) print(lerobot.available_datasets) print(lerobot.available_datasets_per_env) print(lerobot.available_real_world_datasets) print(lerobot.available_policies) print(lerobot.available_policies_per_env) ``` When implementing a new dataset loadable with LeRobotDataset follow these steps: - Update `available_datasets_per_env` in `lerobot/__init__.py` When implementing a new environment (e.g. `gym_aloha`), follow these steps: - Update `available_tasks_per_env` and `available_datasets_per_env` in `lerobot/__init__.py` When implementing a new policy class (e.g. `DiffusionPolicy`) follow these steps: - Update `available_policies` and `available_policies_per_env`, in `lerobot/__init__.py` - Set the required `name` class attribute. - Update variables in `tests/test_available.py` by importing your new Policy class """ import itertools from lerobot.__version__ import __version__ # noqa: F401 available_tasks_per_env = { "aloha": [ "AlohaInsertion-v0", "AlohaTransferCube-v0", ], "pusht": ["PushT-v0"], "xarm": ["XarmLift-v0"], } available_envs = list(available_tasks_per_env.keys()) available_datasets_per_env = { "aloha": [ "lerobot/aloha_sim_insertion_human", "lerobot/aloha_sim_insertion_scripted", "lerobot/aloha_sim_transfer_cube_human", "lerobot/aloha_sim_transfer_cube_scripted", ], "pusht": ["lerobot/pusht"], "xarm": [ "lerobot/xarm_lift_medium", "lerobot/xarm_lift_medium_replay", "lerobot/xarm_push_medium", "lerobot/xarm_push_medium_replay", ], } available_real_world_datasets = [ "lerobot/aloha_mobile_cabinet", "lerobot/aloha_mobile_chair", "lerobot/aloha_mobile_elevator", "lerobot/aloha_mobile_shrimp", "lerobot/aloha_mobile_wash_pan", "lerobot/aloha_mobile_wipe_wine", "lerobot/aloha_static_battery", "lerobot/aloha_static_candy", "lerobot/aloha_static_coffee", "lerobot/aloha_static_coffee_new", "lerobot/aloha_static_cups_open", "lerobot/aloha_static_fork_pick_up", "lerobot/aloha_static_pingpong_test", "lerobot/aloha_static_pro_pencil", "lerobot/aloha_static_screw_driver", "lerobot/aloha_static_tape", "lerobot/aloha_static_thread_velcro", "lerobot/aloha_static_towel", "lerobot/aloha_static_vinh_cup", "lerobot/aloha_static_vinh_cup_left", "lerobot/aloha_static_ziploc_slide", "lerobot/umi_cup_in_the_wild", ] available_datasets = list( itertools.chain(*available_datasets_per_env.values(), available_real_world_datasets) ) available_policies = [ "act", "diffusion", "tdmpc", ] available_policies_per_env = { "aloha": ["act"], "pusht": ["diffusion"], "xarm": ["tdmpc"], } env_task_pairs = [(env, task) for env, tasks in available_tasks_per_env.items() for task in tasks] env_dataset_pairs = [ (env, dataset) for env, datasets in available_datasets_per_env.items() for dataset in datasets ] env_dataset_policy_triplets = [ (env, dataset, policy) for env, datasets in available_datasets_per_env.items() for dataset in datasets for policy in available_policies_per_env[env] ]
lerobot/lerobot/__init__.py/0
{ "file_path": "lerobot/lerobot/__init__.py", "repo_id": "lerobot", "token_count": 1517 }
276
""" This file contains all obsolete download scripts. They are centralized here to not have to load useless dependencies when using datasets. """ import io import logging import shutil from pathlib import Path import tqdm ALOHA_RAW_URLS_DIR = "lerobot/common/datasets/push_dataset_to_hub/_aloha_raw_urls" def download_raw(raw_dir, dataset_id): if "pusht" in dataset_id: download_pusht(raw_dir) elif "xarm" in dataset_id: download_xarm(raw_dir) elif "aloha" in dataset_id: download_aloha(raw_dir, dataset_id) elif "umi" in dataset_id: download_umi(raw_dir) else: raise ValueError(dataset_id) def download_and_extract_zip(url: str, destination_folder: Path) -> bool: import zipfile import requests print(f"downloading from {url}") response = requests.get(url, stream=True) if response.status_code == 200: total_size = int(response.headers.get("content-length", 0)) progress_bar = tqdm.tqdm(total=total_size, unit="B", unit_scale=True) zip_file = io.BytesIO() for chunk in response.iter_content(chunk_size=1024): if chunk: zip_file.write(chunk) progress_bar.update(len(chunk)) progress_bar.close() zip_file.seek(0) with zipfile.ZipFile(zip_file, "r") as zip_ref: zip_ref.extractall(destination_folder) def download_pusht(raw_dir: str): pusht_url = "https://diffusion-policy.cs.columbia.edu/data/training/pusht.zip" raw_dir = Path(raw_dir) raw_dir.mkdir(parents=True, exist_ok=True) download_and_extract_zip(pusht_url, raw_dir) # file is created inside a useful "pusht" directory, so we move it out and delete the dir zarr_path = raw_dir / "pusht_cchi_v7_replay.zarr" shutil.move(raw_dir / "pusht" / "pusht_cchi_v7_replay.zarr", zarr_path) shutil.rmtree(raw_dir / "pusht") def download_xarm(raw_dir: Path): """Download all xarm datasets at once""" import zipfile import gdown raw_dir = Path(raw_dir) raw_dir.mkdir(parents=True, exist_ok=True) # from https://github.com/fyhMer/fowm/blob/main/scripts/download_datasets.py url = "https://drive.google.com/uc?id=1nhxpykGtPDhmQKm-_B8zBSywVRdgeVya" zip_path = raw_dir / "data.zip" gdown.download(url, str(zip_path), quiet=False) print("Extracting...") with zipfile.ZipFile(str(zip_path), "r") as zip_f: for pkl_path in zip_f.namelist(): if pkl_path.startswith("data/xarm") and pkl_path.endswith(".pkl"): zip_f.extract(member=pkl_path) # move to corresponding raw directory extract_dir = pkl_path.replace("/buffer.pkl", "") raw_pkl_path = raw_dir / "buffer.pkl" shutil.move(pkl_path, raw_pkl_path) shutil.rmtree(extract_dir) zip_path.unlink() def download_aloha(raw_dir: Path, dataset_id: str): import gdown subset_id = dataset_id.replace("aloha_", "") urls_path = Path(ALOHA_RAW_URLS_DIR) / f"{subset_id}.txt" assert urls_path.exists(), f"{subset_id}.txt not found in '{ALOHA_RAW_URLS_DIR}' directory." with open(urls_path) as f: # strip lines and ignore empty lines urls = [url.strip() for url in f if url.strip()] # sanity check for url in urls: assert ( "drive.google.com/drive/folders" in url or "drive.google.com/file" in url ), f"Wrong url provided '{url}' in file '{urls_path}'." 
raw_dir = Path(raw_dir) raw_dir.mkdir(parents=True, exist_ok=True) logging.info(f"Start downloading from google drive for {dataset_id}") for url in urls: if "drive.google.com/drive/folders" in url: # when a folder url is given, download up to 50 files from the folder gdown.download_folder(url, output=str(raw_dir), remaining_ok=True) elif "drive.google.com/file" in url: # because of the 50 files limit per folder, we download the remaining files (file by file) gdown.download(url, output=str(raw_dir), fuzzy=True) logging.info(f"End downloading from google drive for {dataset_id}") def download_umi(raw_dir: Path): url_cup_in_the_wild = "https://real.stanford.edu/umi/data/zarr_datasets/cup_in_the_wild.zarr.zip" zarr_path = raw_dir / "cup_in_the_wild.zarr" raw_dir = Path(raw_dir) raw_dir.mkdir(parents=True, exist_ok=True) download_and_extract_zip(url_cup_in_the_wild, zarr_path) if __name__ == "__main__": data_dir = Path("data") dataset_ids = [ "pusht", "xarm_lift_medium", "xarm_lift_medium_replay", "xarm_push_medium", "xarm_push_medium_replay", "aloha_mobile_cabinet", "aloha_mobile_chair", "aloha_mobile_elevator", "aloha_mobile_shrimp", "aloha_mobile_wash_pan", "aloha_mobile_wipe_wine", "aloha_sim_insertion_human", "aloha_sim_insertion_scripted", "aloha_sim_transfer_cube_human", "aloha_sim_transfer_cube_scripted", "aloha_static_battery", "aloha_static_candy", "aloha_static_coffee", "aloha_static_coffee_new", "aloha_static_cups_open", "aloha_static_fork_pick_up", "aloha_static_pingpong_test", "aloha_static_pro_pencil", "aloha_static_screw_driver", "aloha_static_tape", "aloha_static_thread_velcro", "aloha_static_towel", "aloha_static_vinh_cup", "aloha_static_vinh_cup_left", "aloha_static_ziploc_slide", "umi_cup_in_the_wild", ] for dataset_id in dataset_ids: raw_dir = data_dir / f"{dataset_id}_raw" download_raw(raw_dir, dataset_id)
lerobot/lerobot/common/datasets/push_dataset_to_hub/_download_raw.py/0
{ "file_path": "lerobot/lerobot/common/datasets/push_dataset_to_hub/_download_raw.py", "repo_id": "lerobot", "token_count": 2654 }
277
"""Diffusion Policy as per "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion" TODO(alexander-soare): - Remove reliance on Robomimic for SpatialSoftmax. - Remove reliance on diffusers for DDPMScheduler and LR scheduler. """ import math from collections import deque from typing import Callable import einops import torch import torch.nn.functional as F # noqa: N812 import torchvision from diffusers.schedulers.scheduling_ddim import DDIMScheduler from diffusers.schedulers.scheduling_ddpm import DDPMScheduler from huggingface_hub import PyTorchModelHubMixin from robomimic.models.base_nets import SpatialSoftmax from torch import Tensor, nn from lerobot.common.policies.diffusion.configuration_diffusion import DiffusionConfig from lerobot.common.policies.normalize import Normalize, Unnormalize from lerobot.common.policies.utils import ( get_device_from_parameters, get_dtype_from_parameters, populate_queues, ) class DiffusionPolicy(nn.Module, PyTorchModelHubMixin): """ Diffusion Policy as per "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion" (paper: https://arxiv.org/abs/2303.04137, code: https://github.com/real-stanford/diffusion_policy). """ name = "diffusion" def __init__( self, config: DiffusionConfig | None = None, dataset_stats: dict[str, dict[str, Tensor]] | None = None, ): """ Args: config: Policy configuration class instance or None, in which case the default instantiation of the configuration class is used. dataset_stats: Dataset statistics to be used for normalization. If not passed here, it is expected that they will be passed with a call to `load_state_dict` before the policy is used. """ super().__init__() if config is None: config = DiffusionConfig() self.config = config self.normalize_inputs = Normalize( config.input_shapes, config.input_normalization_modes, dataset_stats ) self.normalize_targets = Normalize( config.output_shapes, config.output_normalization_modes, dataset_stats ) self.unnormalize_outputs = Unnormalize( config.output_shapes, config.output_normalization_modes, dataset_stats ) # queues are populated during rollout of the policy, they contain the n latest observations and actions self._queues = None self.diffusion = DiffusionModel(config) def reset(self): """ Clear observation and action queues. Should be called on `env.reset()` """ self._queues = { "observation.image": deque(maxlen=self.config.n_obs_steps), "observation.state": deque(maxlen=self.config.n_obs_steps), "action": deque(maxlen=self.config.n_action_steps), } @torch.no_grad def select_action(self, batch: dict[str, Tensor]) -> Tensor: """Select a single action given environment observations. This method handles caching a history of observations and an action trajectory generated by the underlying diffusion model. Here's how it works: - `n_obs_steps` steps worth of observations are cached (for the first steps, the observation is copied `n_obs_steps` times to fill the cache). - The diffusion model generates `horizon` steps worth of actions. - `n_action_steps` worth of actions are actually kept for execution, starting from the current step. Schematically this looks like: ---------------------------------------------------------------------------------------------- (legend: o = n_obs_steps, h = horizon, a = n_action_steps) |timestep | n-o+1 | n-o+2 | ..... | n | ..... | n+a-1 | n+a | ..... 
|n-o+1+h| |observation is used | YES | YES | YES | NO | NO | NO | NO | NO | NO | |action is generated | YES | YES | YES | YES | YES | YES | YES | YES | YES | |action is used | NO | NO | NO | YES | YES | YES | NO | NO | NO | ---------------------------------------------------------------------------------------------- Note that this means we require: `n_action_steps < horizon - n_obs_steps + 1`. Also, note that "horizon" may not the best name to describe what the variable actually means, because this period is actually measured from the first observation which (if `n_obs_steps` > 1) happened in the past. """ assert "observation.image" in batch assert "observation.state" in batch batch = self.normalize_inputs(batch) self._queues = populate_queues(self._queues, batch) if len(self._queues["action"]) == 0: # stack n latest observations from the queue batch = {key: torch.stack(list(self._queues[key]), dim=1) for key in batch} actions = self.diffusion.generate_actions(batch) # TODO(rcadene): make above methods return output dictionary? actions = self.unnormalize_outputs({"action": actions})["action"] self._queues["action"].extend(actions.transpose(0, 1)) action = self._queues["action"].popleft() return action def forward(self, batch: dict[str, Tensor]) -> dict[str, Tensor]: """Run the batch through the model and compute the loss for training or validation.""" batch = self.normalize_inputs(batch) batch = self.normalize_targets(batch) loss = self.diffusion.compute_loss(batch) return {"loss": loss} def _make_noise_scheduler(name: str, **kwargs: dict) -> DDPMScheduler | DDIMScheduler: """ Factory for noise scheduler instances of the requested type. All kwargs are passed to the scheduler. """ if name == "DDPM": return DDPMScheduler(**kwargs) elif name == "DDIM": return DDIMScheduler(**kwargs) else: raise ValueError(f"Unsupported noise scheduler type {name}") class DiffusionModel(nn.Module): def __init__(self, config: DiffusionConfig): super().__init__() self.config = config self.rgb_encoder = DiffusionRgbEncoder(config) self.unet = DiffusionConditionalUnet1d( config, global_cond_dim=(config.output_shapes["action"][0] + self.rgb_encoder.feature_dim) * config.n_obs_steps, ) self.noise_scheduler = _make_noise_scheduler( config.noise_scheduler_type, num_train_timesteps=config.num_train_timesteps, beta_start=config.beta_start, beta_end=config.beta_end, beta_schedule=config.beta_schedule, clip_sample=config.clip_sample, clip_sample_range=config.clip_sample_range, prediction_type=config.prediction_type, ) if config.num_inference_steps is None: self.num_inference_steps = self.noise_scheduler.config.num_train_timesteps else: self.num_inference_steps = config.num_inference_steps # ========= inference ============ def conditional_sample( self, batch_size: int, global_cond: Tensor | None = None, generator: torch.Generator | None = None ) -> Tensor: device = get_device_from_parameters(self) dtype = get_dtype_from_parameters(self) # Sample prior. sample = torch.randn( size=(batch_size, self.config.horizon, self.config.output_shapes["action"][0]), dtype=dtype, device=device, generator=generator, ) self.noise_scheduler.set_timesteps(self.num_inference_steps) for t in self.noise_scheduler.timesteps: # Predict model output. 
model_output = self.unet( sample, torch.full(sample.shape[:1], t, dtype=torch.long, device=sample.device), global_cond=global_cond, ) # Compute previous image: x_t -> x_t-1 sample = self.noise_scheduler.step(model_output, t, sample, generator=generator).prev_sample return sample def generate_actions(self, batch: dict[str, Tensor]) -> Tensor: """ This function expects `batch` to have (at least): { "observation.state": (B, n_obs_steps, state_dim) "observation.image": (B, n_obs_steps, C, H, W) } """ assert set(batch).issuperset({"observation.state", "observation.image"}) batch_size, n_obs_steps = batch["observation.state"].shape[:2] assert n_obs_steps == self.config.n_obs_steps # Extract image feature (first combine batch and sequence dims). img_features = self.rgb_encoder(einops.rearrange(batch["observation.image"], "b n ... -> (b n) ...")) # Separate batch and sequence dims. img_features = einops.rearrange(img_features, "(b n) ... -> b n ...", b=batch_size) # Concatenate state and image features then flatten to (B, global_cond_dim). global_cond = torch.cat([batch["observation.state"], img_features], dim=-1).flatten(start_dim=1) # run sampling sample = self.conditional_sample(batch_size, global_cond=global_cond) # `horizon` steps worth of actions (from the first observation). actions = sample[..., : self.config.output_shapes["action"][0]] # Extract `n_action_steps` steps worth of actions (from the current observation). start = n_obs_steps - 1 end = start + self.config.n_action_steps actions = actions[:, start:end] return actions def compute_loss(self, batch: dict[str, Tensor]) -> Tensor: """ This function expects `batch` to have (at least): { "observation.state": (B, n_obs_steps, state_dim) "observation.image": (B, n_obs_steps, C, H, W) "action": (B, horizon, action_dim) "action_is_pad": (B, horizon) } """ # Input validation. assert set(batch).issuperset({"observation.state", "observation.image", "action", "action_is_pad"}) batch_size, n_obs_steps = batch["observation.state"].shape[:2] horizon = batch["action"].shape[1] assert horizon == self.config.horizon assert n_obs_steps == self.config.n_obs_steps # Extract image feature (first combine batch and sequence dims). img_features = self.rgb_encoder(einops.rearrange(batch["observation.image"], "b n ... -> (b n) ...")) # Separate batch and sequence dims. img_features = einops.rearrange(img_features, "(b n) ... -> b n ...", b=batch_size) # Concatenate state and image features then flatten to (B, global_cond_dim). global_cond = torch.cat([batch["observation.state"], img_features], dim=-1).flatten(start_dim=1) trajectory = batch["action"] # Forward diffusion. # Sample noise to add to the trajectory. eps = torch.randn(trajectory.shape, device=trajectory.device) # Sample a random noising timestep for each item in the batch. timesteps = torch.randint( low=0, high=self.noise_scheduler.config.num_train_timesteps, size=(trajectory.shape[0],), device=trajectory.device, ).long() # Add noise to the clean trajectories according to the noise magnitude at each timestep. noisy_trajectory = self.noise_scheduler.add_noise(trajectory, eps, timesteps) # Run the denoising network (that might denoise the trajectory, or attempt to predict the noise). pred = self.unet(noisy_trajectory, timesteps, global_cond=global_cond) # Compute the loss. # The target is either the original trajectory, or the noise. 
        if self.config.prediction_type == "epsilon":
            target = eps
        elif self.config.prediction_type == "sample":
            target = batch["action"]
        else:
            raise ValueError(f"Unsupported prediction type {self.config.prediction_type}")

        loss = F.mse_loss(pred, target, reduction="none")

        # Mask loss wherever the action is padded with copies (edges of the dataset trajectory).
        if self.config.do_mask_loss_for_padding and "action_is_pad" in batch:
            in_episode_bound = ~batch["action_is_pad"]
            loss = loss * in_episode_bound.unsqueeze(-1)

        return loss.mean()


class DiffusionRgbEncoder(nn.Module):
    """Encode an RGB image into a 1D feature vector.

    Includes the ability to normalize and crop the image first.
    """

    def __init__(self, config: DiffusionConfig):
        super().__init__()
        # Set up optional preprocessing.
        if config.crop_shape is not None:
            self.do_crop = True
            # Always use center crop for eval
            self.center_crop = torchvision.transforms.CenterCrop(config.crop_shape)
            if config.crop_is_random:
                self.maybe_random_crop = torchvision.transforms.RandomCrop(config.crop_shape)
            else:
                self.maybe_random_crop = self.center_crop
        else:
            self.do_crop = False

        # Set up backbone.
        backbone_model = getattr(torchvision.models, config.vision_backbone)(
            weights=config.pretrained_backbone_weights
        )
        # Note: This assumes that the layer4 feature map is children()[-3]
        # TODO(alexander-soare): Use a safer alternative.
        self.backbone = nn.Sequential(*(list(backbone_model.children())[:-2]))
        if config.use_group_norm:
            if config.pretrained_backbone_weights:
                raise ValueError(
                    "You can't replace BatchNorm in a pretrained model without ruining the weights!"
                )
            self.backbone = _replace_submodules(
                root_module=self.backbone,
                predicate=lambda x: isinstance(x, nn.BatchNorm2d),
                func=lambda x: nn.GroupNorm(num_groups=x.num_features // 16, num_channels=x.num_features),
            )

        # Set up pooling and final layers.
        # Use a dry run to get the feature map shape.
        # The dummy input should take the number of image channels from `config.input_shapes` and it should
        # use the height and width from `config.crop_shape`.
        dummy_input = torch.zeros(size=(1, config.input_shapes["observation.image"][0], *config.crop_shape))
        with torch.inference_mode():
            dummy_feature_map = self.backbone(dummy_input)
        feature_map_shape = tuple(dummy_feature_map.shape[1:])
        self.pool = SpatialSoftmax(feature_map_shape, num_kp=config.spatial_softmax_num_keypoints)
        self.feature_dim = config.spatial_softmax_num_keypoints * 2
        self.out = nn.Linear(config.spatial_softmax_num_keypoints * 2, self.feature_dim)
        self.relu = nn.ReLU()

    def forward(self, x: Tensor) -> Tensor:
        """
        Args:
            x: (B, C, H, W) image tensor with pixel values in [0, 1].
        Returns:
            (B, D) image feature.
        """
        # Preprocess: maybe crop (if it was set up in the __init__).
        if self.do_crop:
            if self.training:  # noqa: SIM108
                x = self.maybe_random_crop(x)
            else:
                # Always use center crop for eval.
                x = self.center_crop(x)
        # Extract backbone feature.
        x = torch.flatten(self.pool(self.backbone(x)), start_dim=1)
        # Final linear layer with non-linearity.
        x = self.relu(self.out(x))
        return x


def _replace_submodules(
    root_module: nn.Module, predicate: Callable[[nn.Module], bool], func: Callable[[nn.Module], nn.Module]
) -> nn.Module:
    """
    Args:
        root_module: The module for which the submodules need to be replaced
        predicate: Takes a module as an argument and must return True if that module is to be replaced.
        func: Takes a module as an argument and returns a new module to replace it with.
    Returns:
        The root module with its submodules replaced.
    """
    if predicate(root_module):
        return func(root_module)

    replace_list = [k.split(".") for k, m in root_module.named_modules(remove_duplicate=True) if predicate(m)]
    for *parents, k in replace_list:
        parent_module = root_module
        if len(parents) > 0:
            parent_module = root_module.get_submodule(".".join(parents))
        if isinstance(parent_module, nn.Sequential):
            src_module = parent_module[int(k)]
        else:
            src_module = getattr(parent_module, k)
        tgt_module = func(src_module)
        if isinstance(parent_module, nn.Sequential):
            parent_module[int(k)] = tgt_module
        else:
            setattr(parent_module, k, tgt_module)
    # verify that all BN are replaced
    assert not any(predicate(m) for _, m in root_module.named_modules(remove_duplicate=True))
    return root_module


class DiffusionSinusoidalPosEmb(nn.Module):
    """1D sinusoidal positional embeddings as in Attention is All You Need."""

    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim

    def forward(self, x: Tensor) -> Tensor:
        device = x.device
        half_dim = self.dim // 2
        emb = math.log(10000) / (half_dim - 1)
        emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
        emb = x.unsqueeze(-1) * emb.unsqueeze(0)
        emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
        return emb


class DiffusionConv1dBlock(nn.Module):
    """Conv1d --> GroupNorm --> Mish"""

    def __init__(self, inp_channels, out_channels, kernel_size, n_groups=8):
        super().__init__()

        self.block = nn.Sequential(
            nn.Conv1d(inp_channels, out_channels, kernel_size, padding=kernel_size // 2),
            nn.GroupNorm(n_groups, out_channels),
            nn.Mish(),
        )

    def forward(self, x):
        return self.block(x)


class DiffusionConditionalUnet1d(nn.Module):
    """A 1D convolutional UNet with FiLM modulation for conditioning.

    Note: this removes local conditioning as compared to the original diffusion policy code.
    """

    def __init__(self, config: DiffusionConfig, global_cond_dim: int):
        super().__init__()

        self.config = config

        # Encoder for the diffusion timestep.
        self.diffusion_step_encoder = nn.Sequential(
            DiffusionSinusoidalPosEmb(config.diffusion_step_embed_dim),
            nn.Linear(config.diffusion_step_embed_dim, config.diffusion_step_embed_dim * 4),
            nn.Mish(),
            nn.Linear(config.diffusion_step_embed_dim * 4, config.diffusion_step_embed_dim),
        )

        # The FiLM conditioning dimension.
        cond_dim = config.diffusion_step_embed_dim + global_cond_dim

        # In channels / out channels for each downsampling block in the Unet's encoder. For the decoder, we
        # just reverse these.
        in_out = [(config.output_shapes["action"][0], config.down_dims[0])] + list(
            zip(config.down_dims[:-1], config.down_dims[1:], strict=True)
        )

        # Unet encoder.
        common_res_block_kwargs = {
            "cond_dim": cond_dim,
            "kernel_size": config.kernel_size,
            "n_groups": config.n_groups,
            "use_film_scale_modulation": config.use_film_scale_modulation,
        }
        self.down_modules = nn.ModuleList([])
        for ind, (dim_in, dim_out) in enumerate(in_out):
            is_last = ind >= (len(in_out) - 1)
            self.down_modules.append(
                nn.ModuleList(
                    [
                        DiffusionConditionalResidualBlock1d(dim_in, dim_out, **common_res_block_kwargs),
                        DiffusionConditionalResidualBlock1d(dim_out, dim_out, **common_res_block_kwargs),
                        # Downsample as long as it is not the last block.
                        nn.Conv1d(dim_out, dim_out, 3, 2, 1) if not is_last else nn.Identity(),
                    ]
                )
            )

        # Processing in the middle of the auto-encoder.
        self.mid_modules = nn.ModuleList(
            [
                DiffusionConditionalResidualBlock1d(
                    config.down_dims[-1], config.down_dims[-1], **common_res_block_kwargs
                ),
                DiffusionConditionalResidualBlock1d(
                    config.down_dims[-1], config.down_dims[-1], **common_res_block_kwargs
                ),
            ]
        )

        # Unet decoder.
        self.up_modules = nn.ModuleList([])
        for ind, (dim_out, dim_in) in enumerate(reversed(in_out[1:])):
            is_last = ind >= (len(in_out) - 1)
            self.up_modules.append(
                nn.ModuleList(
                    [
                        # dim_in * 2, because it takes the encoder's skip connection as well
                        DiffusionConditionalResidualBlock1d(dim_in * 2, dim_out, **common_res_block_kwargs),
                        DiffusionConditionalResidualBlock1d(dim_out, dim_out, **common_res_block_kwargs),
                        # Upsample as long as it is not the last block.
                        nn.ConvTranspose1d(dim_out, dim_out, 4, 2, 1) if not is_last else nn.Identity(),
                    ]
                )
            )

        self.final_conv = nn.Sequential(
            DiffusionConv1dBlock(config.down_dims[0], config.down_dims[0], kernel_size=config.kernel_size),
            nn.Conv1d(config.down_dims[0], config.output_shapes["action"][0], 1),
        )

    def forward(self, x: Tensor, timestep: Tensor | int, global_cond=None) -> Tensor:
        """
        Args:
            x: (B, T, input_dim) tensor for input to the Unet.
            timestep: (B,) tensor of (timestep_we_are_denoising_from - 1).
            global_cond: (B, global_cond_dim)
            output: (B, T, input_dim)
        Returns:
            (B, T, input_dim) diffusion model prediction.
        """
        # For 1D convolutions we'll need feature dimension first.
        x = einops.rearrange(x, "b t d -> b d t")

        timesteps_embed = self.diffusion_step_encoder(timestep)

        # If there is a global conditioning feature, concatenate it to the timestep embedding.
        if global_cond is not None:
            global_feature = torch.cat([timesteps_embed, global_cond], axis=-1)
        else:
            global_feature = timesteps_embed

        # Run encoder, keeping track of skip features to pass to the decoder.
        encoder_skip_features: list[Tensor] = []
        for resnet, resnet2, downsample in self.down_modules:
            x = resnet(x, global_feature)
            x = resnet2(x, global_feature)
            encoder_skip_features.append(x)
            x = downsample(x)

        for mid_module in self.mid_modules:
            x = mid_module(x, global_feature)

        # Run decoder, using the skip features from the encoder.
        for resnet, resnet2, upsample in self.up_modules:
            x = torch.cat((x, encoder_skip_features.pop()), dim=1)
            x = resnet(x, global_feature)
            x = resnet2(x, global_feature)
            x = upsample(x)

        x = self.final_conv(x)

        x = einops.rearrange(x, "b d t -> b t d")
        return x


class DiffusionConditionalResidualBlock1d(nn.Module):
    """ResNet style 1D convolutional block with FiLM modulation for conditioning."""

    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        cond_dim: int,
        kernel_size: int = 3,
        n_groups: int = 8,
        # Set to True to do scale modulation with FiLM as well as bias modulation (defaults to False meaning
        # FiLM just modulates bias).
        use_film_scale_modulation: bool = False,
    ):
        super().__init__()

        self.use_film_scale_modulation = use_film_scale_modulation
        self.out_channels = out_channels

        self.conv1 = DiffusionConv1dBlock(in_channels, out_channels, kernel_size, n_groups=n_groups)

        # FiLM modulation (https://arxiv.org/abs/1709.07871) outputs per-channel bias and (maybe) scale.
        cond_channels = out_channels * 2 if use_film_scale_modulation else out_channels
        self.cond_encoder = nn.Sequential(nn.Mish(), nn.Linear(cond_dim, cond_channels))

        self.conv2 = DiffusionConv1dBlock(out_channels, out_channels, kernel_size, n_groups=n_groups)

        # A final convolution for dimension matching the residual (if needed).
        self.residual_conv = (
            nn.Conv1d(in_channels, out_channels, 1) if in_channels != out_channels else nn.Identity()
        )

    def forward(self, x: Tensor, cond: Tensor) -> Tensor:
        """
        Args:
            x: (B, in_channels, T)
            cond: (B, cond_dim)
        Returns:
            (B, out_channels, T)
        """
        out = self.conv1(x)

        # Get condition embedding. Unsqueeze for broadcasting to `out`, resulting in (B, out_channels, 1).
        cond_embed = self.cond_encoder(cond).unsqueeze(-1)
        if self.use_film_scale_modulation:
            # Treat the embedding as a list of scales and biases.
            scale = cond_embed[:, : self.out_channels]
            bias = cond_embed[:, self.out_channels :]
            out = scale * out + bias
        else:
            # Treat the embedding as biases.
            out = out + cond_embed

        out = self.conv2(out)
        out = out + self.residual_conv(x)
        return out
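The FiLM conditioning used by `DiffusionConditionalResidualBlock1d` above reduces to a few lines: the conditioning vector is projected to one scale and one bias per channel, which then modulate a (B, C, T) feature map. The following is a simplified, standalone sketch with arbitrary layer sizes, not the repository's class itself:

```python
import torch
from torch import nn


class FiLMBlock(nn.Module):
    """Minimal FiLM conditioning: a conditioning vector yields a per-channel scale and bias."""

    def __init__(self, channels: int, cond_dim: int):
        super().__init__()
        self.channels = channels
        # Project the conditioning vector to one scale and one bias per channel.
        self.cond_encoder = nn.Sequential(nn.Mish(), nn.Linear(cond_dim, channels * 2))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T) feature map, cond: (B, cond_dim) conditioning vector.
        cond_embed = self.cond_encoder(cond).unsqueeze(-1)  # (B, 2C, 1) for broadcasting
        scale = cond_embed[:, : self.channels]
        bias = cond_embed[:, self.channels :]
        return scale * x + bias


x = torch.randn(2, 64, 16)  # (batch, channels, action horizon) -- sizes chosen only for illustration
cond = torch.randn(2, 128)  # e.g. timestep embedding concatenated with observation features
print(FiLMBlock(64, 128)(x, cond).shape)  # torch.Size([2, 64, 16])
```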
lerobot/lerobot/common/policies/diffusion/modeling_diffusion.py/0
{ "file_path": "lerobot/lerobot/common/policies/diffusion/modeling_diffusion.py", "repo_id": "lerobot", "token_count": 11294 }
278
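The diffusion timestep conditioning in the file above starts from `DiffusionSinusoidalPosEmb`, which maps each timestep to a `dim`-dimensional vector of sines and cosines at geometrically spaced frequencies. A minimal functional sketch of the same computation, with shapes chosen only for illustration:

```python
import math

import torch


def sinusoidal_pos_emb(x: torch.Tensor, dim: int) -> torch.Tensor:
    """Map a (B,) tensor of timesteps to (B, dim) sinusoidal embeddings."""
    half_dim = dim // 2
    emb = math.log(10000) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim) * -emb)   # geometrically spaced frequencies
    emb = x.unsqueeze(-1) * emb.unsqueeze(0)         # (B, half_dim)
    return torch.cat((emb.sin(), emb.cos()), dim=-1)  # (B, dim)


timesteps = torch.arange(4)  # e.g. denoising timesteps 0..3
print(sinusoidal_pos_emb(timesteps, 128).shape)  # torch.Size([4, 128])
```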
# @package _global_

seed: 1
dataset_repo_id: lerobot/xarm_lift_medium

training:
  offline_steps: 25000
  online_steps: 25000
  eval_freq: 5000
  online_steps_between_rollouts: 1
  online_sampling_ratio: 0.5
  online_env_seed: 10000

  batch_size: 256
  grad_clip_norm: 10.0
  lr: 3e-4

  delta_timestamps:
    observation.image: "[i / ${fps} for i in range(${policy.horizon} + 1)]"
    observation.state: "[i / ${fps} for i in range(${policy.horizon} + 1)]"
    action: "[i / ${fps} for i in range(${policy.horizon})]"
    next.reward: "[i / ${fps} for i in range(${policy.horizon})]"

policy:
  name: tdmpc

  pretrained_model_path:

  # Input / output structure.
  n_action_repeats: 2
  horizon: 5

  input_shapes:
    # TODO(rcadene, alexander-soare): add variables for height and width from the dataset/env?
    observation.image: [3, 84, 84]
    observation.state: ["${env.state_dim}"]
  output_shapes:
    action: ["${env.action_dim}"]

  # Normalization / Unnormalization
  input_normalization_modes: null
  output_normalization_modes:
    action: min_max

  # Architecture / modeling.
  # Neural networks.
  image_encoder_hidden_dim: 32
  state_encoder_hidden_dim: 256
  latent_dim: 50
  q_ensemble_size: 5
  mlp_dim: 512
  # Reinforcement learning.
  discount: 0.9

  # Inference.
  use_mpc: false
  cem_iterations: 6
  max_std: 2.0
  min_std: 0.05
  n_gaussian_samples: 512
  n_pi_samples: 51
  uncertainty_regularizer_coeff: 1.0
  n_elites: 50
  elite_weighting_temperature: 0.5
  gaussian_mean_momentum: 0.1

  # Training and loss computation.
  max_random_shift_ratio: 0.0476
  # Loss coefficients.
  reward_coeff: 0.5
  expectile_weight: 0.9
  value_coeff: 0.1
  consistency_coeff: 20.0
  advantage_scaling: 3.0
  pi_coeff: 0.5
  temporal_decay_coeff: 0.5
  # Target model.
  target_model_momentum: 0.995
lerobot/lerobot/configs/policy/tdmpc.yaml/0
{ "file_path": "lerobot/lerobot/configs/policy/tdmpc.yaml", "repo_id": "lerobot", "token_count": 722 }
279
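The quoted `delta_timestamps` entries in the config above are Python list comprehensions: at config-resolution time `${fps}` and `${policy.horizon}` are interpolated and the strings are evaluated into lists of time offsets (in seconds) relative to the current frame. A rough illustration of what they resolve to, with `fps` set to an assumed example value since it is defined elsewhere in the config tree:

```python
fps = 15     # assumed value for illustration only; the real one comes from the env config
horizon = 5  # policy.horizon above

observation_offsets = [i / fps for i in range(horizon + 1)]  # horizon + 1 = 6 frames
action_offsets = [i / fps for i in range(horizon)]           # 5 action / reward steps

print(observation_offsets)  # [0.0, 0.066..., 0.133..., 0.2, 0.266..., 0.333...]
print(action_offsets)       # [0.0, 0.066..., 0.133..., 0.2, 0.266...]
```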
{ "_data_files": [ { "filename": "data-00000-of-00001.arrow" } ], "_fingerprint": "243b01eb8a4b184e", "_format_columns": null, "_format_kwargs": {}, "_format_type": null, "_output_all_columns": false, "_split": null }
lerobot/tests/data/lerobot/aloha_sim_transfer_cube_human/train/state.json/0
{ "file_path": "lerobot/tests/data/lerobot/aloha_sim_transfer_cube_human/train/state.json", "repo_id": "lerobot", "token_count": 110 }
280
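This `state.json` is the bookkeeping file that 🤗 Datasets writes when a dataset is saved to disk: `_data_files` points at the Arrow shard(s) and `_fingerprint` identifies the exact state of the dataset. Assuming the surrounding directory is a complete saved dataset (only `state.json` is shown in this row), it would typically be reloaded along these lines:

```python
from datasets import load_from_disk

# Illustrative path: the directory that contains state.json and the .arrow shard.
dataset = load_from_disk("tests/data/lerobot/aloha_sim_transfer_cube_human/train")
print(dataset)
```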
#ifdef _MULTIARRAYMODULE typedef struct { PyObject_HEAD npy_bool obval; } PyBoolScalarObject; #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyArrayMapIter_Type; extern NPY_NO_EXPORT PyTypeObject PyArrayNeighborhoodIter_Type; extern NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[2]; #else NPY_NO_EXPORT PyTypeObject PyArrayMapIter_Type; NPY_NO_EXPORT PyTypeObject PyArrayNeighborhoodIter_Type; NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[2]; #endif NPY_NO_EXPORT unsigned int PyArray_GetNDArrayCVersion \ (void); #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyBigArray_Type; #else NPY_NO_EXPORT PyTypeObject PyBigArray_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyArray_Type; #else NPY_NO_EXPORT PyTypeObject PyArray_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyArrayDescr_Type; #else NPY_NO_EXPORT PyTypeObject PyArrayDescr_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyArrayFlags_Type; #else NPY_NO_EXPORT PyTypeObject PyArrayFlags_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyArrayIter_Type; #else NPY_NO_EXPORT PyTypeObject PyArrayIter_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyArrayMultiIter_Type; #else NPY_NO_EXPORT PyTypeObject PyArrayMultiIter_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT int NPY_NUMUSERTYPES; #else NPY_NO_EXPORT int NPY_NUMUSERTYPES; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyBoolArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyBoolArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[2]; #else NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[2]; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyGenericArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyGenericArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyNumberArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyNumberArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyIntegerArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyIntegerArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PySignedIntegerArrType_Type; #else NPY_NO_EXPORT PyTypeObject PySignedIntegerArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyUnsignedIntegerArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyUnsignedIntegerArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyInexactArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyInexactArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyFloatingArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyFloatingArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyComplexFloatingArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyComplexFloatingArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyFlexibleArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyFlexibleArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyCharacterArrType_Type; #else NPY_NO_EXPORT PyTypeObject 
PyCharacterArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyByteArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyByteArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyShortArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyShortArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyIntArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyIntArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyLongArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyLongArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyLongLongArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyLongLongArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyUByteArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyUByteArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyUShortArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyUShortArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyUIntArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyUIntArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyULongArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyULongArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyULongLongArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyULongLongArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyFloatArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyFloatArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyDoubleArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyDoubleArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyLongDoubleArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyLongDoubleArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyCFloatArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyCFloatArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyCDoubleArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyCDoubleArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyCLongDoubleArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyCLongDoubleArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyObjectArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyObjectArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyStringArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyStringArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyUnicodeArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyUnicodeArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyVoidArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyVoidArrType_Type; #endif NPY_NO_EXPORT int PyArray_SetNumericOps \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_GetNumericOps \ (void); NPY_NO_EXPORT int PyArray_INCREF \ (PyArrayObject *); NPY_NO_EXPORT int PyArray_XDECREF \ (PyArrayObject *); NPY_NO_EXPORT void PyArray_SetStringFunction \ (PyObject *, int); NPY_NO_EXPORT PyArray_Descr * PyArray_DescrFromType \ (int); NPY_NO_EXPORT PyObject * 
PyArray_TypeObjectFromType \ (int); NPY_NO_EXPORT char * PyArray_Zero \ (PyArrayObject *); NPY_NO_EXPORT char * PyArray_One \ (PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_CastToType \ (PyArrayObject *, PyArray_Descr *, int); NPY_NO_EXPORT int PyArray_CastTo \ (PyArrayObject *, PyArrayObject *); NPY_NO_EXPORT int PyArray_CastAnyTo \ (PyArrayObject *, PyArrayObject *); NPY_NO_EXPORT int PyArray_CanCastSafely \ (int, int); NPY_NO_EXPORT npy_bool PyArray_CanCastTo \ (PyArray_Descr *, PyArray_Descr *); NPY_NO_EXPORT int PyArray_ObjectType \ (PyObject *, int); NPY_NO_EXPORT PyArray_Descr * PyArray_DescrFromObject \ (PyObject *, PyArray_Descr *); NPY_NO_EXPORT PyArrayObject ** PyArray_ConvertToCommonType \ (PyObject *, int *); NPY_NO_EXPORT PyArray_Descr * PyArray_DescrFromScalar \ (PyObject *); NPY_NO_EXPORT PyArray_Descr * PyArray_DescrFromTypeObject \ (PyObject *); NPY_NO_EXPORT npy_intp PyArray_Size \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_Scalar \ (void *, PyArray_Descr *, PyObject *); NPY_NO_EXPORT PyObject * PyArray_FromScalar \ (PyObject *, PyArray_Descr *); NPY_NO_EXPORT void PyArray_ScalarAsCtype \ (PyObject *, void *); NPY_NO_EXPORT int PyArray_CastScalarToCtype \ (PyObject *, void *, PyArray_Descr *); NPY_NO_EXPORT int PyArray_CastScalarDirect \ (PyObject *, PyArray_Descr *, void *, int); NPY_NO_EXPORT PyObject * PyArray_ScalarFromObject \ (PyObject *); NPY_NO_EXPORT PyArray_VectorUnaryFunc * PyArray_GetCastFunc \ (PyArray_Descr *, int); NPY_NO_EXPORT PyObject * PyArray_FromDims \ (int, int *, int); NPY_NO_EXPORT PyObject * PyArray_FromDimsAndDataAndDescr \ (int, int *, PyArray_Descr *, char *); NPY_NO_EXPORT PyObject * PyArray_FromAny \ (PyObject *, PyArray_Descr *, int, int, int, PyObject *); NPY_NO_EXPORT PyObject * PyArray_EnsureArray \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_EnsureAnyArray \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_FromFile \ (FILE *, PyArray_Descr *, npy_intp, char *); NPY_NO_EXPORT PyObject * PyArray_FromString \ (char *, npy_intp, PyArray_Descr *, npy_intp, char *); NPY_NO_EXPORT PyObject * PyArray_FromBuffer \ (PyObject *, PyArray_Descr *, npy_intp, npy_intp); NPY_NO_EXPORT PyObject * PyArray_FromIter \ (PyObject *, PyArray_Descr *, npy_intp); NPY_NO_EXPORT PyObject * PyArray_Return \ (PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_GetField \ (PyArrayObject *, PyArray_Descr *, int); NPY_NO_EXPORT int PyArray_SetField \ (PyArrayObject *, PyArray_Descr *, int, PyObject *); NPY_NO_EXPORT PyObject * PyArray_Byteswap \ (PyArrayObject *, npy_bool); NPY_NO_EXPORT PyObject * PyArray_Resize \ (PyArrayObject *, PyArray_Dims *, int, NPY_ORDER); NPY_NO_EXPORT int PyArray_MoveInto \ (PyArrayObject *, PyArrayObject *); NPY_NO_EXPORT int PyArray_CopyInto \ (PyArrayObject *, PyArrayObject *); NPY_NO_EXPORT int PyArray_CopyAnyInto \ (PyArrayObject *, PyArrayObject *); NPY_NO_EXPORT int PyArray_CopyObject \ (PyArrayObject *, PyObject *); NPY_NO_EXPORT PyObject * PyArray_NewCopy \ (PyArrayObject *, NPY_ORDER); NPY_NO_EXPORT PyObject * PyArray_ToList \ (PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_ToString \ (PyArrayObject *, NPY_ORDER); NPY_NO_EXPORT int PyArray_ToFile \ (PyArrayObject *, FILE *, char *, char *); NPY_NO_EXPORT int PyArray_Dump \ (PyObject *, PyObject *, int); NPY_NO_EXPORT PyObject * PyArray_Dumps \ (PyObject *, int); NPY_NO_EXPORT int PyArray_ValidType \ (int); NPY_NO_EXPORT void PyArray_UpdateFlags \ (PyArrayObject *, int); NPY_NO_EXPORT PyObject * PyArray_New \ (PyTypeObject *, int, npy_intp *, int, npy_intp *, void *, 
int, int, PyObject *); NPY_NO_EXPORT PyObject * PyArray_NewFromDescr \ (PyTypeObject *, PyArray_Descr *, int, npy_intp *, npy_intp *, void *, int, PyObject *); NPY_NO_EXPORT PyArray_Descr * PyArray_DescrNew \ (PyArray_Descr *); NPY_NO_EXPORT PyArray_Descr * PyArray_DescrNewFromType \ (int); NPY_NO_EXPORT double PyArray_GetPriority \ (PyObject *, double); NPY_NO_EXPORT PyObject * PyArray_IterNew \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_MultiIterNew \ (int, ...); NPY_NO_EXPORT int PyArray_PyIntAsInt \ (PyObject *); NPY_NO_EXPORT npy_intp PyArray_PyIntAsIntp \ (PyObject *); NPY_NO_EXPORT int PyArray_Broadcast \ (PyArrayMultiIterObject *); NPY_NO_EXPORT void PyArray_FillObjectArray \ (PyArrayObject *, PyObject *); NPY_NO_EXPORT int PyArray_FillWithScalar \ (PyArrayObject *, PyObject *); NPY_NO_EXPORT npy_bool PyArray_CheckStrides \ (int, int, npy_intp, npy_intp, npy_intp *, npy_intp *); NPY_NO_EXPORT PyArray_Descr * PyArray_DescrNewByteorder \ (PyArray_Descr *, char); NPY_NO_EXPORT PyObject * PyArray_IterAllButAxis \ (PyObject *, int *); NPY_NO_EXPORT PyObject * PyArray_CheckFromAny \ (PyObject *, PyArray_Descr *, int, int, int, PyObject *); NPY_NO_EXPORT PyObject * PyArray_FromArray \ (PyArrayObject *, PyArray_Descr *, int); NPY_NO_EXPORT PyObject * PyArray_FromInterface \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_FromStructInterface \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_FromArrayAttr \ (PyObject *, PyArray_Descr *, PyObject *); NPY_NO_EXPORT NPY_SCALARKIND PyArray_ScalarKind \ (int, PyArrayObject **); NPY_NO_EXPORT int PyArray_CanCoerceScalar \ (int, int, NPY_SCALARKIND); NPY_NO_EXPORT PyObject * PyArray_NewFlagsObject \ (PyObject *); NPY_NO_EXPORT npy_bool PyArray_CanCastScalar \ (PyTypeObject *, PyTypeObject *); NPY_NO_EXPORT int PyArray_CompareUCS4 \ (npy_ucs4 *, npy_ucs4 *, size_t); NPY_NO_EXPORT int PyArray_RemoveSmallest \ (PyArrayMultiIterObject *); NPY_NO_EXPORT int PyArray_ElementStrides \ (PyObject *); NPY_NO_EXPORT void PyArray_Item_INCREF \ (char *, PyArray_Descr *); NPY_NO_EXPORT void PyArray_Item_XDECREF \ (char *, PyArray_Descr *); NPY_NO_EXPORT PyObject * PyArray_FieldNames \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_Transpose \ (PyArrayObject *, PyArray_Dims *); NPY_NO_EXPORT PyObject * PyArray_TakeFrom \ (PyArrayObject *, PyObject *, int, PyArrayObject *, NPY_CLIPMODE); NPY_NO_EXPORT PyObject * PyArray_PutTo \ (PyArrayObject *, PyObject*, PyObject *, NPY_CLIPMODE); NPY_NO_EXPORT PyObject * PyArray_PutMask \ (PyArrayObject *, PyObject*, PyObject*); NPY_NO_EXPORT PyObject * PyArray_Repeat \ (PyArrayObject *, PyObject *, int); NPY_NO_EXPORT PyObject * PyArray_Choose \ (PyArrayObject *, PyObject *, PyArrayObject *, NPY_CLIPMODE); NPY_NO_EXPORT int PyArray_Sort \ (PyArrayObject *, int, NPY_SORTKIND); NPY_NO_EXPORT PyObject * PyArray_ArgSort \ (PyArrayObject *, int, NPY_SORTKIND); NPY_NO_EXPORT PyObject * PyArray_SearchSorted \ (PyArrayObject *, PyObject *, NPY_SEARCHSIDE, PyObject *); NPY_NO_EXPORT PyObject * PyArray_ArgMax \ (PyArrayObject *, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_ArgMin \ (PyArrayObject *, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Reshape \ (PyArrayObject *, PyObject *); NPY_NO_EXPORT PyObject * PyArray_Newshape \ (PyArrayObject *, PyArray_Dims *, NPY_ORDER); NPY_NO_EXPORT PyObject * PyArray_Squeeze \ (PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_View \ (PyArrayObject *, PyArray_Descr *, PyTypeObject *); NPY_NO_EXPORT PyObject * PyArray_SwapAxes \ (PyArrayObject *, int, int); NPY_NO_EXPORT 
PyObject * PyArray_Max \ (PyArrayObject *, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Min \ (PyArrayObject *, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Ptp \ (PyArrayObject *, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Mean \ (PyArrayObject *, int, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Trace \ (PyArrayObject *, int, int, int, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Diagonal \ (PyArrayObject *, int, int, int); NPY_NO_EXPORT PyObject * PyArray_Clip \ (PyArrayObject *, PyObject *, PyObject *, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Conjugate \ (PyArrayObject *, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Nonzero \ (PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Std \ (PyArrayObject *, int, int, PyArrayObject *, int); NPY_NO_EXPORT PyObject * PyArray_Sum \ (PyArrayObject *, int, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_CumSum \ (PyArrayObject *, int, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Prod \ (PyArrayObject *, int, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_CumProd \ (PyArrayObject *, int, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_All \ (PyArrayObject *, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Any \ (PyArrayObject *, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Compress \ (PyArrayObject *, PyObject *, int, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_Flatten \ (PyArrayObject *, NPY_ORDER); NPY_NO_EXPORT PyObject * PyArray_Ravel \ (PyArrayObject *, NPY_ORDER); NPY_NO_EXPORT npy_intp PyArray_MultiplyList \ (npy_intp *, int); NPY_NO_EXPORT int PyArray_MultiplyIntList \ (int *, int); NPY_NO_EXPORT void * PyArray_GetPtr \ (PyArrayObject *, npy_intp*); NPY_NO_EXPORT int PyArray_CompareLists \ (npy_intp *, npy_intp *, int); NPY_NO_EXPORT int PyArray_AsCArray \ (PyObject **, void *, npy_intp *, int, PyArray_Descr*); NPY_NO_EXPORT int PyArray_As1D \ (PyObject **, char **, int *, int); NPY_NO_EXPORT int PyArray_As2D \ (PyObject **, char ***, int *, int *, int); NPY_NO_EXPORT int PyArray_Free \ (PyObject *, void *); NPY_NO_EXPORT int PyArray_Converter \ (PyObject *, PyObject **); NPY_NO_EXPORT int PyArray_IntpFromSequence \ (PyObject *, npy_intp *, int); NPY_NO_EXPORT PyObject * PyArray_Concatenate \ (PyObject *, int); NPY_NO_EXPORT PyObject * PyArray_InnerProduct \ (PyObject *, PyObject *); NPY_NO_EXPORT PyObject * PyArray_MatrixProduct \ (PyObject *, PyObject *); NPY_NO_EXPORT PyObject * PyArray_CopyAndTranspose \ (PyObject *); NPY_NO_EXPORT PyObject * PyArray_Correlate \ (PyObject *, PyObject *, int); NPY_NO_EXPORT int PyArray_TypestrConvert \ (int, int); NPY_NO_EXPORT int PyArray_DescrConverter \ (PyObject *, PyArray_Descr **); NPY_NO_EXPORT int PyArray_DescrConverter2 \ (PyObject *, PyArray_Descr **); NPY_NO_EXPORT int PyArray_IntpConverter \ (PyObject *, PyArray_Dims *); NPY_NO_EXPORT int PyArray_BufferConverter \ (PyObject *, PyArray_Chunk *); NPY_NO_EXPORT int PyArray_AxisConverter \ (PyObject *, int *); NPY_NO_EXPORT int PyArray_BoolConverter \ (PyObject *, npy_bool *); NPY_NO_EXPORT int PyArray_ByteorderConverter \ (PyObject *, char *); NPY_NO_EXPORT int PyArray_OrderConverter \ (PyObject *, NPY_ORDER *); NPY_NO_EXPORT unsigned char PyArray_EquivTypes \ (PyArray_Descr *, PyArray_Descr *); NPY_NO_EXPORT PyObject * PyArray_Zeros \ (int, npy_intp *, PyArray_Descr *, int); NPY_NO_EXPORT PyObject * PyArray_Empty \ (int, npy_intp *, PyArray_Descr *, int); NPY_NO_EXPORT PyObject * PyArray_Where \ (PyObject *, 
PyObject *, PyObject *); NPY_NO_EXPORT PyObject * PyArray_Arange \ (double, double, double, int); NPY_NO_EXPORT PyObject * PyArray_ArangeObj \ (PyObject *, PyObject *, PyObject *, PyArray_Descr *); NPY_NO_EXPORT int PyArray_SortkindConverter \ (PyObject *, NPY_SORTKIND *); NPY_NO_EXPORT PyObject * PyArray_LexSort \ (PyObject *, int); NPY_NO_EXPORT PyObject * PyArray_Round \ (PyArrayObject *, int, PyArrayObject *); NPY_NO_EXPORT unsigned char PyArray_EquivTypenums \ (int, int); NPY_NO_EXPORT int PyArray_RegisterDataType \ (PyArray_Descr *); NPY_NO_EXPORT int PyArray_RegisterCastFunc \ (PyArray_Descr *, int, PyArray_VectorUnaryFunc *); NPY_NO_EXPORT int PyArray_RegisterCanCast \ (PyArray_Descr *, int, NPY_SCALARKIND); NPY_NO_EXPORT void PyArray_InitArrFuncs \ (PyArray_ArrFuncs *); NPY_NO_EXPORT PyObject * PyArray_IntTupleFromIntp \ (int, npy_intp *); NPY_NO_EXPORT int PyArray_TypeNumFromName \ (char *); NPY_NO_EXPORT int PyArray_ClipmodeConverter \ (PyObject *, NPY_CLIPMODE *); NPY_NO_EXPORT int PyArray_OutputConverter \ (PyObject *, PyArrayObject **); NPY_NO_EXPORT PyObject * PyArray_BroadcastToShape \ (PyObject *, npy_intp *, int); NPY_NO_EXPORT void _PyArray_SigintHandler \ (int); NPY_NO_EXPORT void* _PyArray_GetSigintBuf \ (void); NPY_NO_EXPORT int PyArray_DescrAlignConverter \ (PyObject *, PyArray_Descr **); NPY_NO_EXPORT int PyArray_DescrAlignConverter2 \ (PyObject *, PyArray_Descr **); NPY_NO_EXPORT int PyArray_SearchsideConverter \ (PyObject *, void *); NPY_NO_EXPORT PyObject * PyArray_CheckAxis \ (PyArrayObject *, int *, int); NPY_NO_EXPORT npy_intp PyArray_OverflowMultiplyList \ (npy_intp *, int); NPY_NO_EXPORT int PyArray_CompareString \ (char *, char *, size_t); NPY_NO_EXPORT PyObject * PyArray_MultiIterFromObjects \ (PyObject **, int, int, ...); NPY_NO_EXPORT int PyArray_GetEndianness \ (void); NPY_NO_EXPORT unsigned int PyArray_GetNDArrayCFeatureVersion \ (void); NPY_NO_EXPORT PyObject * PyArray_Correlate2 \ (PyObject *, PyObject *, int); NPY_NO_EXPORT PyObject* PyArray_NeighborhoodIterNew \ (PyArrayIterObject *, npy_intp *, int, PyArrayObject*); #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyTimeIntegerArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyTimeIntegerArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyDatetimeArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyDatetimeArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyTimedeltaArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyTimedeltaArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject PyHalfArrType_Type; #else NPY_NO_EXPORT PyTypeObject PyHalfArrType_Type; #endif #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT PyTypeObject NpyIter_Type; #else NPY_NO_EXPORT PyTypeObject NpyIter_Type; #endif NPY_NO_EXPORT void PyArray_SetDatetimeParseFunction \ (PyObject *); NPY_NO_EXPORT void PyArray_DatetimeToDatetimeStruct \ (npy_datetime, NPY_DATETIMEUNIT, npy_datetimestruct *); NPY_NO_EXPORT void PyArray_TimedeltaToTimedeltaStruct \ (npy_timedelta, NPY_DATETIMEUNIT, npy_timedeltastruct *); NPY_NO_EXPORT npy_datetime PyArray_DatetimeStructToDatetime \ (NPY_DATETIMEUNIT, npy_datetimestruct *); NPY_NO_EXPORT npy_datetime PyArray_TimedeltaStructToTimedelta \ (NPY_DATETIMEUNIT, npy_timedeltastruct *); NPY_NO_EXPORT NpyIter * NpyIter_New \ (PyArrayObject *, npy_uint32, NPY_ORDER, NPY_CASTING, PyArray_Descr*); NPY_NO_EXPORT NpyIter * NpyIter_MultiNew \ (int, PyArrayObject 
**, npy_uint32, NPY_ORDER, NPY_CASTING, npy_uint32 *, PyArray_Descr **); NPY_NO_EXPORT NpyIter * NpyIter_AdvancedNew \ (int, PyArrayObject **, npy_uint32, NPY_ORDER, NPY_CASTING, npy_uint32 *, PyArray_Descr **, int, int **, npy_intp *, npy_intp); NPY_NO_EXPORT NpyIter * NpyIter_Copy \ (NpyIter *); NPY_NO_EXPORT int NpyIter_Deallocate \ (NpyIter *); NPY_NO_EXPORT npy_bool NpyIter_HasDelayedBufAlloc \ (NpyIter *); NPY_NO_EXPORT npy_bool NpyIter_HasExternalLoop \ (NpyIter *); NPY_NO_EXPORT int NpyIter_EnableExternalLoop \ (NpyIter *); NPY_NO_EXPORT npy_intp * NpyIter_GetInnerStrideArray \ (NpyIter *); NPY_NO_EXPORT npy_intp * NpyIter_GetInnerLoopSizePtr \ (NpyIter *); NPY_NO_EXPORT int NpyIter_Reset \ (NpyIter *, char **); NPY_NO_EXPORT int NpyIter_ResetBasePointers \ (NpyIter *, char **, char **); NPY_NO_EXPORT int NpyIter_ResetToIterIndexRange \ (NpyIter *, npy_intp, npy_intp, char **); NPY_NO_EXPORT int NpyIter_GetNDim \ (NpyIter *); NPY_NO_EXPORT int NpyIter_GetNOp \ (NpyIter *); NPY_NO_EXPORT NpyIter_IterNextFunc * NpyIter_GetIterNext \ (NpyIter *, char **); NPY_NO_EXPORT npy_intp NpyIter_GetIterSize \ (NpyIter *); NPY_NO_EXPORT void NpyIter_GetIterIndexRange \ (NpyIter *, npy_intp *, npy_intp *); NPY_NO_EXPORT npy_intp NpyIter_GetIterIndex \ (NpyIter *); NPY_NO_EXPORT int NpyIter_GotoIterIndex \ (NpyIter *, npy_intp); NPY_NO_EXPORT npy_bool NpyIter_HasMultiIndex \ (NpyIter *); NPY_NO_EXPORT int NpyIter_GetShape \ (NpyIter *, npy_intp *); NPY_NO_EXPORT NpyIter_GetMultiIndexFunc * NpyIter_GetGetMultiIndex \ (NpyIter *, char **); NPY_NO_EXPORT int NpyIter_GotoMultiIndex \ (NpyIter *, npy_intp *); NPY_NO_EXPORT int NpyIter_RemoveMultiIndex \ (NpyIter *); NPY_NO_EXPORT npy_bool NpyIter_HasIndex \ (NpyIter *); NPY_NO_EXPORT npy_bool NpyIter_IsBuffered \ (NpyIter *); NPY_NO_EXPORT npy_bool NpyIter_IsGrowInner \ (NpyIter *); NPY_NO_EXPORT npy_intp NpyIter_GetBufferSize \ (NpyIter *); NPY_NO_EXPORT npy_intp * NpyIter_GetIndexPtr \ (NpyIter *); NPY_NO_EXPORT int NpyIter_GotoIndex \ (NpyIter *, npy_intp); NPY_NO_EXPORT char ** NpyIter_GetDataPtrArray \ (NpyIter *); NPY_NO_EXPORT PyArray_Descr ** NpyIter_GetDescrArray \ (NpyIter *); NPY_NO_EXPORT PyArrayObject ** NpyIter_GetOperandArray \ (NpyIter *); NPY_NO_EXPORT PyArrayObject * NpyIter_GetIterView \ (NpyIter *, npy_intp); NPY_NO_EXPORT void NpyIter_GetReadFlags \ (NpyIter *, char *); NPY_NO_EXPORT void NpyIter_GetWriteFlags \ (NpyIter *, char *); NPY_NO_EXPORT void NpyIter_DebugPrint \ (NpyIter *); NPY_NO_EXPORT npy_bool NpyIter_IterationNeedsAPI \ (NpyIter *); NPY_NO_EXPORT void NpyIter_GetInnerFixedStrideArray \ (NpyIter *, npy_intp *); NPY_NO_EXPORT int NpyIter_RemoveAxis \ (NpyIter *, int); NPY_NO_EXPORT npy_intp * NpyIter_GetAxisStrideArray \ (NpyIter *, int); NPY_NO_EXPORT npy_bool NpyIter_RequiresBuffering \ (NpyIter *); NPY_NO_EXPORT char ** NpyIter_GetInitialDataPtrArray \ (NpyIter *); NPY_NO_EXPORT int NpyIter_CreateCompatibleStrides \ (NpyIter *, npy_intp, npy_intp *); NPY_NO_EXPORT int PyArray_CastingConverter \ (PyObject *, NPY_CASTING *); NPY_NO_EXPORT npy_intp PyArray_CountNonzero \ (PyArrayObject *); NPY_NO_EXPORT PyArray_Descr * PyArray_PromoteTypes \ (PyArray_Descr *, PyArray_Descr *); NPY_NO_EXPORT PyArray_Descr * PyArray_MinScalarType \ (PyArrayObject *); NPY_NO_EXPORT PyArray_Descr * PyArray_ResultType \ (npy_intp, PyArrayObject **, npy_intp, PyArray_Descr **); NPY_NO_EXPORT npy_bool PyArray_CanCastArrayTo \ (PyArrayObject *, PyArray_Descr *, NPY_CASTING); NPY_NO_EXPORT npy_bool PyArray_CanCastTypeTo \ (PyArray_Descr *, 
PyArray_Descr *, NPY_CASTING); NPY_NO_EXPORT PyArrayObject * PyArray_EinsteinSum \ (char *, npy_intp, PyArrayObject **, PyArray_Descr *, NPY_ORDER, NPY_CASTING, PyArrayObject *); NPY_NO_EXPORT PyObject * PyArray_NewLikeArray \ (PyArrayObject *, NPY_ORDER, PyArray_Descr *, int); NPY_NO_EXPORT int PyArray_GetArrayParamsFromObject \ (PyObject *, PyArray_Descr *, npy_bool, PyArray_Descr **, int *, npy_intp *, PyArrayObject **, PyObject *); NPY_NO_EXPORT int PyArray_ConvertClipmodeSequence \ (PyObject *, NPY_CLIPMODE *, int); NPY_NO_EXPORT PyObject * PyArray_MatrixProduct2 \ (PyObject *, PyObject *, PyArrayObject*); NPY_NO_EXPORT npy_bool NpyIter_IsFirstVisit \ (NpyIter *, int); NPY_NO_EXPORT int PyArray_SetBaseObject \ (PyArrayObject *, PyObject *); NPY_NO_EXPORT void PyArray_CreateSortedStridePerm \ (int, npy_intp *, npy_stride_sort_item *); NPY_NO_EXPORT void PyArray_RemoveAxesInPlace \ (PyArrayObject *, npy_bool *); NPY_NO_EXPORT void PyArray_DebugPrint \ (PyArrayObject *); NPY_NO_EXPORT int PyArray_FailUnlessWriteable \ (PyArrayObject *, const char *); NPY_NO_EXPORT int PyArray_SetUpdateIfCopyBase \ (PyArrayObject *, PyArrayObject *); NPY_NO_EXPORT void * PyDataMem_NEW \ (size_t); NPY_NO_EXPORT void PyDataMem_FREE \ (void *); NPY_NO_EXPORT void * PyDataMem_RENEW \ (void *, size_t); NPY_NO_EXPORT PyDataMem_EventHookFunc * PyDataMem_SetEventHook \ (PyDataMem_EventHookFunc *, void *, void **); #ifdef NPY_ENABLE_SEPARATE_COMPILATION extern NPY_NO_EXPORT NPY_CASTING NPY_DEFAULT_ASSIGN_CASTING; #else NPY_NO_EXPORT NPY_CASTING NPY_DEFAULT_ASSIGN_CASTING; #endif #else #if defined(PY_ARRAY_UNIQUE_SYMBOL) #define PyArray_API PY_ARRAY_UNIQUE_SYMBOL #endif #if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY) extern void **PyArray_API; #else #if defined(PY_ARRAY_UNIQUE_SYMBOL) void **PyArray_API; #else static void **PyArray_API=NULL; #endif #endif #define PyArray_GetNDArrayCVersion \ (*(unsigned int (*)(void)) \ PyArray_API[0]) #define PyBigArray_Type (*(PyTypeObject *)PyArray_API[1]) #define PyArray_Type (*(PyTypeObject *)PyArray_API[2]) #define PyArrayDescr_Type (*(PyTypeObject *)PyArray_API[3]) #define PyArrayFlags_Type (*(PyTypeObject *)PyArray_API[4]) #define PyArrayIter_Type (*(PyTypeObject *)PyArray_API[5]) #define PyArrayMultiIter_Type (*(PyTypeObject *)PyArray_API[6]) #define NPY_NUMUSERTYPES (*(int *)PyArray_API[7]) #define PyBoolArrType_Type (*(PyTypeObject *)PyArray_API[8]) #define _PyArrayScalar_BoolValues ((PyBoolScalarObject *)PyArray_API[9]) #define PyGenericArrType_Type (*(PyTypeObject *)PyArray_API[10]) #define PyNumberArrType_Type (*(PyTypeObject *)PyArray_API[11]) #define PyIntegerArrType_Type (*(PyTypeObject *)PyArray_API[12]) #define PySignedIntegerArrType_Type (*(PyTypeObject *)PyArray_API[13]) #define PyUnsignedIntegerArrType_Type (*(PyTypeObject *)PyArray_API[14]) #define PyInexactArrType_Type (*(PyTypeObject *)PyArray_API[15]) #define PyFloatingArrType_Type (*(PyTypeObject *)PyArray_API[16]) #define PyComplexFloatingArrType_Type (*(PyTypeObject *)PyArray_API[17]) #define PyFlexibleArrType_Type (*(PyTypeObject *)PyArray_API[18]) #define PyCharacterArrType_Type (*(PyTypeObject *)PyArray_API[19]) #define PyByteArrType_Type (*(PyTypeObject *)PyArray_API[20]) #define PyShortArrType_Type (*(PyTypeObject *)PyArray_API[21]) #define PyIntArrType_Type (*(PyTypeObject *)PyArray_API[22]) #define PyLongArrType_Type (*(PyTypeObject *)PyArray_API[23]) #define PyLongLongArrType_Type (*(PyTypeObject *)PyArray_API[24]) #define PyUByteArrType_Type (*(PyTypeObject *)PyArray_API[25]) #define 
PyUShortArrType_Type (*(PyTypeObject *)PyArray_API[26]) #define PyUIntArrType_Type (*(PyTypeObject *)PyArray_API[27]) #define PyULongArrType_Type (*(PyTypeObject *)PyArray_API[28]) #define PyULongLongArrType_Type (*(PyTypeObject *)PyArray_API[29]) #define PyFloatArrType_Type (*(PyTypeObject *)PyArray_API[30]) #define PyDoubleArrType_Type (*(PyTypeObject *)PyArray_API[31]) #define PyLongDoubleArrType_Type (*(PyTypeObject *)PyArray_API[32]) #define PyCFloatArrType_Type (*(PyTypeObject *)PyArray_API[33]) #define PyCDoubleArrType_Type (*(PyTypeObject *)PyArray_API[34]) #define PyCLongDoubleArrType_Type (*(PyTypeObject *)PyArray_API[35]) #define PyObjectArrType_Type (*(PyTypeObject *)PyArray_API[36]) #define PyStringArrType_Type (*(PyTypeObject *)PyArray_API[37]) #define PyUnicodeArrType_Type (*(PyTypeObject *)PyArray_API[38]) #define PyVoidArrType_Type (*(PyTypeObject *)PyArray_API[39]) #define PyArray_SetNumericOps \ (*(int (*)(PyObject *)) \ PyArray_API[40]) #define PyArray_GetNumericOps \ (*(PyObject * (*)(void)) \ PyArray_API[41]) #define PyArray_INCREF \ (*(int (*)(PyArrayObject *)) \ PyArray_API[42]) #define PyArray_XDECREF \ (*(int (*)(PyArrayObject *)) \ PyArray_API[43]) #define PyArray_SetStringFunction \ (*(void (*)(PyObject *, int)) \ PyArray_API[44]) #define PyArray_DescrFromType \ (*(PyArray_Descr * (*)(int)) \ PyArray_API[45]) #define PyArray_TypeObjectFromType \ (*(PyObject * (*)(int)) \ PyArray_API[46]) #define PyArray_Zero \ (*(char * (*)(PyArrayObject *)) \ PyArray_API[47]) #define PyArray_One \ (*(char * (*)(PyArrayObject *)) \ PyArray_API[48]) #define PyArray_CastToType \ (*(PyObject * (*)(PyArrayObject *, PyArray_Descr *, int)) \ PyArray_API[49]) #define PyArray_CastTo \ (*(int (*)(PyArrayObject *, PyArrayObject *)) \ PyArray_API[50]) #define PyArray_CastAnyTo \ (*(int (*)(PyArrayObject *, PyArrayObject *)) \ PyArray_API[51]) #define PyArray_CanCastSafely \ (*(int (*)(int, int)) \ PyArray_API[52]) #define PyArray_CanCastTo \ (*(npy_bool (*)(PyArray_Descr *, PyArray_Descr *)) \ PyArray_API[53]) #define PyArray_ObjectType \ (*(int (*)(PyObject *, int)) \ PyArray_API[54]) #define PyArray_DescrFromObject \ (*(PyArray_Descr * (*)(PyObject *, PyArray_Descr *)) \ PyArray_API[55]) #define PyArray_ConvertToCommonType \ (*(PyArrayObject ** (*)(PyObject *, int *)) \ PyArray_API[56]) #define PyArray_DescrFromScalar \ (*(PyArray_Descr * (*)(PyObject *)) \ PyArray_API[57]) #define PyArray_DescrFromTypeObject \ (*(PyArray_Descr * (*)(PyObject *)) \ PyArray_API[58]) #define PyArray_Size \ (*(npy_intp (*)(PyObject *)) \ PyArray_API[59]) #define PyArray_Scalar \ (*(PyObject * (*)(void *, PyArray_Descr *, PyObject *)) \ PyArray_API[60]) #define PyArray_FromScalar \ (*(PyObject * (*)(PyObject *, PyArray_Descr *)) \ PyArray_API[61]) #define PyArray_ScalarAsCtype \ (*(void (*)(PyObject *, void *)) \ PyArray_API[62]) #define PyArray_CastScalarToCtype \ (*(int (*)(PyObject *, void *, PyArray_Descr *)) \ PyArray_API[63]) #define PyArray_CastScalarDirect \ (*(int (*)(PyObject *, PyArray_Descr *, void *, int)) \ PyArray_API[64]) #define PyArray_ScalarFromObject \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[65]) #define PyArray_GetCastFunc \ (*(PyArray_VectorUnaryFunc * (*)(PyArray_Descr *, int)) \ PyArray_API[66]) #define PyArray_FromDims \ (*(PyObject * (*)(int, int *, int)) \ PyArray_API[67]) #define PyArray_FromDimsAndDataAndDescr \ (*(PyObject * (*)(int, int *, PyArray_Descr *, char *)) \ PyArray_API[68]) #define PyArray_FromAny \ (*(PyObject * (*)(PyObject *, PyArray_Descr *, int, int, int, 
PyObject *)) \ PyArray_API[69]) #define PyArray_EnsureArray \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[70]) #define PyArray_EnsureAnyArray \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[71]) #define PyArray_FromFile \ (*(PyObject * (*)(FILE *, PyArray_Descr *, npy_intp, char *)) \ PyArray_API[72]) #define PyArray_FromString \ (*(PyObject * (*)(char *, npy_intp, PyArray_Descr *, npy_intp, char *)) \ PyArray_API[73]) #define PyArray_FromBuffer \ (*(PyObject * (*)(PyObject *, PyArray_Descr *, npy_intp, npy_intp)) \ PyArray_API[74]) #define PyArray_FromIter \ (*(PyObject * (*)(PyObject *, PyArray_Descr *, npy_intp)) \ PyArray_API[75]) #define PyArray_Return \ (*(PyObject * (*)(PyArrayObject *)) \ PyArray_API[76]) #define PyArray_GetField \ (*(PyObject * (*)(PyArrayObject *, PyArray_Descr *, int)) \ PyArray_API[77]) #define PyArray_SetField \ (*(int (*)(PyArrayObject *, PyArray_Descr *, int, PyObject *)) \ PyArray_API[78]) #define PyArray_Byteswap \ (*(PyObject * (*)(PyArrayObject *, npy_bool)) \ PyArray_API[79]) #define PyArray_Resize \ (*(PyObject * (*)(PyArrayObject *, PyArray_Dims *, int, NPY_ORDER)) \ PyArray_API[80]) #define PyArray_MoveInto \ (*(int (*)(PyArrayObject *, PyArrayObject *)) \ PyArray_API[81]) #define PyArray_CopyInto \ (*(int (*)(PyArrayObject *, PyArrayObject *)) \ PyArray_API[82]) #define PyArray_CopyAnyInto \ (*(int (*)(PyArrayObject *, PyArrayObject *)) \ PyArray_API[83]) #define PyArray_CopyObject \ (*(int (*)(PyArrayObject *, PyObject *)) \ PyArray_API[84]) #define PyArray_NewCopy \ (*(PyObject * (*)(PyArrayObject *, NPY_ORDER)) \ PyArray_API[85]) #define PyArray_ToList \ (*(PyObject * (*)(PyArrayObject *)) \ PyArray_API[86]) #define PyArray_ToString \ (*(PyObject * (*)(PyArrayObject *, NPY_ORDER)) \ PyArray_API[87]) #define PyArray_ToFile \ (*(int (*)(PyArrayObject *, FILE *, char *, char *)) \ PyArray_API[88]) #define PyArray_Dump \ (*(int (*)(PyObject *, PyObject *, int)) \ PyArray_API[89]) #define PyArray_Dumps \ (*(PyObject * (*)(PyObject *, int)) \ PyArray_API[90]) #define PyArray_ValidType \ (*(int (*)(int)) \ PyArray_API[91]) #define PyArray_UpdateFlags \ (*(void (*)(PyArrayObject *, int)) \ PyArray_API[92]) #define PyArray_New \ (*(PyObject * (*)(PyTypeObject *, int, npy_intp *, int, npy_intp *, void *, int, int, PyObject *)) \ PyArray_API[93]) #define PyArray_NewFromDescr \ (*(PyObject * (*)(PyTypeObject *, PyArray_Descr *, int, npy_intp *, npy_intp *, void *, int, PyObject *)) \ PyArray_API[94]) #define PyArray_DescrNew \ (*(PyArray_Descr * (*)(PyArray_Descr *)) \ PyArray_API[95]) #define PyArray_DescrNewFromType \ (*(PyArray_Descr * (*)(int)) \ PyArray_API[96]) #define PyArray_GetPriority \ (*(double (*)(PyObject *, double)) \ PyArray_API[97]) #define PyArray_IterNew \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[98]) #define PyArray_MultiIterNew \ (*(PyObject * (*)(int, ...)) \ PyArray_API[99]) #define PyArray_PyIntAsInt \ (*(int (*)(PyObject *)) \ PyArray_API[100]) #define PyArray_PyIntAsIntp \ (*(npy_intp (*)(PyObject *)) \ PyArray_API[101]) #define PyArray_Broadcast \ (*(int (*)(PyArrayMultiIterObject *)) \ PyArray_API[102]) #define PyArray_FillObjectArray \ (*(void (*)(PyArrayObject *, PyObject *)) \ PyArray_API[103]) #define PyArray_FillWithScalar \ (*(int (*)(PyArrayObject *, PyObject *)) \ PyArray_API[104]) #define PyArray_CheckStrides \ (*(npy_bool (*)(int, int, npy_intp, npy_intp, npy_intp *, npy_intp *)) \ PyArray_API[105]) #define PyArray_DescrNewByteorder \ (*(PyArray_Descr * (*)(PyArray_Descr *, char)) \ PyArray_API[106]) #define 
PyArray_IterAllButAxis \ (*(PyObject * (*)(PyObject *, int *)) \ PyArray_API[107]) #define PyArray_CheckFromAny \ (*(PyObject * (*)(PyObject *, PyArray_Descr *, int, int, int, PyObject *)) \ PyArray_API[108]) #define PyArray_FromArray \ (*(PyObject * (*)(PyArrayObject *, PyArray_Descr *, int)) \ PyArray_API[109]) #define PyArray_FromInterface \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[110]) #define PyArray_FromStructInterface \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[111]) #define PyArray_FromArrayAttr \ (*(PyObject * (*)(PyObject *, PyArray_Descr *, PyObject *)) \ PyArray_API[112]) #define PyArray_ScalarKind \ (*(NPY_SCALARKIND (*)(int, PyArrayObject **)) \ PyArray_API[113]) #define PyArray_CanCoerceScalar \ (*(int (*)(int, int, NPY_SCALARKIND)) \ PyArray_API[114]) #define PyArray_NewFlagsObject \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[115]) #define PyArray_CanCastScalar \ (*(npy_bool (*)(PyTypeObject *, PyTypeObject *)) \ PyArray_API[116]) #define PyArray_CompareUCS4 \ (*(int (*)(npy_ucs4 *, npy_ucs4 *, size_t)) \ PyArray_API[117]) #define PyArray_RemoveSmallest \ (*(int (*)(PyArrayMultiIterObject *)) \ PyArray_API[118]) #define PyArray_ElementStrides \ (*(int (*)(PyObject *)) \ PyArray_API[119]) #define PyArray_Item_INCREF \ (*(void (*)(char *, PyArray_Descr *)) \ PyArray_API[120]) #define PyArray_Item_XDECREF \ (*(void (*)(char *, PyArray_Descr *)) \ PyArray_API[121]) #define PyArray_FieldNames \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[122]) #define PyArray_Transpose \ (*(PyObject * (*)(PyArrayObject *, PyArray_Dims *)) \ PyArray_API[123]) #define PyArray_TakeFrom \ (*(PyObject * (*)(PyArrayObject *, PyObject *, int, PyArrayObject *, NPY_CLIPMODE)) \ PyArray_API[124]) #define PyArray_PutTo \ (*(PyObject * (*)(PyArrayObject *, PyObject*, PyObject *, NPY_CLIPMODE)) \ PyArray_API[125]) #define PyArray_PutMask \ (*(PyObject * (*)(PyArrayObject *, PyObject*, PyObject*)) \ PyArray_API[126]) #define PyArray_Repeat \ (*(PyObject * (*)(PyArrayObject *, PyObject *, int)) \ PyArray_API[127]) #define PyArray_Choose \ (*(PyObject * (*)(PyArrayObject *, PyObject *, PyArrayObject *, NPY_CLIPMODE)) \ PyArray_API[128]) #define PyArray_Sort \ (*(int (*)(PyArrayObject *, int, NPY_SORTKIND)) \ PyArray_API[129]) #define PyArray_ArgSort \ (*(PyObject * (*)(PyArrayObject *, int, NPY_SORTKIND)) \ PyArray_API[130]) #define PyArray_SearchSorted \ (*(PyObject * (*)(PyArrayObject *, PyObject *, NPY_SEARCHSIDE, PyObject *)) \ PyArray_API[131]) #define PyArray_ArgMax \ (*(PyObject * (*)(PyArrayObject *, int, PyArrayObject *)) \ PyArray_API[132]) #define PyArray_ArgMin \ (*(PyObject * (*)(PyArrayObject *, int, PyArrayObject *)) \ PyArray_API[133]) #define PyArray_Reshape \ (*(PyObject * (*)(PyArrayObject *, PyObject *)) \ PyArray_API[134]) #define PyArray_Newshape \ (*(PyObject * (*)(PyArrayObject *, PyArray_Dims *, NPY_ORDER)) \ PyArray_API[135]) #define PyArray_Squeeze \ (*(PyObject * (*)(PyArrayObject *)) \ PyArray_API[136]) #define PyArray_View \ (*(PyObject * (*)(PyArrayObject *, PyArray_Descr *, PyTypeObject *)) \ PyArray_API[137]) #define PyArray_SwapAxes \ (*(PyObject * (*)(PyArrayObject *, int, int)) \ PyArray_API[138]) #define PyArray_Max \ (*(PyObject * (*)(PyArrayObject *, int, PyArrayObject *)) \ PyArray_API[139]) #define PyArray_Min \ (*(PyObject * (*)(PyArrayObject *, int, PyArrayObject *)) \ PyArray_API[140]) #define PyArray_Ptp \ (*(PyObject * (*)(PyArrayObject *, int, PyArrayObject *)) \ PyArray_API[141]) #define PyArray_Mean \ (*(PyObject * (*)(PyArrayObject *, int, int, 
PyArrayObject *)) \ PyArray_API[142]) #define PyArray_Trace \ (*(PyObject * (*)(PyArrayObject *, int, int, int, int, PyArrayObject *)) \ PyArray_API[143]) #define PyArray_Diagonal \ (*(PyObject * (*)(PyArrayObject *, int, int, int)) \ PyArray_API[144]) #define PyArray_Clip \ (*(PyObject * (*)(PyArrayObject *, PyObject *, PyObject *, PyArrayObject *)) \ PyArray_API[145]) #define PyArray_Conjugate \ (*(PyObject * (*)(PyArrayObject *, PyArrayObject *)) \ PyArray_API[146]) #define PyArray_Nonzero \ (*(PyObject * (*)(PyArrayObject *)) \ PyArray_API[147]) #define PyArray_Std \ (*(PyObject * (*)(PyArrayObject *, int, int, PyArrayObject *, int)) \ PyArray_API[148]) #define PyArray_Sum \ (*(PyObject * (*)(PyArrayObject *, int, int, PyArrayObject *)) \ PyArray_API[149]) #define PyArray_CumSum \ (*(PyObject * (*)(PyArrayObject *, int, int, PyArrayObject *)) \ PyArray_API[150]) #define PyArray_Prod \ (*(PyObject * (*)(PyArrayObject *, int, int, PyArrayObject *)) \ PyArray_API[151]) #define PyArray_CumProd \ (*(PyObject * (*)(PyArrayObject *, int, int, PyArrayObject *)) \ PyArray_API[152]) #define PyArray_All \ (*(PyObject * (*)(PyArrayObject *, int, PyArrayObject *)) \ PyArray_API[153]) #define PyArray_Any \ (*(PyObject * (*)(PyArrayObject *, int, PyArrayObject *)) \ PyArray_API[154]) #define PyArray_Compress \ (*(PyObject * (*)(PyArrayObject *, PyObject *, int, PyArrayObject *)) \ PyArray_API[155]) #define PyArray_Flatten \ (*(PyObject * (*)(PyArrayObject *, NPY_ORDER)) \ PyArray_API[156]) #define PyArray_Ravel \ (*(PyObject * (*)(PyArrayObject *, NPY_ORDER)) \ PyArray_API[157]) #define PyArray_MultiplyList \ (*(npy_intp (*)(npy_intp *, int)) \ PyArray_API[158]) #define PyArray_MultiplyIntList \ (*(int (*)(int *, int)) \ PyArray_API[159]) #define PyArray_GetPtr \ (*(void * (*)(PyArrayObject *, npy_intp*)) \ PyArray_API[160]) #define PyArray_CompareLists \ (*(int (*)(npy_intp *, npy_intp *, int)) \ PyArray_API[161]) #define PyArray_AsCArray \ (*(int (*)(PyObject **, void *, npy_intp *, int, PyArray_Descr*)) \ PyArray_API[162]) #define PyArray_As1D \ (*(int (*)(PyObject **, char **, int *, int)) \ PyArray_API[163]) #define PyArray_As2D \ (*(int (*)(PyObject **, char ***, int *, int *, int)) \ PyArray_API[164]) #define PyArray_Free \ (*(int (*)(PyObject *, void *)) \ PyArray_API[165]) #define PyArray_Converter \ (*(int (*)(PyObject *, PyObject **)) \ PyArray_API[166]) #define PyArray_IntpFromSequence \ (*(int (*)(PyObject *, npy_intp *, int)) \ PyArray_API[167]) #define PyArray_Concatenate \ (*(PyObject * (*)(PyObject *, int)) \ PyArray_API[168]) #define PyArray_InnerProduct \ (*(PyObject * (*)(PyObject *, PyObject *)) \ PyArray_API[169]) #define PyArray_MatrixProduct \ (*(PyObject * (*)(PyObject *, PyObject *)) \ PyArray_API[170]) #define PyArray_CopyAndTranspose \ (*(PyObject * (*)(PyObject *)) \ PyArray_API[171]) #define PyArray_Correlate \ (*(PyObject * (*)(PyObject *, PyObject *, int)) \ PyArray_API[172]) #define PyArray_TypestrConvert \ (*(int (*)(int, int)) \ PyArray_API[173]) #define PyArray_DescrConverter \ (*(int (*)(PyObject *, PyArray_Descr **)) \ PyArray_API[174]) #define PyArray_DescrConverter2 \ (*(int (*)(PyObject *, PyArray_Descr **)) \ PyArray_API[175]) #define PyArray_IntpConverter \ (*(int (*)(PyObject *, PyArray_Dims *)) \ PyArray_API[176]) #define PyArray_BufferConverter \ (*(int (*)(PyObject *, PyArray_Chunk *)) \ PyArray_API[177]) #define PyArray_AxisConverter \ (*(int (*)(PyObject *, int *)) \ PyArray_API[178]) #define PyArray_BoolConverter \ (*(int (*)(PyObject *, npy_bool *)) 
\ PyArray_API[179]) #define PyArray_ByteorderConverter \ (*(int (*)(PyObject *, char *)) \ PyArray_API[180]) #define PyArray_OrderConverter \ (*(int (*)(PyObject *, NPY_ORDER *)) \ PyArray_API[181]) #define PyArray_EquivTypes \ (*(unsigned char (*)(PyArray_Descr *, PyArray_Descr *)) \ PyArray_API[182]) #define PyArray_Zeros \ (*(PyObject * (*)(int, npy_intp *, PyArray_Descr *, int)) \ PyArray_API[183]) #define PyArray_Empty \ (*(PyObject * (*)(int, npy_intp *, PyArray_Descr *, int)) \ PyArray_API[184]) #define PyArray_Where \ (*(PyObject * (*)(PyObject *, PyObject *, PyObject *)) \ PyArray_API[185]) #define PyArray_Arange \ (*(PyObject * (*)(double, double, double, int)) \ PyArray_API[186]) #define PyArray_ArangeObj \ (*(PyObject * (*)(PyObject *, PyObject *, PyObject *, PyArray_Descr *)) \ PyArray_API[187]) #define PyArray_SortkindConverter \ (*(int (*)(PyObject *, NPY_SORTKIND *)) \ PyArray_API[188]) #define PyArray_LexSort \ (*(PyObject * (*)(PyObject *, int)) \ PyArray_API[189]) #define PyArray_Round \ (*(PyObject * (*)(PyArrayObject *, int, PyArrayObject *)) \ PyArray_API[190]) #define PyArray_EquivTypenums \ (*(unsigned char (*)(int, int)) \ PyArray_API[191]) #define PyArray_RegisterDataType \ (*(int (*)(PyArray_Descr *)) \ PyArray_API[192]) #define PyArray_RegisterCastFunc \ (*(int (*)(PyArray_Descr *, int, PyArray_VectorUnaryFunc *)) \ PyArray_API[193]) #define PyArray_RegisterCanCast \ (*(int (*)(PyArray_Descr *, int, NPY_SCALARKIND)) \ PyArray_API[194]) #define PyArray_InitArrFuncs \ (*(void (*)(PyArray_ArrFuncs *)) \ PyArray_API[195]) #define PyArray_IntTupleFromIntp \ (*(PyObject * (*)(int, npy_intp *)) \ PyArray_API[196]) #define PyArray_TypeNumFromName \ (*(int (*)(char *)) \ PyArray_API[197]) #define PyArray_ClipmodeConverter \ (*(int (*)(PyObject *, NPY_CLIPMODE *)) \ PyArray_API[198]) #define PyArray_OutputConverter \ (*(int (*)(PyObject *, PyArrayObject **)) \ PyArray_API[199]) #define PyArray_BroadcastToShape \ (*(PyObject * (*)(PyObject *, npy_intp *, int)) \ PyArray_API[200]) #define _PyArray_SigintHandler \ (*(void (*)(int)) \ PyArray_API[201]) #define _PyArray_GetSigintBuf \ (*(void* (*)(void)) \ PyArray_API[202]) #define PyArray_DescrAlignConverter \ (*(int (*)(PyObject *, PyArray_Descr **)) \ PyArray_API[203]) #define PyArray_DescrAlignConverter2 \ (*(int (*)(PyObject *, PyArray_Descr **)) \ PyArray_API[204]) #define PyArray_SearchsideConverter \ (*(int (*)(PyObject *, void *)) \ PyArray_API[205]) #define PyArray_CheckAxis \ (*(PyObject * (*)(PyArrayObject *, int *, int)) \ PyArray_API[206]) #define PyArray_OverflowMultiplyList \ (*(npy_intp (*)(npy_intp *, int)) \ PyArray_API[207]) #define PyArray_CompareString \ (*(int (*)(char *, char *, size_t)) \ PyArray_API[208]) #define PyArray_MultiIterFromObjects \ (*(PyObject * (*)(PyObject **, int, int, ...)) \ PyArray_API[209]) #define PyArray_GetEndianness \ (*(int (*)(void)) \ PyArray_API[210]) #define PyArray_GetNDArrayCFeatureVersion \ (*(unsigned int (*)(void)) \ PyArray_API[211]) #define PyArray_Correlate2 \ (*(PyObject * (*)(PyObject *, PyObject *, int)) \ PyArray_API[212]) #define PyArray_NeighborhoodIterNew \ (*(PyObject* (*)(PyArrayIterObject *, npy_intp *, int, PyArrayObject*)) \ PyArray_API[213]) #define PyTimeIntegerArrType_Type (*(PyTypeObject *)PyArray_API[214]) #define PyDatetimeArrType_Type (*(PyTypeObject *)PyArray_API[215]) #define PyTimedeltaArrType_Type (*(PyTypeObject *)PyArray_API[216]) #define PyHalfArrType_Type (*(PyTypeObject *)PyArray_API[217]) #define NpyIter_Type (*(PyTypeObject 
*)PyArray_API[218]) #define PyArray_SetDatetimeParseFunction \ (*(void (*)(PyObject *)) \ PyArray_API[219]) #define PyArray_DatetimeToDatetimeStruct \ (*(void (*)(npy_datetime, NPY_DATETIMEUNIT, npy_datetimestruct *)) \ PyArray_API[220]) #define PyArray_TimedeltaToTimedeltaStruct \ (*(void (*)(npy_timedelta, NPY_DATETIMEUNIT, npy_timedeltastruct *)) \ PyArray_API[221]) #define PyArray_DatetimeStructToDatetime \ (*(npy_datetime (*)(NPY_DATETIMEUNIT, npy_datetimestruct *)) \ PyArray_API[222]) #define PyArray_TimedeltaStructToTimedelta \ (*(npy_datetime (*)(NPY_DATETIMEUNIT, npy_timedeltastruct *)) \ PyArray_API[223]) #define NpyIter_New \ (*(NpyIter * (*)(PyArrayObject *, npy_uint32, NPY_ORDER, NPY_CASTING, PyArray_Descr*)) \ PyArray_API[224]) #define NpyIter_MultiNew \ (*(NpyIter * (*)(int, PyArrayObject **, npy_uint32, NPY_ORDER, NPY_CASTING, npy_uint32 *, PyArray_Descr **)) \ PyArray_API[225]) #define NpyIter_AdvancedNew \ (*(NpyIter * (*)(int, PyArrayObject **, npy_uint32, NPY_ORDER, NPY_CASTING, npy_uint32 *, PyArray_Descr **, int, int **, npy_intp *, npy_intp)) \ PyArray_API[226]) #define NpyIter_Copy \ (*(NpyIter * (*)(NpyIter *)) \ PyArray_API[227]) #define NpyIter_Deallocate \ (*(int (*)(NpyIter *)) \ PyArray_API[228]) #define NpyIter_HasDelayedBufAlloc \ (*(npy_bool (*)(NpyIter *)) \ PyArray_API[229]) #define NpyIter_HasExternalLoop \ (*(npy_bool (*)(NpyIter *)) \ PyArray_API[230]) #define NpyIter_EnableExternalLoop \ (*(int (*)(NpyIter *)) \ PyArray_API[231]) #define NpyIter_GetInnerStrideArray \ (*(npy_intp * (*)(NpyIter *)) \ PyArray_API[232]) #define NpyIter_GetInnerLoopSizePtr \ (*(npy_intp * (*)(NpyIter *)) \ PyArray_API[233]) #define NpyIter_Reset \ (*(int (*)(NpyIter *, char **)) \ PyArray_API[234]) #define NpyIter_ResetBasePointers \ (*(int (*)(NpyIter *, char **, char **)) \ PyArray_API[235]) #define NpyIter_ResetToIterIndexRange \ (*(int (*)(NpyIter *, npy_intp, npy_intp, char **)) \ PyArray_API[236]) #define NpyIter_GetNDim \ (*(int (*)(NpyIter *)) \ PyArray_API[237]) #define NpyIter_GetNOp \ (*(int (*)(NpyIter *)) \ PyArray_API[238]) #define NpyIter_GetIterNext \ (*(NpyIter_IterNextFunc * (*)(NpyIter *, char **)) \ PyArray_API[239]) #define NpyIter_GetIterSize \ (*(npy_intp (*)(NpyIter *)) \ PyArray_API[240]) #define NpyIter_GetIterIndexRange \ (*(void (*)(NpyIter *, npy_intp *, npy_intp *)) \ PyArray_API[241]) #define NpyIter_GetIterIndex \ (*(npy_intp (*)(NpyIter *)) \ PyArray_API[242]) #define NpyIter_GotoIterIndex \ (*(int (*)(NpyIter *, npy_intp)) \ PyArray_API[243]) #define NpyIter_HasMultiIndex \ (*(npy_bool (*)(NpyIter *)) \ PyArray_API[244]) #define NpyIter_GetShape \ (*(int (*)(NpyIter *, npy_intp *)) \ PyArray_API[245]) #define NpyIter_GetGetMultiIndex \ (*(NpyIter_GetMultiIndexFunc * (*)(NpyIter *, char **)) \ PyArray_API[246]) #define NpyIter_GotoMultiIndex \ (*(int (*)(NpyIter *, npy_intp *)) \ PyArray_API[247]) #define NpyIter_RemoveMultiIndex \ (*(int (*)(NpyIter *)) \ PyArray_API[248]) #define NpyIter_HasIndex \ (*(npy_bool (*)(NpyIter *)) \ PyArray_API[249]) #define NpyIter_IsBuffered \ (*(npy_bool (*)(NpyIter *)) \ PyArray_API[250]) #define NpyIter_IsGrowInner \ (*(npy_bool (*)(NpyIter *)) \ PyArray_API[251]) #define NpyIter_GetBufferSize \ (*(npy_intp (*)(NpyIter *)) \ PyArray_API[252]) #define NpyIter_GetIndexPtr \ (*(npy_intp * (*)(NpyIter *)) \ PyArray_API[253]) #define NpyIter_GotoIndex \ (*(int (*)(NpyIter *, npy_intp)) \ PyArray_API[254]) #define NpyIter_GetDataPtrArray \ (*(char ** (*)(NpyIter *)) \ PyArray_API[255]) #define 
NpyIter_GetDescrArray \ (*(PyArray_Descr ** (*)(NpyIter *)) \ PyArray_API[256]) #define NpyIter_GetOperandArray \ (*(PyArrayObject ** (*)(NpyIter *)) \ PyArray_API[257]) #define NpyIter_GetIterView \ (*(PyArrayObject * (*)(NpyIter *, npy_intp)) \ PyArray_API[258]) #define NpyIter_GetReadFlags \ (*(void (*)(NpyIter *, char *)) \ PyArray_API[259]) #define NpyIter_GetWriteFlags \ (*(void (*)(NpyIter *, char *)) \ PyArray_API[260]) #define NpyIter_DebugPrint \ (*(void (*)(NpyIter *)) \ PyArray_API[261]) #define NpyIter_IterationNeedsAPI \ (*(npy_bool (*)(NpyIter *)) \ PyArray_API[262]) #define NpyIter_GetInnerFixedStrideArray \ (*(void (*)(NpyIter *, npy_intp *)) \ PyArray_API[263]) #define NpyIter_RemoveAxis \ (*(int (*)(NpyIter *, int)) \ PyArray_API[264]) #define NpyIter_GetAxisStrideArray \ (*(npy_intp * (*)(NpyIter *, int)) \ PyArray_API[265]) #define NpyIter_RequiresBuffering \ (*(npy_bool (*)(NpyIter *)) \ PyArray_API[266]) #define NpyIter_GetInitialDataPtrArray \ (*(char ** (*)(NpyIter *)) \ PyArray_API[267]) #define NpyIter_CreateCompatibleStrides \ (*(int (*)(NpyIter *, npy_intp, npy_intp *)) \ PyArray_API[268]) #define PyArray_CastingConverter \ (*(int (*)(PyObject *, NPY_CASTING *)) \ PyArray_API[269]) #define PyArray_CountNonzero \ (*(npy_intp (*)(PyArrayObject *)) \ PyArray_API[270]) #define PyArray_PromoteTypes \ (*(PyArray_Descr * (*)(PyArray_Descr *, PyArray_Descr *)) \ PyArray_API[271]) #define PyArray_MinScalarType \ (*(PyArray_Descr * (*)(PyArrayObject *)) \ PyArray_API[272]) #define PyArray_ResultType \ (*(PyArray_Descr * (*)(npy_intp, PyArrayObject **, npy_intp, PyArray_Descr **)) \ PyArray_API[273]) #define PyArray_CanCastArrayTo \ (*(npy_bool (*)(PyArrayObject *, PyArray_Descr *, NPY_CASTING)) \ PyArray_API[274]) #define PyArray_CanCastTypeTo \ (*(npy_bool (*)(PyArray_Descr *, PyArray_Descr *, NPY_CASTING)) \ PyArray_API[275]) #define PyArray_EinsteinSum \ (*(PyArrayObject * (*)(char *, npy_intp, PyArrayObject **, PyArray_Descr *, NPY_ORDER, NPY_CASTING, PyArrayObject *)) \ PyArray_API[276]) #define PyArray_NewLikeArray \ (*(PyObject * (*)(PyArrayObject *, NPY_ORDER, PyArray_Descr *, int)) \ PyArray_API[277]) #define PyArray_GetArrayParamsFromObject \ (*(int (*)(PyObject *, PyArray_Descr *, npy_bool, PyArray_Descr **, int *, npy_intp *, PyArrayObject **, PyObject *)) \ PyArray_API[278]) #define PyArray_ConvertClipmodeSequence \ (*(int (*)(PyObject *, NPY_CLIPMODE *, int)) \ PyArray_API[279]) #define PyArray_MatrixProduct2 \ (*(PyObject * (*)(PyObject *, PyObject *, PyArrayObject*)) \ PyArray_API[280]) #define NpyIter_IsFirstVisit \ (*(npy_bool (*)(NpyIter *, int)) \ PyArray_API[281]) #define PyArray_SetBaseObject \ (*(int (*)(PyArrayObject *, PyObject *)) \ PyArray_API[282]) #define PyArray_CreateSortedStridePerm \ (*(void (*)(int, npy_intp *, npy_stride_sort_item *)) \ PyArray_API[283]) #define PyArray_RemoveAxesInPlace \ (*(void (*)(PyArrayObject *, npy_bool *)) \ PyArray_API[284]) #define PyArray_DebugPrint \ (*(void (*)(PyArrayObject *)) \ PyArray_API[285]) #define PyArray_FailUnlessWriteable \ (*(int (*)(PyArrayObject *, const char *)) \ PyArray_API[286]) #define PyArray_SetUpdateIfCopyBase \ (*(int (*)(PyArrayObject *, PyArrayObject *)) \ PyArray_API[287]) #define PyDataMem_NEW \ (*(void * (*)(size_t)) \ PyArray_API[288]) #define PyDataMem_FREE \ (*(void (*)(void *)) \ PyArray_API[289]) #define PyDataMem_RENEW \ (*(void * (*)(void *, size_t)) \ PyArray_API[290]) #define PyDataMem_SetEventHook \ (*(PyDataMem_EventHookFunc * (*)(PyDataMem_EventHookFunc *, void *, 
void **)) \ PyArray_API[291]) #define NPY_DEFAULT_ASSIGN_CASTING (*(NPY_CASTING *)PyArray_API[292]) #if !defined(NO_IMPORT_ARRAY) && !defined(NO_IMPORT) static int _import_array(void) { int st; PyObject *numpy = PyImport_ImportModule("numpy.core.multiarray"); PyObject *c_api = NULL; if (numpy == NULL) { PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); return -1; } c_api = PyObject_GetAttrString(numpy, "_ARRAY_API"); Py_DECREF(numpy); if (c_api == NULL) { PyErr_SetString(PyExc_AttributeError, "_ARRAY_API not found"); return -1; } #if PY_VERSION_HEX >= 0x03000000 if (!PyCapsule_CheckExact(c_api)) { PyErr_SetString(PyExc_RuntimeError, "_ARRAY_API is not PyCapsule object"); Py_DECREF(c_api); return -1; } PyArray_API = (void **)PyCapsule_GetPointer(c_api, NULL); #else if (!PyCObject_Check(c_api)) { PyErr_SetString(PyExc_RuntimeError, "_ARRAY_API is not PyCObject object"); Py_DECREF(c_api); return -1; } PyArray_API = (void **)PyCObject_AsVoidPtr(c_api); #endif Py_DECREF(c_api); if (PyArray_API == NULL) { PyErr_SetString(PyExc_RuntimeError, "_ARRAY_API is NULL pointer"); return -1; } /* Perform runtime check of C API version */ if (NPY_VERSION != PyArray_GetNDArrayCVersion()) { PyErr_Format(PyExc_RuntimeError, "module compiled against "\ "ABI version %x but this version of numpy is %x", \ (int) NPY_VERSION, (int) PyArray_GetNDArrayCVersion()); return -1; } if (NPY_FEATURE_VERSION > PyArray_GetNDArrayCFeatureVersion()) { PyErr_Format(PyExc_RuntimeError, "module compiled against "\ "API version %x but this version of numpy is %x", \ (int) NPY_FEATURE_VERSION, (int) PyArray_GetNDArrayCFeatureVersion()); return -1; } /* * Perform runtime check of endianness and check it matches the one set by * the headers (npy_endian.h) as a safeguard */ st = PyArray_GetEndianness(); if (st == NPY_CPU_UNKNOWN_ENDIAN) { PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as unknown endian"); return -1; } #if NPY_BYTE_ORDER == NPY_BIG_ENDIAN if (st != NPY_CPU_BIG) { PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as "\ "big endian, but detected different endianness at runtime"); return -1; } #elif NPY_BYTE_ORDER == NPY_LITTLE_ENDIAN if (st != NPY_CPU_LITTLE) { PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as "\ "little endian, but detected different endianness at runtime"); return -1; } #endif return 0; } #if PY_VERSION_HEX >= 0x03000000 #define NUMPY_IMPORT_ARRAY_RETVAL NULL #else #define NUMPY_IMPORT_ARRAY_RETVAL #endif #define import_array() {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); return NUMPY_IMPORT_ARRAY_RETVAL; } } #define import_array1(ret) {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); return ret; } } #define import_array2(msg, ret) {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, msg); return ret; } } #endif #endif
neuralcoref/include/numpy/__multiarray_api.h/0
{ "file_path": "neuralcoref/include/numpy/__multiarray_api.h", "repo_id": "neuralcoref", "token_count": 28488 }
281
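Editor's note: the header above is NumPy's generated C-API dispatch table — `import_array()` fetches the `_ARRAY_API` capsule from `numpy.core.multiarray`, and every `PyArray_*` / `NpyIter_*` call is routed through the `PyArray_API` pointer array. As a build-side illustration, here is a minimal sketch of how a C extension that includes these headers is typically compiled; `numpy.get_include()` is a real helper, while the extension name and C source file are hypothetical placeholders.

```python
# Sketch of a setup.py for a C extension that includes numpy/arrayobject.h
# (which in turn pulls in __multiarray_api.h). numpy.get_include() is a real
# helper; the extension name and C source below are hypothetical.
import numpy as np
from setuptools import Extension, setup

ext = Extension(
    "coref_ext",                       # hypothetical extension module name
    sources=["coref_ext.c"],           # hypothetical C source that calls import_array()
    include_dirs=[np.get_include()],   # directory containing the numpy/*.h headers
)

setup(name="coref-ext-demo", ext_modules=[ext])
```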
/* Signal handling: This header file defines macros that allow your code to handle interrupts received during processing. Interrupts that could reasonably be handled: SIGINT, SIGABRT, SIGALRM, SIGSEGV ****Warning*************** Do not allow code that creates temporary memory or increases reference counts of Python objects to be interrupted unless you handle it differently. ************************** The mechanism for handling interrupts is conceptually simple: - replace the signal handler with our own home-grown version and store the old one. - run the code to be interrupted -- if an interrupt occurs the handler should basically just cause a return to the calling function for finish work. - restore the old signal handler Of course, every code that allows interrupts must account for returning via the interrupt and handle clean-up correctly. But, even still, the simple paradigm is complicated by at least three factors. 1) platform portability (i.e. Microsoft says not to use longjmp to return from signal handling. They have a __try and __except extension to C instead but what about mingw?). 2) how to handle threads: apparently whether signals are delivered to every thread of the process or the "invoking" thread is platform dependent. --- we don't handle threads for now. 3) do we need to worry about re-entrance. For now, assume the code will not call-back into itself. Ideas: 1) Start by implementing an approach that works on platforms that can use setjmp and longjmp functionality and does nothing on other platforms. 2) Ignore threads --- i.e. do not mix interrupt handling and threads 3) Add a default signal_handler function to the C-API but have the rest use macros. Simple Interface: In your C-extension: around a block of code you want to be interruptable with a SIGINT NPY_SIGINT_ON [code] NPY_SIGINT_OFF In order for this to work correctly, the [code] block must not allocate any memory or alter the reference count of any Python objects. In other words [code] must be interruptible so that continuation after NPY_SIGINT_OFF will only be "missing some computations" Interrupt handling does not work well with threads. */ /* Add signal handling macros Make the global variable and signal handler part of the C-API */ #ifndef NPY_INTERRUPT_H #define NPY_INTERRUPT_H #ifndef NPY_NO_SIGNAL #include <setjmp.h> #include <signal.h> #ifndef sigsetjmp #define NPY_SIGSETJMP(arg1, arg2) setjmp(arg1) #define NPY_SIGLONGJMP(arg1, arg2) longjmp(arg1, arg2) #define NPY_SIGJMP_BUF jmp_buf #else #define NPY_SIGSETJMP(arg1, arg2) sigsetjmp(arg1, arg2) #define NPY_SIGLONGJMP(arg1, arg2) siglongjmp(arg1, arg2) #define NPY_SIGJMP_BUF sigjmp_buf #endif # define NPY_SIGINT_ON { \ PyOS_sighandler_t _npy_sig_save; \ _npy_sig_save = PyOS_setsig(SIGINT, _PyArray_SigintHandler); \ if (NPY_SIGSETJMP(*((NPY_SIGJMP_BUF *)_PyArray_GetSigintBuf()), \ 1) == 0) { \ # define NPY_SIGINT_OFF } \ PyOS_setsig(SIGINT, _npy_sig_save); \ } #else /* NPY_NO_SIGNAL */ #define NPY_SIGINT_ON #define NPY_SIGINT_OFF #endif /* HAVE_SIGSETJMP */ #endif /* NPY_INTERRUPT_H */
neuralcoref/include/numpy/npy_interrupt.h/0
{ "file_path": "neuralcoref/include/numpy/npy_interrupt.h", "repo_id": "neuralcoref", "token_count": 1302 }
282
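Editor's note: the header above documents a setjmp/longjmp-based `NPY_SIGINT_ON` / `NPY_SIGINT_OFF` guard for C code. As a rough Python-level analogue — an illustrative assumption, not part of NumPy's API — the same "swap in a handler, run the interruptible block, restore the old handler" pattern can be expressed with the standard `signal` module:

```python
# Rough Python analogue of the NPY_SIGINT_ON / NPY_SIGINT_OFF pattern described
# above: install a SIGINT handler, run the interruptible work, restore the old
# handler. This is a sketch for illustration, not NumPy's mechanism.
import signal

def run_interruptible(work, n_steps=1_000_000):
    interrupted = False

    def handler(signum, frame):
        nonlocal interrupted
        interrupted = True                 # only record the interrupt; no cleanup here

    old_handler = signal.signal(signal.SIGINT, handler)   # "NPY_SIGINT_ON"
    try:
        for step in range(n_steps):
            if interrupted:                # poll the flag instead of longjmp-ing
                break
            work(step)
    finally:
        signal.signal(signal.SIGINT, old_handler)          # "NPY_SIGINT_OFF"
    return interrupted
```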
import spacy

from ..__init__ import add_to_pipe


def test_add_pipe():
    nlp = spacy.lang.en.English()
    add_to_pipe(nlp)
    assert "neuralcoref" in nlp.pipe_names
neuralcoref/neuralcoref/tests/test_neuralcoref.py/0
{ "file_path": "neuralcoref/neuralcoref/tests/test_neuralcoref.py", "repo_id": "neuralcoref", "token_count": 72 }
283
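Editor's note: the test above only asserts that the component registers itself in the spaCy pipeline. A slightly fuller usage sketch follows; the `doc._.has_coref` and `doc._.coref_clusters` extension attributes are assumed from neuralcoref's README and may differ between versions, and the sketch assumes the `en_core_web_sm` model is installed.

```python
# Usage sketch beyond the registration check. Extension attributes are assumed
# from neuralcoref's documentation; adjust if your installed version differs.
import spacy
import neuralcoref

nlp = spacy.load("en_core_web_sm")        # assumes the small English model is installed
neuralcoref.add_to_pipe(nlp)

doc = nlp("My sister has a dog. She loves him.")
if doc._.has_coref:
    for cluster in doc._.coref_clusters:
        print(cluster.main.text, "<-", [mention.text for mention in cluster.mentions])
```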
package Algorithm::Munkres; use 5.006; use strict; use warnings; require Exporter; our @ISA = qw(Exporter); our @EXPORT = qw( assign ); our $VERSION = '0.08'; #Variables global to the package my @mat = (); my @mask = (); my @colcov = (); my @rowcov = (); my $Z0_row = 0; my $Z0_col = 0; my @path = (); #The exported subroutine. #Expected Input: Reference to the input matrix (MxN) #Output: Mx1 matrix, giving the column number of the value assigned to each row. (For more explaination refer perldoc) sub assign { #reference to the input matrix my $rmat = shift; my $rsolution_mat = shift; my ($row, $row_len) = (0,0); # re-initialize that global variables @mat = (); @mask = (); @colcov = (); @rowcov = (); $Z0_row = 0; $Z0_col = 0; @path = (); #variables local to the subroutine my $step = 0; my ($i, $j) = (0,0); #the input matrix my @inp_mat = @$rmat; #copy the orginal matrix, before applying the algorithm to the matrix foreach (@inp_mat) { push @mat, [ @$_ ]; } #check if the input matrix is well-formed i.e. either square or rectangle. $row_len = $#{$mat[0]}; foreach my $row (@mat) { if($row_len != $#$row) { die "Please check the input matrix.\nThe input matrix is not a well-formed matrix!\nThe input matrix has to be rectangular or square matrix.\n"; } } #check if the matrix is a square matrix, #if not convert it to square matrix by padding zeroes. if($#mat < $#{$mat[0]}) { # Add rows my $diff = $#{$mat[0]} - $#mat; for (1 .. $diff) { push @mat, [ (0) x @{$mat[0]} ]; } } elsif($#mat > $#{$mat[0]}) { # Add columns my $diff = $#mat - $#{$mat[0]}; for (0 .. $#mat) { push @{$mat[$_]}, (0) x $diff; } } #initialize mask, column cover and row cover matrices clear_covers(); for($i=0;$i<=$#mat;$i++) { push @mask, [ (0) x @mat ]; } #The algorithm can be grouped in 6 steps. &stepone(); &steptwo(); $step = &stepthree(); while($step == 4) { $step = &stepfour(); while($step == 6) { &stepsix(); $step = &stepfour(); } &stepfive(); $step = &stepthree(); } #create the output matrix for my $i (0 .. $#mat) { for my $j (0 .. $#{$mat[$i]}) { if($mask[$i][$j] == 1) { $rsolution_mat->[$i] = $j; } } } #Code for tracing------------------ <<'ee'; print "\nInput Matrix:\n"; for($i=0;$i<=$#mat;$i++) { for($j=0;$j<=$#mat;$j++) { print $mat[$i][$j] . "\t"; } print "\n"; } print "\nMask Matrix:\n"; for($i=0;$i<=$#mat;$i++) { for($j=0;$j<=$#mat;$j++) { print $mask[$i][$j] . "\t"; } print "\n"; } print "\nOutput Matrix:\n"; print "$_\n" for @$rsolution_mat; ee #---------------------------------- } #Step 1 - Find minimum value for every row and subtract this min from each element of the row. sub stepone { # print "Step 1 \n"; #Find the minimum value for every row for my $row (@mat) { my $min = $row->[0]; for (@$row) { $min = $_ if $min > $_; } #Subtract the minimum value of the row from each element of the row. @$row = map {$_ - $min} @$row; } # print "Step 1 end \n"; } #Step 2 - Star the zeroes, Create the mask and cover matrices. Re-initialize the cover matrices for next steps. #To star a zero: We search for a zero in the matrix and than cover the column and row in which it occurs. Now this zero is starred. #A next starred zero can occur only in those columns and rows which have not been previously covered by any other starred zero. 
sub steptwo { # print "Step 2 \n"; my ($i, $j) = (0,0); for($i=0;$i<=$#mat;$i++) { for($j=0;$j<=$#{$mat[$i]};$j++) { if($mat[$i][$j] == 0 && $colcov[$j] == 0 && $rowcov[$i] == 0) { $mask[$i][$j] = 1; $colcov[$j] = 1; $rowcov[$i] = 1; } } } #Re-initialize the cover matrices &clear_covers(); # print "Step 2 end\n"; } #Step 3 - Check if each column has a starred zero. If yes then the problem is solved else proceed to step 4 sub stepthree { # print "Step 3 \n"; my $cnt = 0; for my $i (0 .. $#mat) { for my $j (0 .. $#mat) { if($mask[$i][$j] == 1) { $colcov[$j] = 1; $cnt++; } } } if($cnt > $#mat) { # print "Step 3 end. Next expected step 7 \n"; return 7; } else { # print "Step 3 end. Next expected step 4 \n"; return 4; } } #Step 4 - Try to find a zero which is not starred and whose columns and rows are not yet covered. #If such a zero found, prime it, try to find a starred zero in its row, # if not found proceed to step 5 # else continue #Else proceed to step 6. sub stepfour { # print "Step 4 \n"; while(1) { my ($row, $col) = &find_a_zero(); if ($row < 0) { # No zeroes return 6; } $mask[$row][$col] = 2; my $star_col = &find_star_in_row($row); if ($star_col >= 0) { $col = $star_col; $rowcov[$row] = 1; $colcov[$col] = 0; } else { $Z0_row = $row; $Z0_col = $col; return 5; } } } #Tries to find yet uncovered zero sub find_a_zero { for my $i (0 .. $#mat) { next if $rowcov[$i]; for my $j (reverse(0 .. $#mat)) # Prefer large $j { next if $colcov[$j]; return ($i, $j) if $mat[$i][$j] == 0; } } return (-1, -1); } #Tries to find starred zero in the given row and returns the column number sub find_star_in_row { my $row = shift; for my $j (0 .. $#mat) { if($mask[$row][$j] == 1) { return $j; } } return -1; } #Step 5 - Try to find a starred zero in the column of the uncovered zero found in the step 4. #If starred zero found, try to find a prime zero in its row. #Continue finding starred zero in the column and primed zero in the row until, #we get to a primed zero which does not have a starred zero in its column. #At this point reduce the non-zero values of mask matrix by 1. i.e. change prime zeros to starred zeroes. #Clear the cover matrices and clear any primes i.e. values=2 from mask matrix. sub stepfive { # print "Step 5 \n"; my $cnt = 0; my $done = 0; $path[$cnt][0] = $Z0_row; $path[$cnt][1] = $Z0_col; while($done == 0) { my $row = &find_star_in_col($path[$cnt][1]); if($row > -1) { $cnt++; $path[$cnt][0] = $row; $path[$cnt][1] = $path[$cnt - 1][1]; } else { $done = 1; } if($done == 0) { my $col = &find_prime_in_row($path[$cnt][0]); $cnt++; $path[$cnt][0] = $path[$cnt - 1][0]; $path[$cnt][1] = $col; } } &convert_path($cnt); &clear_covers(); &erase_primes(); # print "Step 5 end \n"; } #Tries to find starred zero in the given column and returns the row number sub find_star_in_col { my $col = shift; for my $i (0 .. $#mat) { return $i if $mask[$i][$col] == 1; } return -1; } #Tries to find primed zero in the given row and returns the column number sub find_prime_in_row { my $row = shift; for my $j (0 .. $#mat) { return $j if $mask[$row][$j] == 2; } return -1; } #Reduces non-zero value in the mask matrix by 1. #i.e. converts all primes to stars and stars to none. sub convert_path { my $cnt = shift; for my $i (0 .. $cnt) { for ( $mask[$path[$i][0]][$path[$i][1]] ) { $_ = ( $_ == 1 ) ? 0 : 1; } } } #Clears cover matrices sub clear_covers { @rowcov = @colcov = (0) x @mat; } #Changes all primes i.e. values=2 to 0. sub erase_primes { for my $row (@mask) { for my $j (0 .. 
$#$row) { $row->[$j] = 0 if $row->[$j] == 2; } } } #Step 6 - Find the minimum value from the rows and columns which are currently not covered. #Subtract this minimum value from all the elements of the columns which are not covered. #Add this minimum value to all the elements of the rows which are covered. #Proceed to step 4. sub stepsix { # print "Step 6 \n"; my ($i, $j); my $minval = 0; $minval = &find_smallest(); for($i=0;$i<=$#mat;$i++) { for($j=0;$j<=$#{$mat[$i]};$j++) { if($rowcov[$i] == 1) { $mat[$i][$j] += $minval; } if($colcov[$j] == 0) { $mat[$i][$j] -= $minval; } } } # print "Step 6 end \n"; } #Finds the minimum value from all the matrix values which are not covered. sub find_smallest { my $minval; for my $i (0 .. $#mat) { next if $rowcov[$i]; for my $j (0 .. $#mat) { next if $colcov[$j]; if( !defined($minval) || $minval > $mat[$i][$j]) { $minval = $mat[$i][$j]; } } } return $minval; } 1; __END__ =head1 NAME Algorithm::Munkres - Perl extension for Munkres' solution to classical Assignment problem for square and rectangular matrices This module extends the solution of Assignment problem for square matrices to rectangular matrices by padding zeros. Thus a rectangular matrix is converted to square matrix by padding necessary zeros. =head1 SYNOPSIS use Algorithm::Munkres; @mat = ( [2, 4, 7, 9], [3, 9, 5, 1], [8, 2, 9, 7], ); assign(\@mat,\@out_mat); Then the @out_mat array will have the output as: (0,3,1,2), where 0th element indicates that 0th row is assigned 0th column i.e value=2 1st element indicates that 1st row is assigned 3rd column i.e.value=1 2nd element indicates that 2nd row is assigned 1st column.i.e.value=2 3rd element indicates that 3rd row is assigned 2nd column.i.e.value=0 =head1 DESCRIPTION Assignment Problem: Given N jobs, N workers and the time taken by each worker to complete a job then how should the assignment of a Worker to a Job be done, so as to minimize the time taken. Thus if we have 3 jobs p,q,r and 3 workers x,y,z such that: x y z p 2 4 7 q 3 9 5 r 8 2 9 where the cell values of the above matrix give the time required for the worker(given by column name) to complete the job(given by the row name) then possible solutions are: Total 1. 2, 9, 9 20 2. 2, 2, 5 9 3. 3, 4, 9 16 4. 3, 2, 7 12 5. 8, 9, 7 24 6. 8, 4, 5 17 Thus (2) is the optimal solution for the above problem. This kind of brute-force approach of solving Assignment problem quickly becomes slow and bulky as N grows, because the number of possible solution are N! and thus the task is to evaluate each and then find the optimal solution.(If N=10, number of possible solutions: 3628800 !) Munkres' gives us a solution to this problem, which is implemented in this module. This module also solves Assignment problem for rectangular matrices (M x N) by converting them to square matrices by padding zeros. ex: If input matrix is: [2, 4, 7, 9], [3, 9, 5, 1], [8, 2, 9, 7] i.e 3 x 4 then we will convert it to 4 x 4 and the modified input matrix will be: [2, 4, 7, 9], [3, 9, 5, 1], [8, 2, 9, 7], [0, 0, 0, 0] =head1 EXPORT "assign" function by default. =head1 INPUT The input matrix should be in a two dimensional array(array of array) and the 'assign' subroutine expects a reference to this array and not the complete array. eg:assign(\@inp_mat, \@out_mat); The second argument to the assign subroutine is the reference to the output array. =head1 OUTPUT The assign subroutine expects references to two arrays as its input paramenters. The second parameter is the reference to the output array. 
This array is populated by assign subroutine. This array is single dimensional Nx1 matrix. For above example the output array returned will be: (0, 2, 1) where 0th element indicates that 0th row is assigned 0th column i.e value=2 1st element indicates that 1st row is assigned 2nd column i.e.value=5 2nd element indicates that 2nd row is assigned 1st column.i.e.value=2 =head1 SEE ALSO 1. http://216.249.163.93/bob.pilgrim/445/munkres.html 2. Munkres, J. Algorithms for the assignment and transportation Problems. J. Siam 5 (Mar. 1957), 32-38 3. François Bourgeois and Jean-Claude Lassalle. 1971. An extension of the Munkres algorithm for the assignment problem to rectangular matrices. Communication ACM, 14(12):802-804 =head1 AUTHOR Anagha Kulkarni, University of Minnesota Duluth kulka020 <at> d.umn.edu Ted Pedersen, University of Minnesota Duluth tpederse <at> d.umn.edu =head1 COPYRIGHT AND LICENSE Copyright (C) 2007-2008, Ted Pedersen and Anagha Kulkarni This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. =cut
neuralcoref/neuralcoref/train/scorer/lib/Algorithm/Munkres.pm/0
{ "file_path": "neuralcoref/neuralcoref/train/scorer/lib/Algorithm/Munkres.pm", "repo_id": "neuralcoref", "token_count": 5741 }
284
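Editor's note: the POD's 3×3 worked example (optimal total cost 9, output `(0, 2, 1)`) can be cross-checked against another Hungarian-algorithm implementation. A minimal sketch using SciPy's `linear_sum_assignment`:

```python
# Cross-check of the 3x3 assignment example from the documentation above,
# using SciPy's Hungarian-algorithm implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [2, 4, 7],   # job p: workers x, y, z
    [3, 9, 5],   # job q
    [8, 2, 9],   # job r
])

row_ind, col_ind = linear_sum_assignment(cost)
print(col_ind)                        # expected [0 2 1], matching the POD's (0, 2, 1)
print(cost[row_ind, col_ind].sum())   # expected 9, i.e. 2 + 5 + 2
```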
<jupyter_start><jupyter_text>Préparer des données (PyTorch) Installez les bibliothèques 🤗 *Transformers* et 🤗 *Datasets* pour exécuter ce *notebook*.<jupyter_code>!pip install datasets transformers[sentencepiece] import torch from transformers import AdamW, AutoTokenizer, AutoModelForSequenceClassification # Comme avant checkpoint = "camembert-base" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSequenceClassification.from_pretrained(checkpoint) sequences = [ "J'ai attendu un cours d'HuggingFace toute ma vie.", "Je déteste tellement ça !"] batch = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt") # C'est nouveau batch["labels"] = torch.tensor([1, 1]) optimizer = AdamW(model.parameters()) loss = model(**batch).loss loss.backward() optimizer.step() from datasets import load_dataset raw_datasets = load_dataset("paws-x", "fr") raw_datasets raw_train_dataset = raw_datasets["train"] raw_train_dataset[0] raw_train_dataset.features from transformers import AutoTokenizer checkpoint = "camembert-base" tokenizer = AutoTokenizer.from_pretrained(checkpoint) tokenized_sentences_1 = tokenizer(raw_datasets["train"]["sentence1"]) tokenized_sentences_2 = tokenizer(raw_datasets["train"]["sentence2"]) inputs = tokenizer("C'est la première phrase.", "C'est la deuxième.") inputs tokenizer.convert_ids_to_tokens(inputs["input_ids"]) tokenized_dataset = tokenizer( raw_datasets["train"]["sentence1"], raw_datasets["train"]["sentence2"], padding=True, truncation=True, ) def tokenize_function(example): return tokenizer(example["sentence1"], example["sentence2"], truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) tokenized_datasets from transformers import DataCollatorWithPadding data_collator = DataCollatorWithPadding(tokenizer=tokenizer) samples = tokenized_datasets["train"][:8] samples = {k: v for k, v in samples.items() if k not in ["idx", "sentence1", "sentence2"]} [len(x) for x in samples["input_ids"]] batch = data_collator(samples) {k: v.shape for k, v in batch.items()}<jupyter_output><empty_output>
notebooks/course/fr/chapter3/section2_pt.ipynb/0
{ "file_path": "notebooks/course/fr/chapter3/section2_pt.ipynb", "repo_id": "notebooks", "token_count": 784 }
285
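Editor's note: the notebook above stops at collating a hand-picked slice with `DataCollatorWithPadding`. A natural follow-up — a sketch assuming the `tokenized_datasets`, `tokenizer` and `data_collator` objects defined in the notebook, and the `paws-x` column names `id`, `sentence1`, `sentence2`, `label` — is to wire the tokenized split into a PyTorch `DataLoader` so every batch is dynamically padded:

```python
# Follow-up sketch: feed the tokenized split to a DataLoader with dynamic padding.
# Column names are assumed from the paws-x config; adjust them if they differ.
from torch.utils.data import DataLoader

train_ds = tokenized_datasets["train"].remove_columns(["id", "sentence1", "sentence2"])
train_ds = train_ds.rename_column("label", "labels")
train_ds.set_format("torch")

train_dataloader = DataLoader(
    train_ds, batch_size=8, shuffle=True, collate_fn=data_collator
)
batch = next(iter(train_dataloader))
print({k: v.shape for k, v in batch.items()})
```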
<jupyter_start><jupyter_text>Entraîner un nouveau *tokenizer* à partir d'un ancien Installez les bibliothèques 🤗 *Transformers* et 🤗 *Datasets* pour exécuter ce *notebook*.<jupyter_code>!pip install datasets transformers[sentencepiece] !apt install git-lfs<jupyter_output><empty_output><jupyter_text>Vous aurez besoin de configurer git, adaptez votre email et votre nom dans la cellule suivante.<jupyter_code>!git config --global user.email "you@example.com" !git config --global user.name "Your Name"<jupyter_output><empty_output><jupyter_text>Vous devrez également être connecté au Hub d'Hugging Face. Exécutez ce qui suit et entrez vos informations d'identification.<jupyter_code>from huggingface_hub import notebook_login notebook_login() from datasets import load_dataset # Le chargement peut prendre quelques minutes, alors prenez un café ou un thé pendant que vous attendez ! raw_datasets = load_dataset("code_search_net", "python") raw_datasets["train"] print(raw_datasets["train"][123456]["whole_func_string"]) # Ne décommentez pas la ligne suivante à moins que votre jeu de données soit petit ! # training_corpus = [raw_datasets["train"][i: i + 1000]["whole_func_string"] for i in range(0, len(raw_datasets["train"]), 1000)] training_corpus = ( raw_datasets["train"][i : i + 1000]["whole_func_string"] for i in range(0, len(raw_datasets["train"]), 1000) ) gen = (i for i in range(10)) print(list(gen)) print(list(gen)) def get_training_corpus(): return ( raw_datasets["train"][i : i + 1000]["whole_func_string"] for i in range(0, len(raw_datasets["train"]), 1000) ) training_corpus = get_training_corpus() def get_training_corpus(): dataset = raw_datasets["train"] for start_idx in range(0, len(dataset), 1000): samples = dataset[start_idx : start_idx + 1000] yield samples["whole_func_string"] from transformers import AutoTokenizer old_tokenizer = AutoTokenizer.from_pretrained("asi/gpt-fr-cased-base") example = '''def add_numbers(a, b): """Add the two numbers `a` and `b`.""" return a + b''' tokens = old_tokenizer.tokenize(example) tokens tokenizer = old_tokenizer.train_new_from_iterator(training_corpus, 52000) # prend un peu de temps tokens = tokenizer.tokenize(example) tokens print(len(tokens)) print(len(old_tokenizer.tokenize(example))) example = """class LinearLayer(): def __init__(self, input_size, output_size): self.weight = torch.randn(input_size, output_size) self.bias = torch.zeros(output_size) def __call__(self, x): return x @ self.weights + self.bias """ tokenizer.tokenize(example) tokenizer.save_pretrained("code-search-net-tokenizer") from huggingface_hub import notebook_login notebook_login() tokenizer.push_to_hub("code-search-net-tokenizer") # Remplacez "huggingface-course" ci-dessous par votre espace de nom réel pour utiliser votre propre tokenizer tokenizer = AutoTokenizer.from_pretrained("huggingface-course/code-search-net-tokenizer")<jupyter_output><empty_output>
notebooks/course/fr/chapter6/section2.ipynb/0
{ "file_path": "notebooks/course/fr/chapter6/section2.ipynb", "repo_id": "notebooks", "token_count": 1147 }
286
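Editor's note: the notebook compares the old and new tokenizers on a single code snippet. A quick sanity check over a larger sample — a sketch assuming `raw_datasets`, `old_tokenizer` and `tokenizer` from the notebook above — makes the compression gain easier to quantify:

```python
# Average token counts of the original tokenizer vs. the retrained one,
# over a small sample of Python functions from the corpus.
sample = raw_datasets["train"][:50]["whole_func_string"]

old_avg = sum(len(old_tokenizer.tokenize(code)) for code in sample) / len(sample)
new_avg = sum(len(tokenizer.tokenize(code)) for code in sample) / len(sample)
print(f"old tokenizer: {old_avg:.1f} tokens/function")
print(f"new tokenizer: {new_avg:.1f} tokens/function")
```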
<jupyter_start><jupyter_text>Résumé (PyTorch) Installez les bibliothèques 🤗 *Datasets* et 🤗 *Transformers* pour exécuter ce *notebook*.<jupyter_code>!pip install datasets transformers[sentencepiece] !pip install accelerate # Pour exécuter l'entraînement sur TPU, vous devez décommenter la ligne suivante : # !pip install cloud-tpu-client==0.10 torch==1.9.0 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl !apt install git-lfs<jupyter_output><empty_output><jupyter_text>Vous aurez besoin de configurer git, adaptez votre email et votre nom dans la cellule suivante.<jupyter_code>!git config --global user.email "you@example.com" !git config --global user.name "Your Name"<jupyter_output><empty_output><jupyter_text>Vous devrez également être connecté au Hub d'Hugging Face. Exécutez ce qui suit et entrez vos informations d'identification.<jupyter_code>from huggingface_hub import notebook_login notebook_login() from datasets import load_dataset french_dataset = load_dataset("amazon_reviews_multi", "fr") english_dataset = load_dataset("amazon_reviews_multi", "en") french_dataset def show_samples(dataset, num_samples=3, seed=42): sample = dataset["train"].shuffle(seed=seed).select(range(num_samples)) for example in sample: print(f"\n'>> Title: {example['review_title']}'") print(f"'>> Review: {example['review_body']}'") show_samples(french_dataset) french_dataset.set_format("pandas") french_df = french_dataset["train"][:] # Afficher les comptes des 20 premiers produits french_df["product_category"].value_counts()[:20] def filter_books(example): return ( example["product_category"] == "book" or example["product_category"] == "digital_ebook_purchase" ) french_dataset.reset_format() french_books = french_dataset.filter(filter_books) english_books = english_dataset.filter(filter_books) show_samples(french_dataset) from datasets import concatenate_datasets, DatasetDict books_dataset = DatasetDict() for split in english_books.keys(): books_dataset[split] = concatenate_datasets( [english_books[split], french_books[split]] ) books_dataset[split] = books_dataset[split].shuffle(seed=42) # Quelques exemples show_samples(books_dataset) books_dataset = books_dataset.filter(lambda x: len(x["review_title"].split()) > 2) from transformers import AutoTokenizer model_checkpoint = "google/mt5-small" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) inputs = tokenizer("J'ai adoré lire les Hunger Games !") inputs tokenizer.convert_ids_to_tokens(inputs.input_ids) max_input_length = 512 max_target_length = 30 def preprocess_function(examples): model_inputs = tokenizer( examples["review_body"], max_length=max_input_length, truncation=True ) # Configurer le tokenizer pour les cibles with tokenizer.as_target_tokenizer(): labels = tokenizer( examples["review_title"], max_length=max_target_length, truncation=True ) model_inputs["labels"] = labels["input_ids"] return model_inputs tokenized_datasets = books_dataset.map(preprocess_function, batched=True) generated_summary = "J'ai absolument adoré lire les Hunger Games" reference_summary = "J'ai adoré lire les Hunger Games" !pip install rouge_score from datasets import load_metric rouge_score = load_metric("rouge") scores = rouge_score.compute( predictions=[generated_summary], references=[reference_summary] ) scores scores["rouge1"].mid !pip install nltk import nltk nltk.download("punkt") from nltk.tokenize import sent_tokenize def three_sentence_summary(text): return "\n".join(sent_tokenize(text)[:3]) 
print(three_sentence_summary(books_dataset["train"][1]["review_body"])) def evaluate_baseline(dataset, metric): summaries = [three_sentence_summary(text) for text in dataset["review_body"]] return metric.compute(predictions=summaries, references=dataset["review_title"]) import pandas as pd score = evaluate_baseline(books_dataset["validation"], rouge_score) rouge_names = ["rouge1", "rouge2", "rougeL", "rougeLsum"] rouge_dict = dict((rn, round(score[rn].mid.fmeasure * 100, 2)) for rn in rouge_names) rouge_dict from transformers import AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) from transformers import Seq2SeqTrainingArguments batch_size = 8 num_train_epochs = 8 # Montre la perte d'entraînement à chaque époque logging_steps = len(tokenized_datasets["train"]) // batch_size model_name = model_checkpoint.split("/")[-1] args = Seq2SeqTrainingArguments( output_dir=f"{model_name}-finetuned-amazon-en-fr", evaluation_strategy="epoch", learning_rate=5.6e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, save_total_limit=3, num_train_epochs=num_train_epochs, predict_with_generate=True, logging_steps=logging_steps, push_to_hub=True, ) import numpy as np def compute_metrics(eval_pred): predictions, labels = eval_pred # Décoder les résumés générés en texte decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) # Remplacer -100 dans les étiquettes car nous ne pouvons pas les décoder labels = np.where(labels != -100, labels, tokenizer.pad_token_id) # Décoder les résumés de référence en texte decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # ROUGE attend une nouvelle ligne après chaque phrase decoded_preds = ["\n".join(sent_tokenize(pred.strip())) for pred in decoded_preds] decoded_labels = ["\n".join(sent_tokenize(label.strip())) for label in decoded_labels] # Calculer les scores ROUGE result = rouge_score.compute( predictions=decoded_preds, references=decoded_labels, use_stemmer=True ) # Extraire les scores médians result = {key: value.mid.fmeasure * 100 for key, value in result.items()} return {k: round(v, 4) for k, v in result.items()} from transformers import DataCollatorForSeq2Seq data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) tokenized_datasets = tokenized_datasets.remove_columns( books_dataset["train"].column_names ) features = [tokenized_datasets["train"][i] for i in range(2)] data_collator(features) from transformers import Seq2SeqTrainer trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics, ) trainer.train() trainer.evaluate() tokenized_datasets.set_format("torch") model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) from torch.utils.data import DataLoader batch_size = 8 train_dataloader = DataLoader( tokenized_datasets["train"], shuffle=True, collate_fn=data_collator, batch_size=batch_size, ) eval_dataloader = DataLoader( tokenized_datasets["validation"], collate_fn=data_collator, batch_size=batch_size ) from torch.optim import AdamW optimizer = AdamW(model.parameters(), lr=2e-5) from accelerate import Accelerator accelerator = Accelerator() model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader ) from transformers import get_scheduler num_train_epochs = 10 num_update_steps_per_epoch = 
len(train_dataloader) num_training_steps = num_train_epochs * num_update_steps_per_epoch lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps, ) def postprocess_text(preds, labels): preds = [pred.strip() for pred in preds] labels = [label.strip() for label in labels] # ROUGE attend une nouvelle ligne après chaque phrase preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds] labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels] return preds, labels from huggingface_hub import get_full_repo_name model_name = "test-bert-finetuned-squad-accelerate" repo_name = get_full_repo_name(model_name) repo_name from huggingface_hub import Repository output_dir = "results-mt5-finetuned-squad-accelerate" repo = Repository(output_dir, clone_from=repo_name) from tqdm.auto import tqdm import torch import numpy as np progress_bar = tqdm(range(num_training_steps)) for epoch in range(num_train_epochs): # Entraînement model.train() for step, batch in enumerate(train_dataloader): outputs = model(**batch) loss = outputs.loss accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) # Evaluation model.eval() for step, batch in enumerate(eval_dataloader): with torch.no_grad(): generated_tokens = accelerator.unwrap_model(model).generate( batch["input_ids"], attention_mask=batch["attention_mask"], ) generated_tokens = accelerator.pad_across_processes( generated_tokens, dim=1, pad_index=tokenizer.pad_token_id ) labels = batch["labels"] # Si nous n'avons pas rempli la longueur maximale, nous devons également remplir les étiquettes labels = accelerator.pad_across_processes( batch["labels"], dim=1, pad_index=tokenizer.pad_token_id ) generated_tokens = accelerator.gather(generated_tokens).cpu().numpy() labels = accelerator.gather(labels).cpu().numpy() # Remplacer -100 dans les étiquettes car nous ne pouvons pas les décoder labels = np.where(labels != -100, labels, tokenizer.pad_token_id) if isinstance(generated_tokens, tuple): generated_tokens = generated_tokens[0] decoded_preds = tokenizer.batch_decode( generated_tokens, skip_special_tokens=True ) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) decoded_preds, decoded_labels = postprocess_text( decoded_preds, decoded_labels ) rouge_score.add_batch(predictions=decoded_preds, references=decoded_labels) # Calculer les métriques result = rouge_score.compute() # Extraire les scores médians de ROUGE result = {key: value.mid.fmeasure * 100 for key, value in result.items()} result = {k: round(v, 4) for k, v in result.items()} print(f"Epoch {epoch}:", result) # Sauvegarder et télécharger accelerator.wait_for_everyone() unwrapped_model = accelerator.unwrap_model(model) unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save) if accelerator.is_main_process: tokenizer.save_pretrained(output_dir) repo.push_to_hub( commit_message=f"Training in progress epoch {epoch}", blocking=False ) from transformers import pipeline hub_model_id = "huggingface-course/mt5-small-finetuned-amazon-en-fr" summarizer = pipeline("summarization", model=hub_model_id) def print_summary(idx): review = books_dataset["test"][idx]["review_body"] title = books_dataset["test"][idx]["review_title"] summary = summarizer(books_dataset["test"][idx]["review_body"])[0]["summary_text"] print(f"'>>> Review: {review}'") print(f"\n'>>> Title: {title}'") print(f"\n'>>> Summary: {summary}'") print_summary(100) 
print_summary(0)<jupyter_output><empty_output>
notebooks/course/fr/chapter7/section5_pt.ipynb/0
{ "file_path": "notebooks/course/fr/chapter7/section5_pt.ipynb", "repo_id": "notebooks", "token_count": 4623 }
287
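Editor's note: besides the `pipeline()` call at the end of the notebook, the fine-tuned checkpoint can be queried directly with `generate()`, which makes decoding options such as beam search explicit. A sketch — the checkpoint name is the one pushed in the notebook; substitute your own repo and example review:

```python
# Direct generate() call on the fine-tuned checkpoint, with explicit beam search.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "huggingface-course/mt5-small-finetuned-amazon-en-fr"  # replace with your own repo
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

review = "J'ai adoré ce livre, les personnages sont attachants et l'intrigue est haletante."
inputs = tokenizer(review, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    summary_ids = model.generate(**inputs, num_beams=4, max_length=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```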
<jupyter_start><jupyter_text>Fonctions avancées d'Interface Installez les bibliothèques 🤗 Transformers et 🤗 Gradio pour exécuter ce *notebook*.<jupyter_code>!pip install datasets transformers[sentencepiece] !pip install gradio import random import gradio as gr def chat(message, history): history = history or [] if message.startswith("Combien"): response = random.randint(1, 10) elif message.startswith("Comment"): response = random.choice(["Super", "Bon", "Ok", "Mal"]) elif message.startswith("Où"): response = random.choice(["Ici", "Là", "Quelque part"]) else: response = "Je ne sais pas." history.append((message, response)) return history, history iface = gr.Interface( chat, ["text", "state"], ["chatbot", "state"], allow_screenshot=False, allow_flagging="never", ) iface.launch() import requests import tensorflow as tf import gradio as gr inception_net = tf.keras.applications.MobileNetV2() # charger le modèle # Télécharger des étiquettes lisibles par l'homme pour ImageNet response = requests.get("https://git.io/JJkYN") labels = response.text.split("\n") def classify_image(inp): inp = inp.reshape((-1, 224, 224, 3)) inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp) prediction = inception_net.predict(inp).flatten() return {labels[i]: float(prediction[i]) for i in range(1000)} image = gr.Image(shape=(224, 224)) label = gr.Label(num_top_classes=3) title = "Classification des images avec Gradio + Exemple d'interprétation" gr.Interface( fn=classify_image, inputs=image, outputs=label, interpretation="default", title=title ).launch()<jupyter_output><empty_output>
notebooks/course/fr/chapter9/section6.ipynb/0
{ "file_path": "notebooks/course/fr/chapter9/section6.ipynb", "repo_id": "notebooks", "token_count": 647 }
288
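Editor's note: for comparison with the stateful chat and image-classification demos above, the smallest possible `gr.Interface` — a sketch with a trivial function and no model — looks like this:

```python
# Minimal Interface: one function, one text input, one text output.
import gradio as gr

def reverse_text(text):
    return text[::-1]

demo = gr.Interface(fn=reverse_text, inputs="text", outputs="text")
demo.launch()
```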
<jupyter_start><jupyter_text>CLIP Guided Stable Diffusion using [d🧨ffusers](https://github.com/huggingface/diffusers) This notebook shows how to do CLIP guidance with Stable diffusion using diffusers libray. This allows you to use newly released [CLIP models by LAION AI.](https://huggingface.co/laion).This notebook is based on the following amazing repos, all credits to the original authors!- https://github.com/Jack000/glid-3-xl- https://github.dev/crowsonkb/k-diffusion Initial Setup<jupyter_code>#@title Instal dependancies !pip install -qqq diffusers==0.11.1 transformers ftfy gradio accelerate<jupyter_output> |████████████████████████████████| 229 kB 8.9 MB/s  |████████████████████████████████| 4.9 MB 68.2 MB/s  |████████████████████████████████| 53 kB 2.2 MB/s  |████████████████████████████████| 5.3 MB 51.1 MB/s  |████████████████████████████████| 163 kB 68.4 MB/s  |████████████████████████████████| 6.6 MB 46.5 MB/s  |████████████████████████████████| 55 kB 4.4 MB/s  |████████████████████████████████| 2.3 MB 56.0 MB/s  |████████████████████████████████| 57 kB 6.2 MB/s  |████████████████████████████████| 270 kB 48.4 MB/s  |████████████████████████████████| 112 kB 58.6 MB/s  |████████████████████████████████| 84 kB 4.2 MB/s  |████████████████████████████████| 54 kB 3.8 MB/s  |████████████████████████████████| 84 kB 2.4 MB/s  |████████████████████████████████| 212 kB 12.2 MB/s  |████████████████████████████████| 63 kB 2.7 MB/s  |██████████████████████████████[...]<jupyter_text>Authenticate with Hugging Face HubTo use private and gated models on 🤗 Hugging Face Hub, login is required. If you are only using a public checkpoint (such as `CompVis/stable-diffusion-v1-4` in this notebook), you can skip this step.<jupyter_code>#@title Login from huggingface_hub import notebook_login notebook_login()<jupyter_output>Login successful Your token has been saved to /root/.huggingface/token<jupyter_text>CLIP Guided Stable Diffusion<jupyter_code>#@title Load the pipeline import torch from PIL import Image from diffusers import LMSDiscreteScheduler, DiffusionPipeline, PNDMScheduler from transformers import CLIPFeatureExtractor, CLIPModel model_id = "CompVis/stable-diffusion-v1-4" #@param {type: "string"} clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" #@param ["laion/CLIP-ViT-B-32-laion2B-s34B-b79K", "laion/CLIP-ViT-L-14-laion2B-s32B-b82K", "laion/CLIP-ViT-H-14-laion2B-s32B-b79K", "laion/CLIP-ViT-g-14-laion2B-s12B-b42K", "openai/clip-vit-base-patch32", "openai/clip-vit-base-patch16", "openai/clip-vit-large-patch14"] {allow-input: true} scheduler = "plms" #@param ['plms', 'lms'] def image_grid(imgs, rows, cols): assert len(imgs) == rows*cols w, h = imgs[0].size grid = Image.new('RGB', size=(cols*w, rows*h)) grid_w, grid_h = grid.size for i, img in enumerate(imgs): grid.paste(img, box=(i%cols*w, i//cols*h)) return grid if scheduler == "lms": scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear") else: scheduler = PNDMScheduler.from_config(model_id, subfolder="scheduler") feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_model_id) clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) guided_pipeline = DiffusionPipeline.from_pretrained( model_id, custom_pipeline="clip_guided_stable_diffusion", custom_revision="main", # TODO: remove if diffusers>=0.12.0 clip_model=clip_model, feature_extractor=feature_extractor, scheduler=scheduler, torch_dtype=torch.float16, ) guided_pipeline = guided_pipeline.to("cuda") #@title Generate 
with Gradio Demo import gradio as gr import torch from torch import autocast from diffusers import StableDiffusionPipeline from PIL import Image last_model = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" def infer(prompt, clip_prompt, samples, steps, clip_scale, scale, seed, clip_model, use_cutouts, num_cutouts): global last_model print(last_model) if(last_model == clip_model): guided_pipeline = create_clip_guided_pipeline(model_id, clip_model_id) guided_pipeline = guided_pipeline.to("cuda") last_model = clip_model prompt = prompt clip_prompt = clip_prompt num_samples = samples num_inference_steps = steps guidance_scale = scale clip_guidance_scale = clip_scale if(use_cutouts): use_cutouts = "True" else: use_cutouts = "False" unfreeze_unet = "True" unfreeze_vae = "True" seed = seed if unfreeze_unet == "True": guided_pipeline.unfreeze_unet() else: guided_pipeline.freeze_unet() if unfreeze_vae == "True": guided_pipeline.unfreeze_vae() else: guided_pipeline.freeze_vae() generator = torch.Generator(device="cuda").manual_seed(seed) images = [] for i in range(num_samples): image = guided_pipeline( prompt, clip_prompt=clip_prompt if clip_prompt.strip() != "" else None, num_inference_steps=num_inference_steps, guidance_scale=guidance_scale, clip_guidance_scale=clip_guidance_scale, num_cutouts=num_cutouts, use_cutouts=use_cutouts == "True", generator=generator, ).images[0] images.append(image) #image_grid(images, 1, num_samples) return images css = """ .gradio-container { font-family: 'IBM Plex Sans', sans-serif; } .gr-button { color: white; border-color: black; background: black; } input[type='range'] { accent-color: black; } .dark input[type='range'] { accent-color: #dfdfdf; } .container { max-width: 730px; margin: auto; padding-top: 1.5rem; } #gallery { min-height: 22rem; margin-bottom: 15px; margin-left: auto; margin-right: auto; border-bottom-right-radius: .5rem !important; border-bottom-left-radius: .5rem !important; } #gallery>div>.h-full { min-height: 20rem; } .details:hover { text-decoration: underline; } .gr-button { white-space: nowrap; } .gr-button:focus { border-color: rgb(147 197 253 / var(--tw-border-opacity)); outline: none; box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); --tw-border-opacity: 1; --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); --tw-ring-opacity: .5; } #advanced-btn { font-size: .7rem !important; line-height: 19px; margin-top: 12px; margin-bottom: 12px; padding: 2px 8px; border-radius: 14px !important; } #advanced-options { display: none; margin-bottom: 20px; } .footer { margin-bottom: 45px; margin-top: 35px; text-align: center; border-bottom: 1px solid #e5e5e5; } .footer>p { font-size: .8rem; display: inline-block; padding: 0 10px; transform: translateY(10px); background: white; } .dark .footer { border-color: #303030; } .dark .footer>p { background: #0b0f19; } .acknowledgments h4{ margin: 1.25em 0 .25em 0; font-weight: bold; font-size: 115%; } """ block = gr.Blocks(css=css) examples = [ [ 'A high tech solarpunk utopia in the Amazon rainforest', 2, 45, 7.5, 1024, ], [ 'A pikachu fine dining with a view to the Eiffel Tower', 2, 45, 7, 1024, ], [ 'A mecha robot in a favela in expressionist style', 2, 45, 7, 1024, ], [ 'an insect robot preparing a delicious meal', 2, 45, 7, 1024, ], [ "A small cabin on top of 
a snowy mountain in the style of Disney, artstation", 2, 45, 7, 1024, ], ] with block: gr.HTML( """ <div style="text-align: center; max-width: 650px; margin: 0 auto;"> <div style=" display: inline-flex; align-items: center; gap: 0.8rem; font-size: 1.75rem; " > <svg width="0.65em" height="0.65em" viewBox="0 0 115 115" fill="none" xmlns="http://www.w3.org/2000/svg" > <rect width="23" height="23" fill="white"></rect> <rect y="69" width="23" height="23" fill="white"></rect> <rect x="23" width="23" height="23" fill="#AEAEAE"></rect> <rect x="23" y="69" width="23" height="23" fill="#AEAEAE"></rect> <rect x="46" width="23" height="23" fill="white"></rect> <rect x="46" y="69" width="23" height="23" fill="white"></rect> <rect x="69" width="23" height="23" fill="black"></rect> <rect x="69" y="69" width="23" height="23" fill="black"></rect> <rect x="92" width="23" height="23" fill="#D9D9D9"></rect> <rect x="92" y="69" width="23" height="23" fill="#AEAEAE"></rect> <rect x="115" y="46" width="23" height="23" fill="white"></rect> <rect x="115" y="115" width="23" height="23" fill="white"></rect> <rect x="115" y="69" width="23" height="23" fill="#D9D9D9"></rect> <rect x="92" y="46" width="23" height="23" fill="#AEAEAE"></rect> <rect x="92" y="115" width="23" height="23" fill="#AEAEAE"></rect> <rect x="92" y="69" width="23" height="23" fill="white"></rect> <rect x="69" y="46" width="23" height="23" fill="white"></rect> <rect x="69" y="115" width="23" height="23" fill="white"></rect> <rect x="69" y="69" width="23" height="23" fill="#D9D9D9"></rect> <rect x="46" y="46" width="23" height="23" fill="black"></rect> <rect x="46" y="115" width="23" height="23" fill="black"></rect> <rect x="46" y="69" width="23" height="23" fill="black"></rect> <rect x="23" y="46" width="23" height="23" fill="#D9D9D9"></rect> <rect x="23" y="115" width="23" height="23" fill="#AEAEAE"></rect> <rect x="23" y="69" width="23" height="23" fill="black"></rect> </svg> <h1 style="font-weight: 900; margin-bottom: 7px;"> CLIP Guided Stable Diffusion Demo </h1> </div> <p style="margin-bottom: 10px; font-size: 94%"> Demo allows you to use newly released <a href="https://huggingface.co/laion" style="text-decoration: underline">CLIP models by LAION AI</a> with Stable Diffusion </p> </div> """ ) with gr.Group(): with gr.Box(): with gr.Row().style(mobile_collapse=False, equal_height=True): text = gr.Textbox( label="Enter your prompt", show_label=False, max_lines=1, placeholder="Enter your prompt", ).style( border=(True, False, True, True), rounded=(True, False, False, True), container=False, ) btn = gr.Button("Generate image").style( margin=False, rounded=(False, True, True, False), ) gallery = gr.Gallery( label="Generated images", show_label=False, elem_id="gallery" ).style(grid=[2], height="auto") advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") with gr.Row(elem_id="advanced-options"): with gr.Column(): clip_prompt = gr.Textbox( label="Enter a CLIP prompt if you want it to differ", show_label=False, max_lines=1, placeholder="Enter a CLIP prompt if you want it to differ", ) with gr.Row(): samples = gr.Slider(label="Images", minimum=1, maximum=2, value=1, step=1) steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=45, step=1) with gr.Row(): use_cutouts = gr.Checkbox(label="Use cutouts?") num_cutouts = gr.Slider(label="Cutouts", minimum=1, maximum=16, value=4, step=1) with gr.Row(): with gr.Column(): clip_model = gr.Dropdown(["laion/CLIP-ViT-B-32-laion2B-s34B-b79K", "laion/CLIP-ViT-L-14-laion2B-s32B-b82K", 
"laion/CLIP-ViT-H-14-laion2B-s32B-b79K", "laion/CLIP-ViT-g-14-laion2B-s12B-b42K", "openai/clip-vit-base-patch32", "openai/clip-vit-base-patch16", "openai/clip-vit-large-patch14"], value="laion/CLIP-ViT-B-32-laion2B-s34B-b79K", show_label=False) with gr.Row(): scale = gr.Slider( label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1 ) seed = gr.Slider( label="Seed", minimum=0, maximum=2147483647, step=1, randomize=True, ) clip_scale = gr.Slider( label="CLIP Guidance Scale", minimum=0, maximum=5000, value=100, step=1 ) ex = gr.Examples(examples=examples, fn=infer, inputs=[text, samples, steps, scale, clip_scale, seed], outputs=gallery, cache_examples=False) ex.dataset.headers = [""] text.submit(infer, inputs=[text, clip_prompt, samples, steps, scale, clip_scale, seed, clip_model, use_cutouts, num_cutouts], outputs=gallery) btn.click(infer, inputs=[text, clip_prompt, samples, steps, scale, clip_scale, seed, clip_model, use_cutouts, num_cutouts], outputs=gallery) advanced_button.click( None, [], text, _js=""" () => { const options = document.querySelector("body > gradio-app").querySelector("#advanced-options"); options.style.display = ["none", ""].includes(options.style.display) ? "flex" : "none"; }""", ) gr.HTML( """ <div class="footer"> <p>Model by <a href="https://huggingface.co/CompVis" style="text-decoration: underline;" target="_blank">CompVis</a> and <a href="https://huggingface.co/stabilityai" style="text-decoration: underline;" target="_blank">Stability AI</a> - Gradio Demo by 🤗 Hugging Face </p> </div> <div class="acknowledgments"> <p><h4>LICENSE</h4> The model is licensed with a <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" style="text-decoration: underline;" target="_blank">CreativeML Open RAIL-M</a> license. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank" style="text-decoration: underline;" target="_blank">read the license</a></p> <p><h4>Biases and content acknowledgment</h4> Despite how impressive being able to turn text into image is, beware to the fact that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the <a href="https://laion.ai/blog/laion-5b/" style="text-decoration: underline;" target="_blank">LAION-5B dataset</a>, which scraped non-curated image-text-pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. 
You can read more in the <a href="https://huggingface.co/CompVis/stable-diffusion-v1-4" style="text-decoration: underline;" target="_blank">model card</a></p> </div> """ ) block.launch(debug=True) #@title Generate on Colab prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece" #@param {type: "string"} #@markdown `clip_prompt` is optional, if you leave it blank the same prompt is sent to Stable Diffusion and CLIP clip_prompt = "" #@param {type: "string"} num_samples = 1 #@param {type: "number"} num_inference_steps = 50 #@param {type: "number"} guidance_scale = 7.5 #@param {type: "number"} clip_guidance_scale = 100 #@param {type: "number"} num_cutouts = 4 #@param {type: "number"} use_cutouts = "False" #@param ["False", "True"] unfreeze_unet = "True" #@param ["False", "True"] unfreeze_vae = "True" #@param ["False", "True"] seed = 3788086447 #@param {type: "number"} if unfreeze_unet == "True": guided_pipeline.unfreeze_unet() else: guided_pipeline.freeze_unet() if unfreeze_vae == "True": guided_pipeline.unfreeze_vae() else: guided_pipeline.freeze_vae() generator = torch.Generator(device="cuda").manual_seed(seed) images = [] for i in range(num_samples): image = guided_pipeline( prompt, clip_prompt=clip_prompt if clip_prompt.strip() != "" else None, num_inference_steps=num_inference_steps, guidance_scale=guidance_scale, clip_guidance_scale=clip_guidance_scale, num_cutouts=num_cutouts, use_cutouts=use_cutouts == "True", generator=generator, ).images[0] images.append(image) image_grid(images, 1, num_samples)<jupyter_output><empty_output>
notebooks/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb/0
{ "file_path": "notebooks/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb", "repo_id": "notebooks", "token_count": 9092 }
289
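Editor's note: to see what CLIP guidance actually adds, it helps to generate a baseline with the plain `StableDiffusionPipeline` using the same seed and one of the demo's example prompts. A sketch for side-by-side comparison:

```python
# Baseline without CLIP guidance, for comparison with the
# clip_guided_stable_diffusion outputs above.
import torch
from diffusers import StableDiffusionPipeline

plain_pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator("cuda").manual_seed(0)
baseline = plain_pipe(
    "an insect robot preparing a delicious meal",  # prompt from the demo's example list
    num_inference_steps=50,
    generator=generator,
).images[0]
baseline.save("baseline.png")
```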
<jupyter_start><jupyter_text>**The Stable Diffusion Guide** 🎨 *...using `🧨 diffusers`* **Intro**Stable Diffusion is a [Latent Diffusion model](https://github.com/CompVis/latent-diffusion) developed by researchers from the Machine Vision and Learning group at LMU Munich, *a.k.a* CompVis.Model checkpoints were publicly released at the end of August by a collaboration of Stability AI, CompVis, and Runway with support from EleutherAI and LAION. For more information, you can check out [the official blog post](https://stability.ai/blog/stable-diffusion-public-release).Since its public release the community has done an incredible job at working together to make the stable diffusion checkpoints **faster**, **more memory efficient**, and **more performant**.[🧨 Diffusers](https://github.com/huggingface/diffusers) offers a simple API to run stable diffusion with all memory, compute and quality improvements.This notebook walks you through the improvements one-by-one and you can best leverage *Stable Diffusion* for **inference**. SetupFirst, please make sure you are using a GPU runtime to run this notebook, so inference is much faster. If the following command fails, use the `Runtime` menu above and select `Change runtime type`.<jupyter_code>!nvidia-smi<jupyter_output>Thu Jan 5 10:04:48 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 72C P0 30W / 70W | 9870MiB / 15109MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-------[...]<jupyter_text>Next, you should install `diffusers` as well `scipy`, `ftfy` and `transformers`.<jupyter_code>!pip install diffusers[torch]==0.11.1 transformers scipy ftfy accelerate<jupyter_output>Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Collecting diffusers[torch] Downloading diffusers-0.11.1-py3-none-any.whl (524 kB)  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 524.9/524.9 KB 9.6 MB/s eta 0:00:00 [?25hCollecting transformers Downloading transformers-4.25.1-py3-none-any.whl (5.8 MB)  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 71.7 MB/s eta 0:00:00 [?25hRequirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (1.7.3) Collecting ftfy Downloading ftfy-6.1.1-py3-none-any.whl (53 kB)  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.1/53.1 KB 7.7 MB/s eta 0:00:00 [?25hCollecting accelerate Downloading accelerate-0.15.0-py3-none-any.whl (191 kB)  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 191.5/191.5 KB 774.2 kB/s eta 0:00:00 [...]<jupyter_text>Prompt Engineering 🎨When running *Stable Diffusion* in inference, we usually want to generate a certain type, style of image and then improve upon it. Improving upon a previously generated image means running inference over and over again with a different prompt and potentially a different seed until we are happy with our generation. So to begin with, it is most important to speed up stable diffusion as much as possible to generate as many pictures as possible in a given amount of time. 
This can be done by both improving the **computational efficiency** (speed) and the **memory efficiency** (GPU RAM).Let's start by looking into the computational efficiency first. Through-out the notebook, we will focus on [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5):<jupyter_code>model_id = "runwayml/stable-diffusion-v1-5"<jupyter_output><empty_output><jupyter_text>Let's load the pipeline as defined on https://huggingface.co/runwayml/stable-diffusion-v1-5 . Speed Optimization<jupyter_code>from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained(model_id)<jupyter_output><empty_output><jupyter_text>We aim at generating a beautiful photograph of an *old warrior chief* and will later try to find the best prompt to generate such a photograph. For now, let's keep the prompt simple:<jupyter_code>prompt = "portrait photo of a old warrior chief"<jupyter_output><empty_output><jupyter_text>To begin with, we should make sure we run inference on GPU, so let's move the pipeline to GPU, just like you would with any PyTorch module.<jupyter_code>pipe = pipe.to("cuda")<jupyter_output><empty_output><jupyter_text>To generate an image, you should use the `__call__` method. You can have a look at its docstring to better understand what arguments can be passed [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2imgdiffusers.StableDiffusionPipeline.__call__). To make sure we can reproduce more or less the same image in every call, let's make use of the generator. See documentation on reproducibality [here]( ) for more information.<jupyter_code>import torch generator = torch.Generator("cuda").manual_seed(0)<jupyter_output><empty_output><jupyter_text>Now, let's take a spin on it.<jupyter_code>image = pipe(prompt, generator=generator).images[0] image<jupyter_output><empty_output><jupyter_text>Cool, this now took roughly 30 seconds on a T4 GPU (you might see faster inference if you're allocated GPU is better than a T4).The default run we did above used full float32 precision and run the default 50 inference steps. The easiest speed-ups come from switiching to float16 (or half) precision and simply running less inference steps. Let's load the model now instead in float16.<jupyter_code>pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda")<jupyter_output><empty_output><jupyter_text>And we can again call the pipeline to generate an image.<jupyter_code>generator = torch.Generator("cuda").manual_seed(0) image = pipe(prompt, generator=generator).images[0] image<jupyter_output><empty_output><jupyter_text>Cool, this is almost three times as fast for arguably the exact same image quality. We strongly suggest to always run your pipelines in float16 as so far we have very rarely seen degradations in quality because of it.Next, let's see if we really need to use 50 inference steps or whether we could use significantly less. Let's have a look at all schedulers, the stable diffusion pipeline is compatible with.<jupyter_code>pipe.scheduler.compatibles<jupyter_output><empty_output><jupyter_text>Cool, that's a lot of schedulers.🧨 Diffusers is constantely adding a bunch of novel schedulers/samplers that can be used with Stable Diffusion. 
For more information, we recommend taking a look at the official documentation [here](https://huggingface.co/docs/diffusers/main/en/api/schedulers/overview). Alright, right now Stable Diffusion is using the `PNDMScheduler`, which usually requires around 50 inference steps. However, other schedulers such as `DPMSolverMultistepScheduler` or `DPMSolverSinglestepScheduler` seem to get away with just 20 to 25 inference steps. Let's try them out. You can set a new scheduler by making use of the [from_config](https://huggingface.co/docs/diffusers/main/en/api/configurationdiffusers.ConfigMixin.from_config) function.<jupyter_code>from diffusers import DPMSolverMultistepScheduler pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)<jupyter_output><empty_output><jupyter_text>Now, let's try to reduce the number of inference steps to just 20.<jupyter_code>generator = torch.Generator("cuda").manual_seed(0) image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] image<jupyter_output><empty_output><jupyter_text>The image now does look a little different, but it's arguably still of equally high quality. We cut inference time down to just 4 seconds though 😍. Memory Optimization More memory indirectly also means more speed, since we're often trying to maximize how many images we can generate per second: the more images we can fit into a single inference call, the more images per second overall. The easiest way to see how far we can push memory usage is to simply try it out and see when we hit an *"Out-of-memory (OOM)"* error. One can run batched inference by simply passing a list of prompts and generators. Let's define a quick function that generates a batch for us.<jupyter_code>def get_inputs(batch_size=1): generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] prompts = batch_size * [prompt] num_inference_steps = 20 return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps}<jupyter_output><empty_output><jupyter_text>We also need a method that allows us to easily display a batch of images.<jupyter_code>from PIL import Image def image_grid(imgs, rows=2, cols=2): w, h = imgs[0].size grid = Image.new('RGB', size=(cols*w, rows*h)) for i, img in enumerate(imgs): grid.paste(img, box=(i%cols*w, i//cols*h)) return grid<jupyter_output><empty_output><jupyter_text>Cool, let's see how much memory we can use, starting with batch_size=4.<jupyter_code>images = pipe(**get_inputs(batch_size=4)).images image_grid(images)<jupyter_output><empty_output><jupyter_text>Going over a batch_size of 4 will error out in this colab. Also, we can see we only generate slightly more images per second (3.75s/image) compared to 4s/image previously. However, the community has found some nice tricks to push the memory constraints further${}^1$. By far most of the memory is taken up by the cross-attention layers. Instead of running this operation in batch, one can run it sequentially to save a significant amount of memory. It can easily be enabled by calling `enable_attention_slicing`, as documented [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2imgdiffusers.StableDiffusionPipeline.enable_attention_slicing).---${}^1$ After Stable Diffusion was released, the community found improvements within days and shared them freely over GitHub - open-source at its finest! I believe the original idea came from [this](https://github.com/basujindal/stable-diffusion/pull/117)
GitHub thread.<jupyter_code>pipe.enable_attention_slicing()<jupyter_output><empty_output><jupyter_text>Great now that attention slicing is enabled, let's try to double the batch size again, going for `batch_size=8`.<jupyter_code>images = pipe(**get_inputs(batch_size=8)).images image_grid(images, rows=2, cols=4)<jupyter_output><empty_output><jupyter_text>Nice, it works. However, the speed gain is again not very big (it might however be much more significant on other GPUs).We're at roughly 3.5 seconds per image 🔥 which is probably the fastest we can be with a simple T4 without sacrificing quality.Next, let's look into how to improve the quality! Quality ImprovementsNow that our image generation pipeline is blazing fast, let's try to get maximum image quality.First of all, image quality is extremely subjective, so it's difficult to make general claims here.The most obvious step to take to improve quality is to use *better checkpoints*. Since the release of Stable Diffusion many improved versions have been released, which are summarized here:- *Official Release - 22 Aug 2022*: [Stable-Diffusion 1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)- *20 October 2022*: [Stable-Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)- *24 Nov 2022*: [Stable-Diffusion 2.0](https://huggingface.co/stabilityai/stable-diffusion-2-0)- *7 Dec 2022*: [Stable-Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)Newer versions don't necessarily mean better image quality with the same parameters. People mentioned that *2.0* is slighly worse than *1.5* for certain prompts, but given the right prompt engineering *2.0* and *2.1* seem to be better. Overall, we strongly recommend to just try the models out and read up on advice online $^{2}$Additionally, the community has started fine-tuning many of the above versions on certain styles with some of them having an extremely high quality and gaining a lot of traction. We recommend to simply have a look at all [diffusers checkpoints sorted by downloads and try out the different checkpoints](https://huggingface.co/models?library=diffusers).For the following, we will stick to v1.5 for simplicity.---$^{2}$ E.g. it has been shown that using negative prompts is very important for 2.0 and 2.1 to get the highest possible quality. See for example [this nice blog post](https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/). Next, we can also try to optimize single components of the pipeline, e.g. switching out the latent decoder. For more details on how the whole Stable Diffusion pipeline works, please have a look at [this blog post](https://huggingface.co/blog/stable_diffusion).Let's load [stabilityai's newest auto-decoder](https://huggingface.co/stabilityai/stable-diffusion-2-1).<jupyter_code>from diffusers import AutoencoderKL vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")<jupyter_output><empty_output><jupyter_text>Now we can set it to the vae of the pipeline to use it.<jupyter_code>pipe.vae = vae<jupyter_output><empty_output><jupyter_text>Let's run the same prompt as before to compare quality.<jupyter_code>images = pipe(**get_inputs(batch_size=8)).images image_grid(images, rows=2, cols=4)<jupyter_output><empty_output><jupyter_text>Seems like the difference is only very minor, but the new generations are arguably a bit *sharper*. Cool, finally, let's look a bit into prompt engineering.Our goal was to generate a photo of an old warrior chief. 
Let's now try to bring a bit more color into the photos and make them look more impressive. Originally our prompt was "*portrait photo of a old warrior chief*". To improve the prompt, it often helps to add the kind of cues people use online when tagging and saving high-quality photos, as well as more details. Essentially, when doing prompt engineering, one has to think: - How was the photo, or a similar photo to the one I want, probably tagged and stored on the internet? - What additional detail can I give that steers the model towards the style that I want? Cool, let's add more details.<jupyter_code>prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes"<jupyter_output><empty_output><jupyter_text>and let's also add some cues that usually help to generate higher quality images.<jupyter_code>prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" prompt<jupyter_output><empty_output><jupyter_text>Cool, let's now try this prompt.<jupyter_code>images = pipe(**get_inputs(batch_size=8)).images image_grid(images, rows=2, cols=4)<jupyter_output><empty_output><jupyter_text>Pretty impressive! We definitely got some very high-quality image generations there. The 2nd image is my personal favorite, so I'll re-use this seed and see whether I can tweak the prompts slightly by using "oldest warrior", "old", "", and "young" instead of "old".<jupyter_code>prompts = [ "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", ] generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] # 1 because we want the 2nd image images = pipe(prompt=prompts, generator=generator, num_inference_steps=25).images image_grid(images)<jupyter_output><empty_output>
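One more prompt-engineering lever worth mentioning, echoing footnote 2 above: the pipeline also accepts a `negative_prompt` describing what you *don't* want to see. The snippet below is our own hedged sketch rather than part of the original notebook, and the particular negative prompt is just an illustration:

```python
# reuse the favorite seed and steer the sampler away from common failure modes
generator = torch.Generator("cuda").manual_seed(1)
image = pipe(
    prompt,
    negative_prompt="blurry, low quality, cartoon, deformed hands",
    generator=generator,
    num_inference_steps=25,
).images[0]
image
```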
notebooks/diffusers/sd_101_guide.ipynb/0
{ "file_path": "notebooks/diffusers/sd_101_guide.ipynb", "repo_id": "notebooks", "token_count": 5420 }
290
""" On one node, launch with `deepspeed --num_gpus N idefics_zero3_finetuning.py` by replacing N with the number of your GPUs For several nodes, using Slurm, a template script is provided at `examples/idefics/idefics_zero3_finetuning/slurm_script_idefics_zero3_finetuning_multinode.slurm` For more information, follow the tutorial on using DeepSpeed with Transformers at https://huggingface.co/docs/transformers/main_classes/deepspeed """ import torch import torchvision.transforms as transforms from datasets import load_dataset from PIL import Image from transformers import AutoProcessor, IdeficsForVisionText2Text, Trainer, TrainingArguments device = "cuda" if torch.cuda.is_available() else "cpu" checkpoint = "HuggingFaceM4/idefics-9b" processor = AutoProcessor.from_pretrained(checkpoint, use_auth_token=True) def convert_to_rgb(image): # `image.convert("RGB")` would only work for .jpg images, as it creates a wrong background # for transparent images. The call to `alpha_composite` handles this case if image.mode == "RGB": return image image_rgba = image.convert("RGBA") background = Image.new("RGBA", image_rgba.size, (255, 255, 255)) alpha_composite = Image.alpha_composite(background, image_rgba) alpha_composite = alpha_composite.convert("RGB") return alpha_composite def ds_transforms(example_batch): image_size = processor.image_processor.image_size image_mean = processor.image_processor.image_mean image_std = processor.image_processor.image_std image_transform = transforms.Compose( [ convert_to_rgb, transforms.RandomResizedCrop( (image_size, image_size), scale=(0.9, 1.0), interpolation=transforms.InterpolationMode.BICUBIC ), transforms.ToTensor(), transforms.Normalize(mean=image_mean, std=image_std), ] ) prompts = [] for i in range(len(example_batch["caption"])): # We split the captions to avoid having very long examples, which would require more GPU ram during training caption = example_batch["caption"][i].split(".")[0] try: # There are a handful of images that are not hosted anymore. This is a small (dummy) hack to skip these processor.image_processor.fetch_images(example_batch["image_url"][i]) except Exception: print( "Warning: at least one image couldn't be retrieved from the internet in an example. Skipping the" " batch." ) prompts.append( [ example_batch["image_url"][i], f"Question: What's on the picture? Answer: This is {example_batch['name'][i]}. 
{caption}</s>", ], ) inputs = processor(prompts, transform=image_transform, return_tensors="pt").to(device) inputs["labels"] = inputs["input_ids"] return inputs # load and prepare dataset ds = load_dataset("TheFusion21/PokemonCards") ds = ds["train"].train_test_split(test_size=0.002) train_ds = ds["train"] eval_ds = ds["test"] train_ds.set_transform(ds_transforms) eval_ds.set_transform(ds_transforms) # Important, define the training_args before the model ds_config = { "communication_data_type": "fp32", "bf16": {"enabled": True}, "zero_optimization": { "stage": 3, "overlap_comm": False, "reduce_bucket_size": "auto", "contiguous_gradients": True, "stage3_gather_16bit_weights_on_model_save": False, "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 2e9, "stage3_max_reuse_distance": 2e9, "offload_optimizer": {"device": "none"}, "offload_param": {"device": "none"}, }, "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "steps_per_print": 2000000, } training_args = TrainingArguments( output_dir="idefics-pokemon", learning_rate=2e-4, bf16=True, per_device_train_batch_size=1, per_device_eval_batch_size=1, gradient_accumulation_steps=1, # gradient_checkpointing=True, # Uncomment if OOM dataloader_pin_memory=False, save_total_limit=3, evaluation_strategy="steps", save_strategy="steps", save_steps=40, eval_steps=20, logging_steps=20, max_steps=40, remove_unused_columns=False, push_to_hub=False, label_names=["labels"], load_best_model_at_end=True, report_to="none", optim="adamw_torch", deepspeed=ds_config, ) model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16) trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=eval_ds, ) result = trainer.train() print(result) # Prints one per process - mostly here for sanity check
notebooks/examples/idefics/idefics_zero3_finetuning/idefics_zero3_finetuning.py/0
{ "file_path": "notebooks/examples/idefics/idefics_zero3_finetuning/idefics_zero3_finetuning.py", "repo_id": "notebooks", "token_count": 1980 }
291
<jupyter_start><jupyter_text>Explain *Anything* Like I'm Five: A Model for Open Domain Long Form Question Answering--- Table of Contents 1. [**Introduction**](intro) a. [Preliminaries](prelims) b. [Note on Data and Biases](reddit_biases)2. [**Task and Data Description**](task_description) 3. [**Sparse Retrieval:** Making Support Documents with ElasticSearch](elasticsearch) 4. [**Dense Retrieval:** Making Support Documents with an ELI5-Trained Model](dense_retrieval) b. [Contrastive Training with ELI5 In-Batch Negatives](dense_train) c. [Using the Trained Dense Retriever and Wikipedia Index](dense_use) d. [Retrieval Model Evaluation](dense_eval) 5. [**Generating Answers with a Sequence-to-Sequence Model**](generation) --- --- Screenshot of the demo for the question: "How do people make chocolate?" --- 1. IntroductionImagine that you are taken with a sudden desire to understand **how the fruit of a tropical tree gets transformed into chocolate bars**, or want to understand **the role of fever in the human body's immune response**: how would you go about finding that information?If your specific question has already been asked and answered clearly and succintly on one of the many question answering platforms available on the Internet (such as [**Quora**](https://www.quora.com/How-is-chocolate-made), [**Reddit**](https://www.reddit.com/user/ex_5_libris/comments/9c8gb1/chocolate_how_chocolate_is_made/), or [**Yahoo Answers**](https://answers.yahoo.com/question/index?qid=20070615082202AArsYN1)), you're in luck: modern search engines will probably take you to that pre-existing answer pretty reliably in a matter of a few clicks. If no one else has asked the exact question you are interested in, however, the process will be a little more involved. You will likely have to collect relevant information from a variety of sources, figure out how these pieces of knowledge fit together in relation to your query, and synthetize a narrative that answers your initial question.Now, wouldn't it be great if your computer could do all of that for you: **gather** the right sources (*e.g.* paragraphs from relevant Wikipedia pages), **synthetize** the information, and **write up** an easy-to-read, original summary of the relevant points? Such a system isn't quite available yet, at least not one that can provide *reliable* information in its summary. Even though current systems excel at finding an extractive span that answers a factoid question in a given document, they still find open-domain settings where a model needs to find its own sources of information and long answer generation challenging. Thankfully, a number of recent advances in natural language understanding and generation have made working toward solving this problem much easier! These advances include progress in the pre-training (e.g. [BART](https://arxiv.org/abs/1910.13461), [T5](https://arxiv.org/abs/1910.10683)) and evaluation (e.g. for [factuality](https://arxiv.org/abs/2004.04228)) of sequence-to-sequence models for conditional text generation, new ways to use language understanding models to find information in Wikipedia (e.g. 
[REALM](https://kentonl.com/pub/gltpc.2020.pdf), [DPR](https://arxiv.org/abs/2004.04906)), and a new training dataset introduced in the paper [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190).The [ELI5](https://arxiv.org/abs/1907.09190) dataset was built by gathering questions that were asked by community members of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit, along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.**In this notebook,** we show how we can take advantage of these recent advances to train a **long form question answering** system which takes in a question, fetches 10 relevant passages from a [Wikipedia snapshot](https://www.aclweb.org/anthology/2020.lrec-1.297/), and writes a multi-sentence answer based on the question and retrieved passages. In particular, training embedding-based retrieval models to gather supporting evidence for open-domain questions is relatively new research area: the last few months have seen some significant progress in cases where direct supervision is available, or with extensive task-specific pretraining. Here, we show how the ELI5 dataset allows us to **train a dense retrieval system without access to either**, making dense retrieval models more accessible. See [this presentation](https://docs.google.com/presentation/d/1A5wJEzFYGdNem7egJ-BTm6EMI3jGNe1lalyChYL54gw) from the [Hugging Face reading group](https://github.com/huggingface/awesome-papers) for a non-exhaustive overview of recent work in the field. Follow along to learn about the steps involved and read some background on the state of the art for some related tasks, or go straight to the: [**Live Demo!**](https://huggingface.co/qa/) (And don't forget to scroll down on the left sidebar to show all of the generation options!) 1.a - Preliminaries The implementation presented here relies on the [Hugging Face](https://huggingface.co/) [🤗transformers](https://github.com/huggingface/transformers) and [🤗nlp](https://github.com/huggingface/nlp) libraries. Wikipedia indexing relies on [ElasticSearch](https://www.elastic.co/elasticsearch) with its [python bindings](https://github.com/elastic/elasticsearch-py) for the sparse version, and [faiss](https://github.com/facebookresearch/faiss/) for the dense version. You can get all of these by running:> pip install elasticsearch > pip install faiss_gpu > pip install nlp > pip install transformers > > wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.1-linux-x86_64.tar.gz > tar -xzvf elasticsearch-7.7.1-linux-x86_64.tar.gz The training relies on two datasets: [ELI5](https://arxiv.org/abs/1907.09190), a processed version of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit, and the [Wiki40b](https://www.aclweb.org/anthology/2020.lrec-1.297/) Wikipedia image. 
You can download both using the [🤗nlp](https://github.com/huggingface/nlp) linrary with:<jupyter_code>import nlp eli5 = nlp.load_dataset('eli5') wiki40b_snippets = nlp.load_dataset('wiki_snippets', name='wiki40b_en_100_0')['train']<jupyter_output><empty_output><jupyter_text>Additionally, all of the useful methods used in this notebook are compiled in the [lfqa_utils.py](https://github.com/huggingface/longform-qa/lfqa_utils.py) script:<jupyter_code>from lfqa_utils import *<jupyter_output><empty_output><jupyter_text>1.b - Note on Data and BiasesBefore we go any further, let us take a moment to talk about the provenance of our training data. While Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the [recent post from Reddit founder u/spez](https://www.reddit.com/r/announcements/comments/gxas21/upcoming_changes_to_our_content_policy_our_board/) outlining some of the ways he thinks the website's historical policies have been responsible for this problem, [Adrienne Massanari's 2015 article on GamerGate](https://www.researchgate.net/publication/283848479_Gamergate_and_The_Fappening_How_Reddit's_algorithm_governance_and_culture_support_toxic_technocultures) and follow-up works, or a [2019 Wired article on misogyny on Reddit](https://www.wired.com/story/misogyny-reddit-research/).While there has been some recent work in the NLP community on *de-biasing* models (e.g. [Black is to Criminal as Caucasian is to Police:Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047) for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.As mentioned above, the magnitude of the problem depends on the specific communities/subreddits. This work uses data from [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), and the `nlp` library also gives access to examples from [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/). There are some encouraging signs for all of these communities: [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) and [r/askscience](https://www.reddit.com/r/askscience/) have similar structures and purposes, and [r/askscience](https://www.reddit.com/r/askscience/) was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a [hackerfall post](https://hackerfall.com/story/study-and-interactive-visualization-of-toxicity-in), [thecut.com write-up](https://www.thecut.com/2015/03/interactive-chart-of-reddits-toxicity.html) and supporting [data](https://chart-studio.plotly.com/~bsbell21/210/toxicity-vs-supportiveness-by-subreddit/data)). Meanwhile, the [r/AskHistorians rules](https://www.reddit.com/r/AskHistorians/wiki/rules) mention that the admins will not tolerate "*racism, sexism, or any other forms of bigotry*".This is obviously not enough to exonerate the model (the pre-training step, for example, raises its own questions on that topic), and there is still a lot of interesting work to do to be able to quantify the biases in a conditional text generation model. 
One thing you can do to help: if you find any particularly egregious answers provided by the model when using the demo, or want to work on this research question please send a DM to [@YJernite on Twitter](https://twitter.com/YJernite)! 2. Task and Data DescriptionLet's recap: we are interested in the task of Long Form Question Answering. As in other Question Answering tasks, the model is presented with a question, and is required to generate a natural language answer. Whereas a majority of QA datasets contain mostly **factoid** questions, where the answer, such as a date or the name of a single entity, can be expressed in a few words or single sentence, Long Form QA focuses on questions which call for an **explanation** consisting of a few sentences or a few paragraphs.In order to teach a model to answer such questions, we use questions and answers written by Reddit users. Note that the `nlp.load_dataset` command above actually downloaded questions and their associated answers from the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits. We focus here on the **ELI5/explainlikeimfive** part to train the system, as these examples tend to be a little simpler. Let's look at one item from the test set:<jupyter_code>eli5['test_eli5'][12345]<jupyter_output><empty_output><jupyter_text>So here we have the question:> Why does water heated to room temperature feel colder than the air around it? This definitely requires a multi-step explanation: no single phrase can sum up all of the information we are looking for. Here are the answers that were given on ELI5, and were given scores of +5 and +2 respectively by Reddit users:> 1. Water transfers heat more efficiently than air. When something feels cold it's because heat is being transferred from your skin to whatever you're touching. Since water absorbs the heat more readily than air, it feels colder. > 2. Air isn't as good at transferring heat compared to something like water or steel (sit on a room temperature steel bench vs. a room temperature wooden bench, and the steel one will feel more cold). When you feel cold, what you're feeling is heat being transferred out of you. If there is no breeze, you feel a certain way. If there's a breeze, you will get colder faster (because the moving air is pulling the heat away from you), and if you get into water, its quite good at pulling heat from you. Get out of the water and have a breeze blow on you while you're wet, all of the water starts evaporating, pulling even more heat from you. First, note that in this case **we have two answers** which broadly describe the same phenomenon: the first one is scored higher because it is more succint and to the point. This example already illustrates one important feature of the LFQA task: **there are usually several valid ways to answer a given question.** Of the 272K examples in the ELI5 training set, nearly two thirds (167K) have at least two answers. We'll need to keep this in mind when training and evaluation of the model. Secondly, we need to give our model access to the information that is expressed in both these answers. Recently released models have been shown to include a significant amount of world knowledge in their parameters without the need of any external knowledge at all (see e.g. the [Closed-book QA performance of the T5 model](https://arxiv.org/abs/2002.08910)). 
There are several advantages to giving the model explicit access to information in text form however. First, a larger number of parameters in a model implies a larger computational cost. Secondly, getting information from a text database allows us to easily update the model's knowledge without having to re-train its parameters.--- --- Overview of the full question answering process. First, the Document Retriever selects a set of passages from Wikipedia that have information relevant to the question. Then, the Answer Generation Model reads the concatenation of the question and retrieverd passages, and writes out the answer.--- Here, we choose to give the model access to Wikipedia text. Full Wikipedia articles are typically too long for most current models to handle, and notable exceptions like the [Reformer](https://arxiv.org/abs/2001.04451) or [Longformer](https://arxiv.org/abs/2004.05150) architectures unfortunately do not yet have pre-trained sequence-to-sequence variants. Thus, we follow previous work in splitting Wikipedia articles into disjoint snippets of 100 words, and keep track of the title of the article and sections a snippet came from. Here's how you can get a pre-processed Wiki40b version split into 100-word passages with the `nlp` library, and an example snippet which has some of the information we're looking for ("*little conduction would occur since air is a poor conductor of heat*"):<jupyter_code>wiki40b_snippets[8991855]<jupyter_output><empty_output><jupyter_text>In the next two sections, we show how we can use either a [sparse retriever](elasticsearch) or a [trained dense retriever](dense_retrieval) to automatically find relevant snippets for a question. 3. Sparse Retrieval: Making Support Documents with ElasticSearchIn this section, we show how to use either such a "classical" Information Retrieval (IR) system based on **sparse** word matching with [ElasticSearch](https://www.elastic.co/elasticsearch/), an extremely popular and efficient search engine that can be used for finding documents that match a given query based on word overlap. Specifically, ElasticSearch provides a convenient way to index documents so they can easily be queried for nearest neighbor search using the [BM25](https://en.wikipedia.org/wiki/Okapi_BM25) similarity function (which relies on TF-IDF weighting of words). While this word-matching based approach has obvious limitations, such as failing to take synonyms and sometimes grammatical variation into account, it does pretty well overall and has only recently been overtaken by embedding-based systems for Wikipedia-based Open-Domain QA tasks.In order to use ElasticSearch, you will first need to launch a server. In a different window, run:> ./elasticsearch-7.7.0/bin/elasticsearchBy default, your ElasticSearch server will be listening on `localhost` port `9200`. To connect to it run:<jupyter_code>es_client = Elasticsearch([{'host': 'localhost', 'port': '9200'}])<jupyter_output><empty_output><jupyter_text>The `eli5_utils.py` script provides utilities to create (`make_es_index_snippets`) and query (`query_es_index`) an ElasticSearch index from within Python.The main implementation details are: 1. We index the article title, section title, and text of each of the passages for BM25 passages. 
These choices are implemented in the `index_config` variable: ```python index_config = { "settings": { "number_of_shards": 1, }, "mappings": { "properties": { "article_title": {"type": "text", "analyzer": "standard", "similarity": "BM25"}, "section_title": {"type": "text", "analyzer": "standard", "similarity": "BM25"}, "passage_text": {"type": "text", "analyzer": "standard", "similarity": "BM25"} } } }```2. To query the index, we found it useful to add a few task-dependent stop-words. The query text is then compared to all of the indexed fields, giving more weight to the passage text:```python banned = ['how', 'why', 'what', 'where', 'which', 'do', 'does', 'is', '?', 'eli5', 'eli5:'] q = ' '.join([w for w in q.split() if w not in banned]) response = es_client.search( index = index_name, body = { "query": { "multi_match": { "query": q, "fields": ["article_title", "section_title", "passage_text^2"], "type": "cross_fields", } }, "size": n_results, } )``` Here's the command to create the index, it should take one to three hours depending on your system.<jupyter_code>if not es_client.indices.exists('wiki40b_snippets_100w'): make_es_index_snippets(es_client, wiki40b_snippets, index_name='wiki40b_snippets_100w')<jupyter_output><empty_output><jupyter_text>Now let's test the ElasticSearch retriever with our running example ELI5 question about skin-to-water heat transfer by returning the 10 best candidate passages:<jupyter_code>question = eli5['test_eli5'][12345]['title'] doc, res_list = query_es_index(question, es_client, index_name='wiki40b_snippets_100w') df = pd.DataFrame({ 'Article': ['---'] + [res['article_title'] for res in res_list], 'Sections': ['---'] + [res['section_title'] if res['section_title'].strip() != '' else res['article_title'] for res in res_list], 'Text': ['--- ' + question] + [res['passage_text'] for res in res_list], }) df.style.set_properties(**{'text-align': 'left'})<jupyter_output><empty_output><jupyter_text>We can immediately see both the strengths and limitations of this approach. The system manages to retrieve documents that are all broadly on topic, mentioning some combination of *water*, *air*, *relative temperature*, and *temperature transfer*. In spite of this, only example 8 ends up containing information that is actually relevant to the question:> Cold air with high relative humidity "feels" colder than dry air of the same temperature because high humidity in cold weather increases the conduction of heat from the body. We got lucky this time, but this passage could as easily have been ranked 11th and not been included in the support document we provide to the answer generation system. As it is, the model will have to sort through mostly off-topic information to find this sentence when reading the resulting supporting document. 4. Retrieving Support Documents with an ELI5-Trained Dense ModelThe sparse retriever works by finding passages which feature the words from the query. However, it has no way to know *a priori* which of these words are more important in context, and seems to struggle with understanding the central theme of the query (human-perceived temperature).Thankfully, some recent works have taken advantage of advances in pre-trained contextual word representations to solve this problem. 
Models such as [DPR](https://arxiv.org/abs/2004.04906) or [REALM](https://arxiv.org/abs/2002.08909) for example learn to compute a vector representation of the query, as well as vector representations of Wikipedia passages in such a way that the passages that best answers a question maximize the dot product between the two representations. Retrieval is then reduced to a Maximum Inner Product Search, which can be executed efficiently using systems like [FAISS](https://github.com/facebookresearch/faiss).These successes are very encouraging for our Open-Domain Long Form QA application. However, our task and setup do not quite meet the requirements of either of either of these approaches. On the one hand, the [DPR](https://arxiv.org/abs/2004.04906) system is trained using gold passage annotations: most major QA dataset tell the system which Wikipedia passage contains the answer. Unfortunately, we do not have such annotations for the ELI5 data. On the other hand, while [REALM](https://arxiv.org/abs/2002.08909) is trained without passage supervision, it requires a pretty expensive pre-training step with an [Inverse Cloze Task](https://arxiv.org/abs/1906.00300) (100,000 steps with batch size 4096), and the ability to re-compute the embeddings of all Wikipedia passages regularly during training.In order to train a similar dense retrieval system at reduced cost without having access to gold passage annotation, we will have to **take advantage of another unique feature of our dataset**, namely the fact that the long form answers are quite similar in style to the Wikipedia passages we want to index. Our hypothesis then is that if we train a system to embed the questions and answers in our dataset in a way that allows us to easily match questions to answers, then using the answer embedder on Wikipedia passages should allow us to similarly match questions to supporting evidence from Wikipedia. 4.a - Contrastive Training with ELI5 In-Batch NegativesAs mentioned above, we want to train a system to produce question and answer embeddings, such that the dot product between the representation of a question and any of its answers is greater than between it and answers of all of the other questions in the dataset. Unfortunately, actually comparing all questions to all answers before taking every single gradient step is computationally prohibitive: instead, we follow previous work in simply processing medium to large batches of question-answer pairs, and making sure that the dot product of a question with its answer is larger than with all other answers in the batch, and *vice versa*. We use a cross-entropy loss for the multinomial distribution over all of the answers (or questions) in a batch, and make use of [PyTorch gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html) to be able to use large batches with limited GPU memory: you can find all implementation details in the `RetrievalQAEmbedder` class in `eli5_utils.py`.--- --- To train the retriever, we show the model batches of 512 question-answer pairs. The model needs to ensure that the embedding of each question in the batch is closer to the embedding of its corresponding answer than to the embedding of any other answer in the batch.--- We use a single BERT-style pre-trained model to embed the questions and answers, and learn different projection matrices to bring both representations down to dimension 128: the projection matrices are trained from scratch as the sentence embedding model is fine-tuned. 
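To make the batch-level objective concrete, here is a minimal sketch of the in-batch negative loss described above. It is our own illustration: the actual implementation lives in the `RetrievalQAEmbedder` class of `eli5_utils.py` and differs in its details (it also uses gradient checkpointing). We assume `q_reps` and `a_reps` are the projected 128-dimensional question and answer embeddings for one batch:

```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(q_reps, a_reps):
    # q_reps, a_reps: float tensors of shape (batch_size, 128)
    scores = torch.mm(q_reps, a_reps.t())  # (batch_size, batch_size) dot products
    targets = torch.arange(scores.size(0), device=scores.device)
    # each question should score highest against its own answer, and vice versa
    loss_qa = F.cross_entropy(scores, targets)
    loss_aq = F.cross_entropy(scores.t(), targets)
    return (loss_qa + loss_aq) / 2
```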
We found that the 8-layer distilled version of BERT from the [Well-Read Students Learn Better paper](https://arxiv.org/abs/1908.08962) performed as well or better as full BERT for a notable gain in computation speed: if you want an even faster model, that work provides pre-trained models spanning the full range of computation/accuracy trade-offs.The model can than be trained with the following code: with batch size 32/512 on a single 16GB GPU, you can run 10 training epochs in under 6 hours.<jupyter_code># training arguments class ArgumentsQAR(): def __init__(self): self.batch_size = 512 self.max_length = 128 self.checkpoint_batch_size = 32 self.print_freq = 100 self.pretrained_model_name = "google/bert_uncased_L-8_H-768_A-12" self.model_save_name = "retriever_models/eli5_retriever_model_l-8_h-768_b-512-512" self.learning_rate = 2e-4 self.num_epochs = 10 qar_args = ArgumentsQAR() # prepare torch Dataset objects qar_train_dset = ELI5DatasetQARetriver(eli5['train_eli5'], training=True) qar_valid_dset = ELI5DatasetQARetriver(eli5['validation_eli5'], training=False) # load pre-trained BERT and make model qar_tokenizer, qar_model = make_qa_retriever_model( model_name=qar_args.pretrained_model_name, from_file=None, device="cuda:0" ) # train the model train_qa_retriever(qar_model, qar_tokenizer, qar_train_dset, qar_valid_dset, qar_args)<jupyter_output><empty_output><jupyter_text>If you don't want to train the model yourself, you can also download trained weights from the [Hugging Face model repository](https://huggingface.co/models) with:<jupyter_code>qar_tokenizer = AutoTokenizer.from_pretrained('yjernite/retribert-base-uncased') qar_model = AutoModel.from_pretrained('yjernite/retribert-base-uncased').to('cuda:1') _ = qar_model.eval()<jupyter_output><empty_output><jupyter_text>Once the model is trained, it can be used to compute passage embeddings for all Wikipedia snippets. The `make_qa_dense_index` method takes advantage of `numpy` memory-mapping, so embeddings are written directly to disk. Again with a single GPU, computing the full set of passage embeddings should take about 18 hours.<jupyter_code>if not os.path.isfile('wiki40b_passages_reps_32_l-8_h-768_b-512-512.dat'): make_qa_dense_index( qar_model, qar_tokenizer, wiki40b_snippets, device='cuda:0', index_name='wiki40b_passages_reps_32_l-8_h-768_b-512-512.dat' )<jupyter_output><empty_output><jupyter_text>4.b - Using the Trained Dense Retriever and Wikipedia IndexNow that we have trained our model to compute query and answer embeddings and used it to compute passage embeddings for all our Wikipedia snippets, let's see whether it can actually find supporting evidence for a new question. Recall the the two steps to using the dense retriever: we first compute an embedding for a new question, then do Max Inner Product Search with the pre-computed passage representations.--- --- At test time, the Retriever Model encodes the question, and compares its embedding to the pre-computed representation of all the Wikipedia passages. The ten passages with the closest embedding are returned to create the support document.--- The MIPS part can be executed efficiently with the `faiss` library. Additionally, since we computed 128-dimensional passage embeddings, the whole of the representations fits on a GPU, making retrieval even faster. 
We can create the `faiss_gpu` index with the following code:<jupyter_code>faiss_res = faiss.StandardGpuResources() wiki40b_passage_reps = np.memmap( 'wiki40b_passages_reps_32_l-8_h-768_b-512-512.dat', dtype='float32', mode='r', shape=(wiki40b_snippets.num_rows, 128) ) wiki40b_index_flat = faiss.IndexFlatIP(128) wiki40b_gpu_index = faiss.index_cpu_to_gpu(faiss_res, 1, wiki40b_index_flat) wiki40b_gpu_index.add(wiki40b_passage_reps)<jupyter_output><empty_output><jupyter_text>Now we can use the `query_qa_dense_index` function to query the dense index for our running example question about perceived temperature:<jupyter_code>question = eli5['test_eli5'][12345]['title'] doc, res_list = query_qa_dense_index(question, qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, device='cuda:1') df = pd.DataFrame({ 'Article': ['---'] + [res['article_title'] for res in res_list], 'Sections': ['---'] + [res['section_title'] if res['section_title'].strip() != '' else res['article_title'] for res in res_list], 'Text': ['--- ' + question] + [res['passage_text'] for res in res_list], }) df.style.set_properties(**{'text-align': 'left'})<jupyter_output><empty_output><jupyter_text>The retrieved documents are quite different from the ones returned by the sparse retrieval, with a greater focus on how water helps draw heat from a body, either through evaporation or through better conduction, which is information the model needs to answer this question.The retriever still misses out on one aspect of the query: the way the question is formulated implies that in the considered scenario the person is immersed in water rather than just wet, which makes the "latent heat" and evaporation arguments a little less relevant, but that's a really subtle distinction! 4.c - Retriever Model EvaluationWe have trained a retrieval model that *seems* to be working a little better than the traditional word-matching based approach, at least on our running example. Before we use it to actually answer questions, however, we would like to be able to get some **quantitative evaluation** of the performances of both approaches.For the retriever, we want to favor **recall** over precision: our first priority is to make sure that all of the information needed to write the answers is present in the support document. If there is unrelated information, the generation model can learn to sort it out. We measure this by computing the proportion of words in the high-scoring answers which are present in the retrieved support document. To focus on important words, we also weigh answer words by their *Inverse Document Frequency*. This gives us the following **IDF-recall** scoring function:<jupyter_code># We first select high-scoring answers (answers beyond the first must have a score of at least 3) test_qa_list = [(exple['title'], ' '.join([a for i, (a, sc) in enumerate(zip(exple['answers']['text'], exple['answers']['score'])) \ if i == 0 or sc >= 3 ])) for exple in eli5['test_eli5']] # We then compute word frequencies in answer text answer_doc_freq = {} for q, a in test_qa_list: for w in a.lower().split(): answer_doc_freq[w] = answer_doc_freq.get(w, 0) + 1 # The IDF-recall function is then: def da_idf_recall(doc, answer): d_words = dict([(w, True) for w in doc.lower().split()]) a_words = answer.lower().split() recall = sum([1. / math.log(1 + answer_doc_freq.get(w, 1)) for w in a_words if w in d_words]) / \ sum([1. 
/ math.log(1 + answer_doc_freq.get(w, 1)) for w in a_words]) return recall<jupyter_output><empty_output><jupyter_text>The `evaluate_retriever` function in `eli5_utils.py` takes a retrieval and scoring function and computes both the average retrieval time and score of the document relative the the provided answer. Let's write some short-hand functions for the dense and sparse retrievers with our currently loaded indexes, and evaluate them on the ELI5 test set (be advised that evaluating the retriever on the full test set takes up to two hours):<jupyter_code>def dense_ret_for_eval(question, n_ret): _, dense_res_list = query_qa_dense_index( question, qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, n_results=n_ret, device='cuda:1' ) dense_doc = ' '.join([res['passage_text'] for res in dense_res_list]) return dense_doc def sparse_ret_for_eval(question, n_ret): _, sparse_res_list = query_es_index( question, es_client, index_name='wiki40b_snippets_100w', n_results=n_ret ) sparse_doc = ' '.join([res['passage_text'] for res in sparse_res_list]) return sparse_doc dense_score = evaluate_retriever(test_qa_list, dense_ret_for_eval, da_idf_recall) sparse_score = evaluate_retriever(test_qa_list, sparse_ret_for_eval, da_idf_recall) df = pd.DataFrame({ 'IDF-Recall': [sparse_score['idf_recall'], dense_score['idf_recall']], 'Time/Query': [sparse_score['retrieval_time'], dense_score['retrieval_time']], }, index=[ 'Sparse', 'Dense']) df.style.format({'IDF-Recall': "{:.4f}", 'Time/Query': "{:.4f}"})<jupyter_output><empty_output><jupyter_text>This metric obviously has limitations. Since it only looks at individual word matches, it is oblivious to *word order* or *paraphrases* among others. However, we can be encouraged by the fact that the dense retriever not only yields **higher IDF-recall**, it also takes **less than a third of the time** of the ElasticSearch-based system! Considering these results, we can confidently use it for the next part: training the sequence-to-sequence answer generation system. 5. Generating Answers with a Sequence-to-Sequence ModelNow that we know how to create an evidence document with supporting information for a given question, let's look into training the second component of our system: the **answer generation module**. We will instantiate it as a sequence-to-sequence model which uses the [BART](https://arxiv.org/abs/1910.13461) architecture, and initialize it with the [bart-large pretrained weights](https://huggingface.co/facebook/bart-large). In short, the [BART paper](https://arxiv.org/abs/1910.13461) uses a denoising auto-encoder style objective to pre-train an encoder-decoder model (similarly to how masked language modeling is used to pre-trained BERT-style encoders). Among other applications, they show that large-scale pre-training with their objective followed by fine-tuning on ELI5 data yields the state-of-the-art ROUGE performance for the original version of the dataset (which uses pre-computed support documents made from CommonCrawl pages).We provide the concatenation of the question and support document as input to the model, and train the decoder to minimize the perplexity of the gold answer. One notable choice is that **we train the model using all high-scoring answers in the training set**, so the model will see several instances of the same question-document input with different outputs. 
The supporting passages are separated by a special token ``, so the input for our running example will look like:> question: Why does water heated to room temperature feel colder than the air around it? context: \\ when the skin is completely wet. The body continuously loses ... this heat comes from the liquid itself and the surrounding gas and surfaces. \\ protected by a glass panel. Consequently, these types of collectors... Since heat loss due to convection cannot cross a vacuum, it forms an efficient isolation mechanism to keep heat inside the collector pipes. Since two flat \\ ... \\ changes. Conduction On... Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact. The process of heat transferThe first thing we do is pre-compute the support documents for the training and validation sets so we can use all available GPUs to train the sequence-to-sequence model. The model is then trained with the `train_qa_s2s` function in `eli5_utils.py`. A 16GB GPU accomodates up to two examples at a time, so here is the code to train the model using 4 GPUs with `torch.nn.DataPArallel`. One epoch should take about 18 hours:<jupyter_code># pre-computing support documents eli5_train_docs = [] for example in eli5['train_eli5']: support_doc, dense_res_list = query_qa_dense_index( example['title'], qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, n_results=n_ret ) eli5_train_docs += [(example['q_id'], support_doc, dense_res_list)] eli5_valid_docs = [] for example in eli5['validation_eli5']: support_doc, dense_res_list = query_qa_dense_index( example['title'], qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, n_results=n_ret ) eli5_valid_docs += [(example['q_id'], support_doc, dense_res_list)] # training loop proper class ArgumentsS2S(): def __init__(self): self.batch_size = 8 self.backward_freq = 16 self.max_length = 1024 self.print_freq = 100 self.model_save_name = "seq2seq_models/eli5_bart_model" self.learning_rate = 2e-4 self.num_epochs = 3 s2s_args = ArgumentsS2S() eli5_train_docs = json.load(open('precomputed/eli5_train_precomputed_dense_docs.json')) eli5_valid_docs = json.load(open('precomputed/eli5_valid_precomputed_dense_docs.json')) s2s_train_dset = ELI5DatasetS2S(eli5['train_eli5'], document_cache=dict([(k, d) for k, d, src_ls in eli5_train_docs])) s2s_valid_dset = ELI5DatasetS2S(eli5['validation_eli5'], document_cache=dict([(k, d) for k, d, src_ls in eli5_valid_docs]), training=False) qa_s2s_tokenizer, pre_model = make_qa_s2s_model( model_name="facebook/bart-large", from_file=None, device="cuda:0" ) qa_s2s_model = torch.nn.DataParallel(pre_model) train_qa_s2s(qa_s2s_model, qa_s2s_tokenizer, s2s_train_dset, s2s_valid_dset, s2s_args)<jupyter_output><empty_output><jupyter_text>Again, if you don't want to train the model yourself, we made trained weights available on the [Hugging Face model repository](https://huggingface.co/models) , which you can download with:<jupyter_code>qa_s2s_tokenizer = AutoTokenizer.from_pretrained('yjernite/bart_eli5') qa_s2s_model = AutoModelForSeq2SeqLM.from_pretrained('yjernite/bart_eli5').to('cuda:0') _ = qa_s2s_model.eval()<jupyter_output><empty_output><jupyter_text>We now have everything we need to answer any question! 
Now let's try the full system on our running example along with the first four questions of the test set:<jupyter_code>questions = [] answers = [] for i in [12345] + [j for j in range(4)]: # create support document with the dense index question = eli5['test_eli5'][i]['title'] doc, res_list = query_qa_dense_index( question, qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, device='cuda:1' ) # concatenate question and support document into BART input question_doc = "question: {} context: {}".format(question, doc) # generate an answer with beam search answer = qa_s2s_generate( question_doc, qa_s2s_model, qa_s2s_tokenizer, num_answers=1, num_beams=8, min_len=64, max_len=256, max_input_length=1024, device="cuda:0" )[0] questions += [question] answers += [answer] df = pd.DataFrame({ 'Question': questions, 'Answer': answers, }) df.style.set_properties(**{'text-align': 'left'})<jupyter_output><empty_output><jupyter_text>We made it, and a lot of these answers actually make sense! The model seems to sometimes struggle with coherence and with starting some of the answers, but we're getting some pretty good information overall.The last thing we'll do is see how we can get a quantitative evaluation of the model performance. Here, we'll use the ROUGE implementation provided in the `nlp` library. Note that it is a different implementation than the one used in the [BART](https://arxiv.org/abs/1910.13461) and [ELI5](https://arxiv.org/abs/1907.09190) papers: the [rouge](https://pypi.org/project/rouge/) Python package they use normalises all numerical values, among other pre-processing choices, leading to higher numbers. We reproduce their evaluation in the Appendix section, but recommend using the more sensitive metric provided by the `nlp` package, which can be computed with:<jupyter_code>predicted = [] reference = [] # Generate answers for the full test set for i in range(eli5['test_eli5'].num_rows): # create support document with the dense index question = eli5['test_eli5'][i]['title'] doc, res_list = query_qa_dense_index( question, qar_model, qar_tokenizer, wiki40b_snippets, wiki40b_gpu_index, device='cuda:1' ) # concatenate question and support document into BART input question_doc = "question: {} context: {}".format(question, doc) # generate an answer with beam search answer = qa_s2s_generate( question_doc, qa_s2s_model, qa_s2s_tokenizer, num_answers=1, num_beams=8, min_len=96, max_len=256, max_input_length=1024, device="cuda:0" )[0] predicted += [answer] reference += [eli5['test_eli5'][i]['answers']['text'][0]] # Compare each generation to the fist answer from the dataset nlp_rouge = nlp.load_metric('rouge') scores = nlp_rouge.compute( predicted, reference, rouge_types=['rouge1', 'rouge2', 'rougeL', 'rougeLsum'], use_agregator=True, use_stemmer=False ) df = pd.DataFrame({ 'rouge1': [scores['rouge1'].mid.precision, scores['rouge1'].mid.recall, scores['rouge1'].mid.fmeasure], 'rouge2': [scores['rouge2'].mid.precision, scores['rouge2'].mid.recall, scores['rouge2'].mid.fmeasure], 'rougeL': [scores['rougeL'].mid.precision, scores['rougeL'].mid.recall, scores['rougeL'].mid.fmeasure], }, index=[ 'P', 'R', 'F']) df.style.format({'rouge1': "{:.4f}", 'rouge2': "{:.4f}", 'rougeL': "{:.4f}"})<jupyter_output><empty_output><jupyter_text>That's it for today! And once again, if you want to play with the model a bit more and ask it whatever question comes to mind, please feel free to head over to: [**Our Live Demo!**](https://huggingface.co/qa/) Thank you for reading! 
Appendix: Here we reproduce the ROUGE evaluation from the original [ELI5 paper](https://arxiv.org/abs/1907.09190) to be able to compare our performance to theirs. Our generation setting leads to lower ROUGE-1 and ROUGE-2 than the state-of-the-art reported in [BART](https://arxiv.org/abs/1910.13461) (30.6 and 6.2 respectively), and higher ROUGE-L (24.3).<jupyter_code>from nltk import PorterStemmer from rouge import Rouge from spacy.lang.en import English from time import time stemmer = PorterStemmer() rouge = Rouge() tokenizer = English().Defaults.create_tokenizer() def compute_rouge_eli5(compare_list): preds = [" ".join([stemmer.stem(str(w)) for w in tokenizer(pred)]) for gold, pred in compare_list] golds = [" ".join([stemmer.stem(str(w)) for w in tokenizer(gold)]) for gold, pred in compare_list] scores = rouge.get_scores(preds, golds, avg=True) return scores compare_list = [(g, p) for p, g in zip(predicted, reference)] scores = compute_rouge_eli5(compare_list) df = pd.DataFrame({ 'rouge1': [scores['rouge-1']['p'], scores['rouge-1']['r'], scores['rouge-1']['f']], 'rouge2': [scores['rouge-2']['p'], scores['rouge-2']['r'], scores['rouge-2']['f']], 'rougeL': [scores['rouge-l']['p'], scores['rouge-l']['r'], scores['rouge-l']['f']], }, index=[ 'P', 'R', 'F']) df.style.format({'rouge1': "{:.4f}", 'rouge2': "{:.4f}", 'rougeL': "{:.4f}"})<jupyter_output><empty_output>
notebooks/longform-qa/Long_Form_Question_Answering_with_ELI5_and_Wikipedia.ipynb/0
{ "file_path": "notebooks/longform-qa/Long_Form_Question_Answering_with_ELI5_and_Wikipedia.ipynb", "repo_id": "notebooks", "token_count": 13060 }
292
import argparse import logging import os import sys import tensorflow as tf from datasets import load_dataset from tqdm import tqdm from transformers import AutoTokenizer, TFAutoModelForSequenceClassification from transformers.file_utils import is_sagemaker_dp_enabled if os.environ.get("SDP_ENABLED") or is_sagemaker_dp_enabled(): SDP_ENABLED = True os.environ["SAGEMAKER_INSTANCE_TYPE"] = "p3dn.24xlarge" import smdistributed.dataparallel.tensorflow as sdp else: SDP_ENABLED = False def fit(model, loss, opt, train_dataset, epochs, train_batch_size, max_steps=None): pbar = tqdm(train_dataset) for i, batch in enumerate(pbar): with tf.GradientTape() as tape: inputs, targets = batch outputs = model(batch) loss_value = loss(targets, outputs.logits) if SDP_ENABLED: tape = sdp.DistributedGradientTape(tape, sparse_as_dense=True) grads = tape.gradient(loss_value, model.trainable_variables) opt.apply_gradients(zip(grads, model.trainable_variables)) pbar.set_description(f"Loss: {loss_value:.4f}") if SDP_ENABLED: if i == 0: sdp.broadcast_variables(model.variables, root_rank=0) sdp.broadcast_variables(opt.variables(), root_rank=0) first_batch = False if max_steps and i >= max_steps: break train_results = {"loss": loss_value.numpy()} return train_results def get_datasets(): # Load dataset train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"]) # Preprocess train dataset train_dataset = train_dataset.map( lambda e: tokenizer(e["text"], truncation=True, padding="max_length"), batched=True ) train_dataset.set_format(type="tensorflow", columns=["input_ids", "attention_mask", "label"]) train_features = { x: train_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ["input_ids", "attention_mask"] } tf_train_dataset = tf.data.Dataset.from_tensor_slices((train_features, train_dataset["label"])) # Preprocess test dataset test_dataset = test_dataset.map( lambda e: tokenizer(e["text"], truncation=True, padding="max_length"), batched=True ) test_dataset.set_format(type="tensorflow", columns=["input_ids", "attention_mask", "label"]) test_features = { x: test_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ["input_ids", "attention_mask"] } tf_test_dataset = tf.data.Dataset.from_tensor_slices((test_features, test_dataset["label"])) if SDP_ENABLED: tf_train_dataset = tf_train_dataset.shard(sdp.size(), sdp.rank()) tf_test_dataset = tf_test_dataset.shard(sdp.size(), sdp.rank()) tf_train_dataset = tf_train_dataset.batch(args.train_batch_size, drop_remainder=True) tf_test_dataset = tf_test_dataset.batch(args.eval_batch_size, drop_remainder=True) return tf_train_dataset, tf_test_dataset if __name__ == "__main__": parser = argparse.ArgumentParser() # Hyperparameters sent by the client are passed as command-line arguments to the script. 
parser.add_argument("--epochs", type=int, default=3) parser.add_argument("--train_batch_size", type=int, default=16) parser.add_argument("--eval_batch_size", type=int, default=8) parser.add_argument("--model_name", type=str) parser.add_argument("--learning_rate", type=str, default=5e-5) parser.add_argument("--do_train", type=bool, default=True) parser.add_argument("--do_eval", type=bool, default=True) # Data, model, and output directories parser.add_argument("--output_data_dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"]) parser.add_argument("--model_dir", type=str, default=os.environ["SM_MODEL_DIR"]) parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"]) args, _ = parser.parse_known_args() # Set up logging logger = logging.getLogger(__name__) logging.basicConfig( level=logging.getLevelName("INFO"), handlers=[logging.StreamHandler(sys.stdout)], format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", ) if SDP_ENABLED: sdp.init() gpus = tf.config.experimental.list_physical_devices("GPU") for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) if gpus: tf.config.experimental.set_visible_devices(gpus[sdp.local_rank()], "GPU") # Load model and tokenizer model = TFAutoModelForSequenceClassification.from_pretrained(args.model_name) tokenizer = AutoTokenizer.from_pretrained(args.model_name) # get datasets tf_train_dataset, tf_test_dataset = get_datasets() # fine optimizer and loss optimizer = tf.keras.optimizers.Adam(learning_rate=args.learning_rate) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metrics = [tf.keras.metrics.SparseCategoricalAccuracy()] model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # Training if args.do_train: # train_results = model.fit(tf_train_dataset, epochs=args.epochs, batch_size=args.train_batch_size) train_results = fit( model, loss, optimizer, tf_train_dataset, args.epochs, args.train_batch_size, max_steps=None ) logger.info("*** Train ***") output_eval_file = os.path.join(args.output_data_dir, "train_results.txt") if not SDP_ENABLED or sdp.rank() == 0: with open(output_eval_file, "w") as writer: logger.info("***** Train results *****") logger.info(train_results) for key, value in train_results.items(): logger.info(" %s = %s", key, value) writer.write("%s = %s\n" % (key, value)) # Evaluation if args.do_eval and (not SDP_ENABLED or sdp.rank() == 0): result = model.evaluate(tf_test_dataset, batch_size=args.eval_batch_size, return_dict=True) logger.info("*** Evaluate ***") output_eval_file = os.path.join(args.output_data_dir, "eval_results.txt") with open(output_eval_file, "w") as writer: logger.info("***** Eval results *****") logger.info(result) for key, value in result.items(): logger.info(" %s = %s", key, value) writer.write("%s = %s\n" % (key, value)) # Save result if SDP_ENABLED: if sdp.rank() == 0: model.save_pretrained(args.model_dir) tokenizer.save_pretrained(args.model_dir) else: model.save_pretrained(args.model_dir) tokenizer.save_pretrained(args.model_dir)
notebooks/sagemaker/07_tensorflow_distributed_training_data_parallelism/scripts/train.py/0
{ "file_path": "notebooks/sagemaker/07_tensorflow_distributed_training_data_parallelism/scripts/train.py", "repo_id": "notebooks", "token_count": 2936 }
293
<jupyter_start><jupyter_text>HuggingFace Hub meets Amazon SageMaker Fine-tune a multi-class text classification model with `Trainer` and the `emotion` dataset, and push it to the [Hugging Face Hub](https://huggingface.co/models) IntroductionWelcome to our end-to-end multi-class text classification example. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer for multi-class text classification. In particular, the pre-trained model will be fine-tuned using the `emotion` dataset. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. ![emotion-widget.png](./imgs/emotion-widget.png)_**NOTE: You can run this demo in Sagemaker Studio, your local machine or Sagemaker Notebook Instances**_ Development Environment and Permissions Installation_*Note:* we only install the required libraries from Hugging Face and AWS. You also need PyTorch or TensorFlow if you don't have it installed already_<jupyter_code>!pip install "sagemaker>=2.140.0" "transformers==4.26.1" "datasets[s3]==2.10.1" --upgrade import sagemaker.huggingface<jupyter_output><empty_output><jupyter_text>Permissions _If you are going to use Sagemaker in a local environment, you need access to an IAM Role with the required permissions for Sagemaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._<jupyter_code>import sagemaker import boto3 sess = sagemaker.Session() # sagemaker session bucket -> used for uploading data, models and logs # sagemaker will automatically create this bucket if it does not exist sagemaker_session_bucket=None if sagemaker_session_bucket is None and sess is not None: # set to default bucket if a bucket name is not given sagemaker_session_bucket = sess.default_bucket() try: role = sagemaker.get_execution_role() except ValueError: iam = boto3.client('iam') role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn'] sess = sagemaker.Session(default_bucket=sagemaker_session_bucket) print(f"sagemaker role arn: {role}") print(f"sagemaker bucket: {sess.default_bucket()}") print(f"sagemaker session region: {sess.boto_region_name}")<jupyter_output><empty_output><jupyter_text>PreprocessingWe are using the `datasets` library to download and preprocess the `emotion` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. The [emotion](https://github.com/dair-ai/emotion_dataset) dataset consists of 16000 training examples, 2000 validation examples, and 2000 testing examples.
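As a quick sanity check (not part of the original notebook), the split sizes quoted above can be confirmed locally before anything is uploaded to S3:

```python
# Verify the emotion dataset split sizes mentioned above.
from datasets import load_dataset

emotion = load_dataset("emotion")
print({split: len(ds) for split, ds in emotion.items()})
# Expected (roughly): {'train': 16000, 'validation': 2000, 'test': 2000}
```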
Tokenization<jupyter_code>from datasets import load_dataset from transformers import AutoTokenizer # tokenizer used in preprocessing tokenizer_name = 'distilbert-base-uncased' # dataset used dataset_name = 'emotion' # s3 key prefix for the data s3_prefix = 'samples/datasets/emotion' # download tokenizer tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) # tokenizer helper function def tokenize(batch): return tokenizer(batch['text'], padding='max_length', truncation=True) # load dataset train_dataset, test_dataset = load_dataset(dataset_name, split=['train', 'test']) # tokenize dataset train_dataset = train_dataset.map(tokenize, batched=True) test_dataset = test_dataset.map(tokenize, batched=True) # set format for pytorch train_dataset = train_dataset.rename_column("label", "labels") train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels']) test_dataset = test_dataset.rename_column("label", "labels") test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])<jupyter_output>Downloading: 3.62kB [00:00, 629kB/s] Downloading: 3.28kB [00:00, 998kB/s] Using custom data configuration default Reusing dataset emotion (/Users/philipp/.cache/huggingface/datasets/emotion/default/0.0.0/348f63ca8e27b3713b6c04d723efe6d824a56fb3d1449794716c0f0296072705) 100%|██████████| 2/2 [00:00<00:00, 99.32it/s] 100%|██████████| 16/16 [00:02<00:00, 7.47ba/s] 100%|██████████| 2/2 [00:00<00:00, 6.47ba/s]<jupyter_text>Uploading data to `sagemaker_session_bucket`After we have processed the datasets, we are going to use the new `FileSystem` [integration](https://huggingface.co/docs/datasets/filesystems.html) to upload our dataset to S3.<jupyter_code># save train_dataset to s3 training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train' train_dataset.save_to_disk(training_input_path) # save test_dataset to s3 test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test' test_dataset.save_to_disk(test_input_path)<jupyter_output><empty_output><jupyter_text>Creating an Estimator and starting a training job List of supported models: https://huggingface.co/models?library=pytorch,transformers&sort=downloads Setting up `push_to_hub` for our model: the `train.py` script implements `push_to_hub` using the `Trainer` and `TrainingArguments`. To push our model to the [Hub](https://huggingface.co/models) we need to define the `push_to_hub` hyperparameter, set it to `True`, and provide our [Hugging Face Token](https://hf.co/settings/token). Additionally, we can configure the repository name and saving strategy using `hub_model_id` and `hub_strategy`. You can find documentation for those parameters [here](https://huggingface.co/transformers/main_classes/trainer.html). We are going to provide our HF Token securely, without exposing it to the public, using `notebook_login` from the `huggingface_hub` SDK. But be careful: your token will still be visible inside the logs of the training job.
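Before continuing with the note below about token visibility, here is a hedged sketch of how these `hub_*` hyperparameters are typically consumed inside `train.py`; the actual script may differ, and the output directory and token value are placeholders/assumptions:

```python
# Sketch only: how push-to-Hub settings usually map onto TrainingArguments
# inside the training script. The path and token below are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/opt/ml/model",          # SageMaker model directory (assumption)
    num_train_epochs=1,
    per_device_train_batch_size=32,
    learning_rate=3e-5,
    fp16=True,
    push_to_hub=True,                    # mirrors the `push_to_hub` hyperparameter
    hub_model_id="sagemaker-distilbert-emotion",
    hub_strategy="every_save",
    hub_token="<hf-token-passed-as-hyperparameter>",
)
```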
If you run `huggingface_estimator.fit(...,wait=True)` you will see the token in the logs. A better way of providing your `HF_TOKEN` to your training jobs would be using [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html). You can also directly find your token at [https://hf.co/settings/token](https://hf.co/settings/token).<jupyter_code>from huggingface_hub import notebook_login notebook_login() from sagemaker.huggingface import HuggingFace from huggingface_hub import HfFolder import time # hyperparameters, which are passed into the training job hyperparameters={'epochs': 1,                                    # number of training epochs 'train_batch_size': 32,                      # batch size for training 'eval_batch_size': 64,                       # batch size for evaluation 'learning_rate': 3e-5,                       # learning rate used during training 'model_id':'distilbert-base-uncased',        # pre-trained model 'fp16': True,                                # Whether to use 16-bit (mixed) precision training 'push_to_hub': True,                         # Defines if we want to push the model to the hub 'hub_model_id': 'sagemaker-distilbert-emotion', # The model id of the model to push to the hub 'hub_strategy': 'every_save',                # The strategy to use when pushing the model to the hub 'hub_token': HfFolder.get_token()            # HuggingFace token to have permission to push } # define Training Job Name job_name = f'push-to-hub-sample-{time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())}' # create the Estimator huggingface_estimator = HuggingFace( entry_point          = 'train.py',        # fine-tuning script used in training job source_dir           = './scripts',       # directory where fine-tuning script is stored instance_type        = 'ml.p3.2xlarge',   # instance type used for the training job instance_count       = 1,                 # the number of instances used for training base_job_name        = job_name,          # the name of the training job role                 = role,              # IAM role used in training job to access AWS resources, e.g. S3 transformers_version = '4.26',            # the transformers version used in the training job pytorch_version      = '1.13',            # the pytorch version used in the training job py_version           = 'py39',            # the python version used in the training job hyperparameters      = hyperparameters,   # the hyperparameters used for running the training job ) # define a data input dictionary with our uploaded s3 uris data = { 'train': training_input_path, 'test': test_input_path } # starting the train job with our uploaded datasets as input # setting wait to False to not expose the HF Token huggingface_estimator.fit(data, wait=False) # adding waiter to see when training is done waiter = huggingface_estimator.sagemaker_session.sagemaker_client.get_waiter('training_job_completed_or_stopped') waiter.wait(TrainingJobName=huggingface_estimator.latest_training_job.name)<jupyter_output><empty_output><jupyter_text>Accessing the model on hf.co/modelsWe can access the model on [hf.co/models](https://hf.co/models) using the `hub_model_id` and our username.<jupyter_code>from huggingface_hub import HfApi whoami = HfApi().whoami() username = whoami['name'] print(f"https://huggingface.co/{username}/{hyperparameters['hub_model_id']}")<jupyter_output>https://huggingface.co/philschmid/sagemaker-distilbert-emotion<jupyter_text>Deploying the model from Hugging Face to a SageMaker EndpointTo deploy our model to Amazon SageMaker we can create a `HuggingFaceModel` and provide the Hub configuration (`HF_MODEL_ID` & `HF_TASK`) to deploy it.
Alternatively, we can use the `huggingface_estimator` to deploy our model from S3 with `huggingface_estimator.deploy()`.<jupyter_code>from sagemaker.huggingface import HuggingFaceModel import sagemaker role = sagemaker.get_execution_role() # Hub Model configuration. https://huggingface.co/models hub = { 'HF_MODEL_ID':f"{username}/{hyperparameters['hub_model_id']}", 'HF_TASK':'text-classification' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( transformers_version='4.26', pytorch_version='1.13', py_version='py39', env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.m5.xlarge' # ec2 instance type )<jupyter_output><empty_output><jupyter_text>Then, we use the returned predictor object to call the endpoint.<jupyter_code>sentiment_input= {"inputs": "Winter is coming and it will be dark soon."} predictor.predict(sentiment_input)<jupyter_output><empty_output><jupyter_text>Finally, we delete the inference endpoint.<jupyter_code>predictor.delete_model() predictor.delete_endpoint()<jupyter_output><empty_output>
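One optional addition (not in the original notebook): if the kernel is restarted after deployment, you can reattach to the running endpoint by name instead of redeploying. The endpoint name below is a placeholder:

```python
# Reattach to an already-running SageMaker endpoint; the name is a placeholder.
from sagemaker.huggingface import HuggingFacePredictor

predictor = HuggingFacePredictor(endpoint_name="<your-endpoint-name>")
print(predictor.predict({"inputs": "Winter is coming and it will be dark soon."}))
```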
notebooks/sagemaker/14_train_and_push_to_hub/sagemaker-notebook.ipynb/0
{ "file_path": "notebooks/sagemaker/14_train_and_push_to_hub/sagemaker-notebook.ipynb", "repo_id": "notebooks", "token_count": 3941 }
294
from transformers import DonutProcessor, VisionEncoderDecoderModel import torch device = "cuda" if torch.cuda.is_available() else "cpu" def model_fn(model_dir): # Load our model from Hugging Face processor = DonutProcessor.from_pretrained(model_dir) model = VisionEncoderDecoderModel.from_pretrained(model_dir) # Move model to GPU model.to(device) return model, processor def predict_fn(data, model_and_processor): # unpack model and tokenizer model, processor = model_and_processor image = data.get("inputs") pixel_values = processor.feature_extractor(image, return_tensors="pt").pixel_values task_prompt = "<s>" # start of sequence token for decoder since we are not having a user prompt decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids # run inference outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, early_stopping=True, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, num_beams=1, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) # process output prediction = processor.batch_decode(outputs.sequences)[0] prediction = processor.token2json(prediction) return prediction
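A rough way to validate the handler above before packaging it for a SageMaker endpoint is to call `model_fn` and `predict_fn` directly with a sample document image. This is a hedged local smoke test, not part of the deployed code; the model directory and image path are placeholders:

```python
# Local smoke test for the inference handler above (placeholder paths).
from PIL import Image

model_and_processor = model_fn("path/to/local/donut-model-dir")
image = Image.open("sample_document.png").convert("RGB")
prediction = predict_fn({"inputs": image}, model_and_processor)
print(prediction)
```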
notebooks/sagemaker/26_document_ai_donut/scripts/inference.py/0
{ "file_path": "notebooks/sagemaker/26_document_ai_donut/scripts/inference.py", "repo_id": "notebooks", "token_count": 571 }
295
import argparse import shutil from pathlib import Path from typing import Dict, List import yaml SUBPACKAGE_TOC_INSERT_INDEX = 2 parser = argparse.ArgumentParser( description="Script to combine doc builds from subpackages with base doc build of Optimum. " "Assumes all subpackage doc builds are present in the root of the `optimum` repo." ) parser.add_argument( "--subpackages", nargs="+", help="Subpackages to integrate docs with Optimum. Use hardware partner names like `habana`, `graphcore`, or `intel`", ) parser.add_argument("--version", type=str, default="main", help="The version of the Optimum docs") def rename_subpackage_toc(subpackage: str, toc: Dict): """ Extends table of contents sections with the subpackage name as the parent folder. Args: subpackage (str): subpackage name. toc (Dict): table of contents. """ for item in toc: for file in item["sections"]: if "local" in file: file["local"] = f"{subpackage}/" + file["local"] else: # if "local" is not in file, it means we have a subsection, hence the recursive call rename_subpackage_toc(subpackage, [file]) def rename_copy_subpackage_html_paths(subpackage: str, subpackage_path: Path, optimum_path: Path, version: str): """ Copy and rename the files from the given subpackage's documentation to Optimum's documentation. In Optimum's documentation, the subpackage files are organized as follows: optimum_doc ├──────── subpackage_1 │ ├────── folder_1 │ │ └───── ... │ ├────── ... │ └────── folder_x │ └───── ... ├──────── ... ├──────── subpackage_n │ ├────── folder_1 │ │ └───── ... │ ├────── ... │ └────── folder_y │ └───── ... └──────── usual_optimum_doc Args: subpackage (str): subpackage name subpackage_path (Path): path to the subpackage's documentation optimum_path (Path): path to Optimum's documentation version (str): Optimum's version """ subpackage_html_paths = list(subpackage_path.rglob("*.html")) # The language folder is the 4th folder in the subpackage HTML paths language_folder_level = 3 for html_path in subpackage_html_paths: language_folder = html_path.parts[language_folder_level] # Build the relative path from the language folder relative_path_from_language_folder = Path(*html_path.parts[language_folder_level + 1 :]) # New path in Optimum's doc new_path_in_optimum = Path( f"{optimum_path}/optimum/{version}/{language_folder}/{subpackage}/{relative_path_from_language_folder}" ) # Build the parent folders if necessary new_path_in_optimum.parent.mkdir(parents=True, exist_ok=True) shutil.copyfile(html_path, new_path_in_optimum) def add_neuron_doc(base_toc: List): """ Extends the table of content with a section about Optimum Neuron. Args: base_toc (List): table of content for the doc of Optimum. """ # Update optimum table of contents base_toc.insert( SUBPACKAGE_TOC_INSERT_INDEX, { "sections": [ { # Ideally this should directly point at https://huggingface.co/docs/optimum-neuron/index # Current hacky solution is to have a redirection in _redirects.yml "local": "docs/optimum-neuron/index", "title": "🤗 Optimum Neuron", } ], "title": "AWS Trainium/Inferentia", "isExpanded": False, }, ) def add_tpu_doc(base_toc: List): """ Extends the table of content with a section about Optimum TPU. Args: base_toc (List): table of content for the doc of Optimum. 
""" # Update optimum table of contents base_toc.insert( SUBPACKAGE_TOC_INSERT_INDEX, { "sections": [ { # Ideally this should directly point at https://huggingface.co/docs/optimum-tpu/index # Current hacky solution is to have a redirection in _redirects.yml "local": "docs/optimum-tpu/index", "title": "🤗 Optimum-TPU", } ], "title": "Google TPUs", "isExpanded": False, }, ) def main(): args = parser.parse_args() optimum_path = Path("optimum-doc-build") # Load optimum table of contents base_toc_path = next(optimum_path.rglob("_toctree.yml")) with open(base_toc_path, "r") as f: base_toc = yaml.safe_load(f) # Copy and rename all files from subpackages' docs to Optimum doc for subpackage in args.subpackages[::-1]: if subpackage == "neuron": # Neuron has its own doc so it is managed differently add_neuron_doc(base_toc) elif subpackage == "tpu": # Optimum TPU has its own doc so it is managed differently add_tpu_doc(base_toc) elif subpackage == "nvidia": # At the moment, Optimum Nvidia's doc is the README of the GitHub repo # It is linked to in optimum/docs/source/nvidia_overview.mdx continue else: subpackage_path = Path(f"{subpackage}-doc-build") # The doc of Furiosa will be missing for PRs if subpackage == "furiosa" and not subpackage_path.is_dir(): continue # Copy all HTML files from subpackage into optimum rename_copy_subpackage_html_paths( subpackage, subpackage_path, optimum_path, args.version, ) # Load subpackage table of contents subpackage_toc_path = next(subpackage_path.rglob("_toctree.yml")) with open(subpackage_toc_path, "r") as f: subpackage_toc = yaml.safe_load(f) # Extend table of contents sections with the subpackage name as the parent folder rename_subpackage_toc(subpackage, subpackage_toc) # Just keep the name of the partner in the TOC title if subpackage == "amd": subpackage_toc[0]["title"] = subpackage_toc[0]["title"].split("Optimum-")[-1] else: subpackage_toc[0]["title"] = subpackage_toc[0]["title"].split("Optimum ")[-1] if subpackage != "graphcore": # Update optimum table of contents base_toc.insert(SUBPACKAGE_TOC_INSERT_INDEX, subpackage_toc[0]) # Write final table of contents with open(base_toc_path, "w") as f: yaml.safe_dump(base_toc, f, allow_unicode=True) if __name__ == "__main__": main()
optimum/docs/combine_docs.py/0
{ "file_path": "optimum/docs/combine_docs.py", "repo_id": "optimum", "token_count": 3219 }
296
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Quickstart At its core, 🤗 Optimum uses _configuration objects_ to define parameters for optimization on different accelerators. These objects are then used to instantiate dedicated _optimizers_, _quantizers_, and _pruners_. Before applying quantization or optimization, we first need to export our model to the ONNX format. ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english" >>> save_directory = "tmp/onnx/" >>> # Load a model from transformers and export it to ONNX >>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True) >>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) >>> # Save the onnx model and tokenizer >>> ort_model.save_pretrained(save_directory) >>> tokenizer.save_pretrained(save_directory) # doctest: +IGNORE_RESULT ``` Let's see now how we can apply dynamic quantization with ONNX Runtime: ```python >>> from optimum.onnxruntime.configuration import AutoQuantizationConfig >>> from optimum.onnxruntime import ORTQuantizer >>> # Define the quantization methodology >>> qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False) >>> quantizer = ORTQuantizer.from_pretrained(ort_model) >>> # Apply dynamic quantization on the model >>> quantizer.quantize(save_dir=save_directory, quantization_config=qconfig) # doctest: +IGNORE_RESULT ``` In this example, we've quantized a model from the Hugging Face Hub, but it could also be a path to a local model directory. The result from applying the `quantize()` method is a `model_quantized.onnx` file that can be used to run inference. Here's an example of how to load an ONNX Runtime model and generate predictions with it: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import pipeline, AutoTokenizer >>> model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model_quantized.onnx") >>> tokenizer = AutoTokenizer.from_pretrained(save_directory) >>> cls_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer) >>> results = cls_pipeline("I love burritos!") ``` Similarly, you can apply static quantization by simply setting `is_static` to `True` when instantiating the `QuantizationConfig` object: ```python >>> qconfig = AutoQuantizationConfig.arm64(is_static=True, per_channel=False) ``` Static quantization relies on feeding batches of data through the model to estimate the activation quantization parameters ahead of inference time. To support this, 🤗 Optimum allows you to provide a _calibration dataset_. The calibration dataset can be a simple `Dataset` object from the 🤗 Datasets library, or any dataset that's hosted on the Hugging Face Hub. 
For this example, we'll pick the [`sst2`](https://huggingface.co/datasets/glue/viewer/sst2/test) dataset that the model was originally trained on: ```python >>> from functools import partial >>> from optimum.onnxruntime.configuration import AutoCalibrationConfig # Define the processing function to apply to each example after loading the dataset >>> def preprocess_fn(ex, tokenizer): ... return tokenizer(ex["sentence"]) >>> # Create the calibration dataset >>> calibration_dataset = quantizer.get_calibration_dataset( ... "glue", ... dataset_config_name="sst2", ... preprocess_function=partial(preprocess_fn, tokenizer=tokenizer), ... num_samples=50, ... dataset_split="train", ... ) >>> # Create the calibration configuration containing the parameters related to calibration. >>> calibration_config = AutoCalibrationConfig.minmax(calibration_dataset) >>> # Perform the calibration step: computes the activations quantization ranges >>> ranges = quantizer.fit( ... dataset=calibration_dataset, ... calibration_config=calibration_config, ... operators_to_quantize=qconfig.operators_to_quantize, ... ) >>> # Apply static quantization on the model >>> quantizer.quantize( # doctest: +IGNORE_RESULT ... save_dir=save_directory, ... calibration_tensors_range=ranges, ... quantization_config=qconfig, ... ) ``` As a final example, let's take a look at applying _graph optimizations_ techniques such as operator fusion and constant folding. As before, we load a configuration object, but this time by setting the optimization level instead of the quantization approach: ```python >>> from optimum.onnxruntime.configuration import OptimizationConfig >>> # Here the optimization level is selected to be 1, enabling basic optimizations such as redundant node eliminations and constant folding. Higher optimization level will result in a hardware dependent optimized graph. >>> optimization_config = OptimizationConfig(optimization_level=1) ``` Next, we load an _optimizer_ to apply these optimisations to our model: ```python >>> from optimum.onnxruntime import ORTOptimizer >>> optimizer = ORTOptimizer.from_pretrained(ort_model) >>> # Optimize the model >>> optimizer.optimize(save_dir=save_directory, optimization_config=optimization_config) # doctest: +IGNORE_RESULT ``` And that's it - the model is now optimized and ready for inference! As you can see, the process is similar in each case: 1. Define the optimization / quantization strategies via an `OptimizationConfig` / `QuantizationConfig` object 2. Instantiate a `ORTQuantizer` or `ORTOptimizer` class 3. Apply the `quantize()` or `optimize()` method 4. Run inference Check out the [`examples`](https://github.com/huggingface/optimum/tree/main/examples) directory for more sophisticated usage. Happy optimising 🤗!
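As a short follow-up sketch (not part of the original guide): once `optimize()` has run, the optimized graph can be loaded back in the same way as the quantized model earlier. The `ORTOptimizer` typically saves its result as `model_optimized.onnx`; that file name is an assumption here, so check the contents of `save_directory` if it differs:

```python
# Load the graph-optimized model for inference (file name assumed).
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

optimized_model = ORTModelForSequenceClassification.from_pretrained(
    save_directory, file_name="model_optimized.onnx"
)
tokenizer = AutoTokenizer.from_pretrained(save_directory)
onnx_clf = pipeline("text-classification", model=optimized_model, tokenizer=tokenizer)
print(onnx_clf("I love burritos!"))
```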
optimum/docs/source/onnxruntime/quickstart.mdx/0
{ "file_path": "optimum/docs/source/onnxruntime/quickstart.mdx", "repo_id": "optimum", "token_count": 1758 }
297
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Multiple choice The script [`run_swag.py`](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/quantization/multiple-choice/run_swag.py) allows us to apply different quantization approaches (such as dynamic and static quantization) using the [ONNX Runtime](https://github.com/microsoft/onnxruntime) quantization tool for multiple choice tasks. The following example applies post-training dynamic quantization on a BERT fine-tuned on the SWAG dataset. ```bash python run_swag.py \ --model_name_or_path ehdwns1516/bert-base-uncased_SWAG \ --quantization_approach dynamic \ --do_eval \ --output_dir /tmp/quantized_bert_swag ``` In order to apply dynamic or static quantization, `quantization_approach` must be set to respectively `dynamic` or `static`.
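For readers who prefer to stay in Python, here is a hedged, rough programmatic counterpart of the dynamic-quantization command above; the script itself wraps similar Optimum APIs internally. The model id and save directory are taken from the example, while the `avx2` CPU target is an assumption:

```python
# Rough programmatic counterpart of the CLI example above (sketch only).
from optimum.onnxruntime import ORTModelForMultipleChoice, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

onnx_model = ORTModelForMultipleChoice.from_pretrained(
    "ehdwns1516/bert-base-uncased_SWAG", export=True
)
quantizer = ORTQuantizer.from_pretrained(onnx_model)
qconfig = AutoQuantizationConfig.avx2(is_static=False, per_channel=False)  # dynamic quantization
quantizer.quantize(save_dir="/tmp/quantized_bert_swag", quantization_config=qconfig)
```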
optimum/examples/onnxruntime/quantization/multiple-choice/README.md/0
{ "file_path": "optimum/examples/onnxruntime/quantization/multiple-choice/README.md", "repo_id": "optimum", "token_count": 402 }
298
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> ## Summarization By running the script [`run_summarization.py`](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/summarization/run_summarization.py), you will be able to leverage the [`ONNX Runtime`](https://github.com/microsoft/onnxruntime) accelerator to fine-tune and evaluate models from the [HuggingFace hub](https://huggingface.co/models) on summarization tasks. ### Supported models Theoretically, all sequence-to-sequence models with [ONNXConfig](https://github.com/huggingface/transformers/blob/main/src/transformers/onnx/features.py) support in Transformers should work; here are the models that the Optimum team has tested and validated. * `Bart` * `T5` `run_summarization.py` is a lightweight example of how to download and preprocess a dataset from the 🤗 Datasets library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it. __The following example applies the acceleration features powered by ONNX Runtime.__ ### ONNX Runtime Training The following example fine-tunes a T5-small model on the CNN/DailyMail dataset. ```bash torchrun --nproc_per_node=NUM_GPUS_YOU_HAVE run_summarization.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --do_train \ --do_eval \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir /tmp/ort_summarization_t5/ \ --overwrite_output_dir \ --predict_with_generate ``` __Note__ > *To enable ONNX Runtime training, your devices need to be equipped with GPU. Install the dependencies either with our prepared* *[Dockerfiles](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/docker/) or follow the instructions* *in [`torch_ort`](https://github.com/pytorch/ort/blob/main/torch_ort/docker/README.md).* > *The inference will use PyTorch by default; if you want to use the ONNX Runtime backend instead, add the flag `--inference_with_ort`.* ---
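Under the hood, `run_summarization.py` builds on Optimum's ONNX Runtime trainer classes. The simplified sketch below shows that core pattern; it is not a substitute for the script (which also handles data collation, metrics, and generation settings), and the prepared `tokenized_train_dataset` is an assumed, already-preprocessed dataset:

```python
# Hedged sketch of the ORT training pattern used by the example script.
from optimum.onnxruntime import ORTSeq2SeqTrainer, ORTSeq2SeqTrainingArguments
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

training_args = ORTSeq2SeqTrainingArguments(
    output_dir="/tmp/ort_summarization_t5",
    per_device_train_batch_size=4,
    predict_with_generate=True,
)

trainer = ORTSeq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train_dataset,  # assumption: preprocessed as in the script
    tokenizer=tokenizer,
)
trainer.train()
```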
optimum/examples/onnxruntime/training/summarization/README.md/0
{ "file_path": "optimum/examples/onnxruntime/training/summarization/README.md", "repo_id": "optimum", "token_count": 832 }
299