diffusers-bot committed
Commit a6689a9
1 Parent(s): 5489bbb

Upload folder using huggingface_hub

Files changed (2)
  1. main/README.md +181 -205
  2. main/lpw_stable_diffusion_xl.py +1 -1
main/README.md CHANGED
@@ -27,7 +27,7 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
27
  | Multilingual Stable Diffusion | Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | - | [Juan Carlos Piñeros](https://github.com/juancopi81) |
28
  | GlueGen Stable Diffusion | Stable Diffusion Pipeline that supports prompts in different languages using GlueGen adapter. | [GlueGen Stable Diffusion](#gluegen-stable-diffusion-pipeline) | - | [Phạm Hồng Vinh](https://github.com/rootonchair) |
29
  | Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
30
- | Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) |
+ | Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#text-based-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) |
31
  | Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - | [Stuti R.](https://github.com/kingstut) |
32
  | K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
33
  | Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
@@ -40,7 +40,7 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
40
  | CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
41
  | TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
42
  | EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | - | [Joqsan Azocar](https://github.com/Joqsan) |
43
- | Stable Diffusion RePaint | Stable Diffusion pipeline using [RePaint](https://arxiv.org/abs/2201.09865) for inpainting. | [Stable Diffusion RePaint](#stable-diffusion-repaint) | - | [Markus Pobitzer](https://github.com/Markus-Pobitzer) |
44
  | TensorRT Stable Diffusion Image to Image Pipeline | Accelerates the Stable Diffusion Image2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Image to Image Pipeline](#tensorrt-image2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
45
  | Stable Diffusion IPEX Pipeline | Accelerate Stable Diffusion inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion on IPEX](#stable-diffusion-on-ipex) | - | [Yingjie Han](https://github.com/yingjie-han/) |
46
  | CLIP Guided Images Mixing Stable Diffusion Pipeline | Combine images using standard diffusion models. | [CLIP Guided Images Mixing Using Stable Diffusion](#clip-guided-images-mixing-with-stable-diffusion) | - | [Karachev Denis](https://github.com/TheDenk) |
@@ -192,10 +192,9 @@ prompt = "wooden boat"
192
  init_image = load_image("https://raw.githubusercontent.com/Picsart-AI-Research/HD-Painter/main/__assets__/samples/images/2.jpg")
193
  mask_image = load_image("https://raw.githubusercontent.com/Picsart-AI-Research/HD-Painter/main/__assets__/samples/masks/2.png")
194
 
195
- image = pipe (prompt, init_image, mask_image, use_rasg = True, use_painta = True, generator=torch.manual_seed(12345)).images[0]
196
 
197
  make_image_grid([init_image, mask_image, image], rows=1, cols=3)
198
-
199
  ```
200
 
201
  ### Marigold Depth Estimation
@@ -223,7 +222,7 @@ pipe = DiffusionPipeline.from_pretrained(
223
 
224
  # (New) LCM version (faster speed)
225
  pipe = DiffusionPipeline.from_pretrained(
226
- "prs-eth/marigold-lcm-v1-0",
227
  custom_pipeline="marigold_depth_estimation"
228
  # torch_dtype=torch.float16, # (optional) Run with half-precision (16-bit float).
229
  # variant="fp16", # (optional) Use with `torch_dtype=torch.float16`, to directly load fp16 checkpoint
@@ -366,7 +365,6 @@ guided_pipeline = DiffusionPipeline.from_pretrained(
366
  custom_pipeline="clip_guided_stable_diffusion",
367
  clip_model=clip_model,
368
  feature_extractor=feature_extractor,
369
-
370
  torch_dtype=torch.float16,
371
  )
372
  guided_pipeline.enable_attention_slicing()
@@ -394,7 +392,7 @@ for i, img in enumerate(images):
394
  ```
395
 
396
  The `images` list contains PIL images that can be saved locally or displayed directly in a Google Colab.
397
- Generated images tend to be of higher quality than those generated natively with Stable Diffusion. For example, the above script generates the following images:
398
 
399
  ![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg).
400
 
@@ -468,11 +466,9 @@ pipe.enable_attention_slicing()
468
 
469
 
470
  ### Text-to-Image
471
-
472
  images = pipe.text2img("An astronaut riding a horse").images
473
 
474
  ### Image-to-Image
475
-
476
  init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
477
 
478
  prompt = "A fantasy landscape, trending on artstation"
@@ -480,7 +476,6 @@ prompt = "A fantasy landscape, trending on artstation"
480
  images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
481
 
482
  ### Inpainting
483
-
484
  img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
485
  mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
486
  init_image = download_image(img_url).resize((512, 512))
@@ -497,7 +492,7 @@ As shown above this one pipeline can run all both "text-to-image", "image-to-ima
497
  Features of this custom pipeline:
498
 
499
  - Input a prompt without the 77 token length limit.
500
- - Includes text2img, img2img, and inpainting pipelines.
501
  - Emphasize/weigh part of your prompt with parentheses like so: `a baby deer with (big eyes)`
502
  - De-emphasize part of your prompt like so: `a [baby] deer with big eyes`
503
  - Precisely weigh part of your prompt like so: `a baby deer with (big eyes:1.3)`
@@ -511,7 +506,7 @@ Prompt weighting equivalents:
511
 
512
  You can run this custom pipeline like so:
513
 
514
- #### pytorch
515
 
516
  ```python
517
  from diffusers import DiffusionPipeline
@@ -520,16 +515,14 @@ import torch
520
  pipe = DiffusionPipeline.from_pretrained(
521
  'hakurei/waifu-diffusion',
522
  custom_pipeline="lpw_stable_diffusion",
523
-
524
  torch_dtype=torch.float16
525
  )
526
- pipe=pipe.to("cuda")
527
 
528
  prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
529
  neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
530
 
531
- pipe.text2img(prompt, negative_prompt=neg_prompt, width=512,height=512,max_embeddings_multiples=3).images[0]
532
-
533
  ```
534
 
535
  #### onnxruntime
@@ -548,11 +541,10 @@ pipe = DiffusionPipeline.from_pretrained(
548
  prompt = "a photo of an astronaut riding a horse on mars, best quality"
549
  neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
550
 
551
- pipe.text2img(prompt,negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
552
-
553
  ```
554
 
555
- If you see the warning `Token indices sequence length is longer than the specified maximum sequence length for this model (*** > 77). Running this sequence through the model will result in indexing errors`, do not worry: it is expected, since this pipeline handles prompts longer than 77 tokens.
556
 
557
  ### Speech to Image
558
 
@@ -587,7 +579,6 @@ diffuser_pipeline = DiffusionPipeline.from_pretrained(
587
  custom_pipeline="speech_to_image_diffusion",
588
  speech_model=model,
589
  speech_processor=processor,
590
-
591
  torch_dtype=torch.float16,
592
  )
593
 
@@ -647,7 +638,6 @@ import torch
647
  pipe = DiffusionPipeline.from_pretrained(
648
  "CompVis/stable-diffusion-v1-4",
649
  custom_pipeline="wildcard_stable_diffusion",
650
-
651
  torch_dtype=torch.float16,
652
  )
653
  prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
@@ -707,7 +697,6 @@ for i in range(args.num_images):
707
  images.append(th.from_numpy(np.array(image)).permute(2, 0, 1) / 255.)
708
  grid = tvu.make_grid(th.stack(images, dim=0), nrow=4, padding=0)
709
  tvu.save_image(grid, f'{prompt}_{args.weights}' + '.png')
710
-
711
  ```
712
 
713
  ### Imagic Stable Diffusion
@@ -721,13 +710,14 @@ from io import BytesIO
721
  import torch
722
  import os
723
  from diffusers import DiffusionPipeline, DDIMScheduler
 
724
  has_cuda = torch.cuda.is_available()
725
  device = torch.device('cpu' if not has_cuda else 'cuda')
726
  pipe = DiffusionPipeline.from_pretrained(
727
  "CompVis/stable-diffusion-v1-4",
728
- safety_checker=None,
729
  custom_pipeline="imagic_stable_diffusion",
730
- scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
731
  ).to(device)
732
  generator = torch.Generator("cuda").manual_seed(0)
733
  seed = 0
@@ -837,7 +827,7 @@ image.save('./seed_resize/seed_resize_{w}_{h}_image_compare.png'.format(w=width,
837
 
838
  ### Multilingual Stable Diffusion Pipeline
839
 
840
- The following code can generate images from text in different languages using the pre-trained [mBART-50 many-to-one multilingual machine translation model](https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt) and Stable Diffusion.
841
 
842
  ```python
843
  from PIL import Image
@@ -881,7 +871,6 @@ diffuser_pipeline = DiffusionPipeline.from_pretrained(
881
  detection_pipeline=language_detection_pipeline,
882
  translation_model=trans_model,
883
  translation_tokenizer=trans_tokenizer,
884
-
885
  torch_dtype=torch.float16,
886
  )
887
 
@@ -905,9 +894,9 @@ This example produces the following images:
905
 
906
  ### GlueGen Stable Diffusion Pipeline
907
 
908
- GlueGen is a minimal adapter that allows alignment between any encoder (a text encoder for a different language, multilingual RoBERTa, AudioCLIP) and the CLIP text encoder used in the standard Stable Diffusion model. This method enables easy language adaptation of existing English Stable Diffusion checkpoints without requiring an image captioning dataset or long training hours.
909
 
910
- Make sure you download `gluenet_French_clip_overnorm_over3_noln.ckpt` for French (pre-trained weights for Chinese, Italian, Japanese, and Spanish are also available, or you can train your own) from [GlueGen's official repo](https://github.com/salesforce/GlueGen/tree/main).
911
 
912
  ```python
913
  from PIL import Image
@@ -974,7 +963,6 @@ mask_image = PIL.Image.open(mask_path).convert("RGB").resize((512, 512))
974
  pipe = DiffusionPipeline.from_pretrained(
975
  "runwayml/stable-diffusion-inpainting",
976
  custom_pipeline="img2img_inpainting",
977
-
978
  torch_dtype=torch.float16
979
  )
980
  pipe = pipe.to("cuda")
@@ -1019,13 +1007,13 @@ image = pipe(image=image, text=text, prompt=prompt).images[0]
1019
 
1020
  ### Bit Diffusion
1021
 
1022
- Based on <https://arxiv.org/abs/2208.04202>, this is used for diffusion on discrete data - e.g., discrete image data or DNA sequence data. An unconditional discrete image can be generated like this:
1023
 
1024
  ```python
1025
  from diffusers import DiffusionPipeline
 
1026
  pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="bit_diffusion")
1027
  image = pipe().images[0]
1028
-
1029
  ```
1030
 
1031
  ### Stable Diffusion with K Diffusion
@@ -1091,37 +1079,36 @@ image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
1091
 
1092
  ### Checkpoint Merger Pipeline
1093
 
1094
- Based on the AUTOMATIC1111/webui approach to checkpoint merging. This is a custom pipeline that merges up to 3 pretrained model checkpoints as long as they are in the Hugging Face model_index.json format.
1095
 
1096
- The checkpoint merging is currently memory intensive, as it modifies the weights of a DiffusionPipeline object in place. Expect at least 13GB of RAM usage on Kaggle GPU kernels, and
1097
- on Colab you might run out of the 12GB of memory even while merging two checkpoints.
1098
 
1099
  Usage:-
1100
 
1101
  ```python
1102
  from diffusers import DiffusionPipeline
1103
 
1104
- #Return a CheckpointMergerPipeline class that allows you to merge checkpoints.
1105
- #The checkpoint passed here is ignored. But still pass one of the checkpoints you plan to
1106
- #merge for convenience
1107
  pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger")
1108
 
1109
- #There are multiple possible scenarios:
1110
- #The pipeline with the merged checkpoints is returned in all the scenarios
1111
 
1112
- #Compatible checkpoints a.k.a matched model_index.json files. Ignores the meta attributes in model_index.json during comparison.( attrs with _ as prefix )
1113
- merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4","CompVis/stable-diffusion-v1-2"], interp = "sigmoid", alpha = 0.4)
1114
 
1115
- #Incompatible checkpoints in model_index.json but merge might be possible. Use force = True to ignore model_index.json compatibility
1116
- merged_pipe_1 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion"], force = True, interp = "sigmoid", alpha = 0.4)
1117
 
1118
- #Three checkpoint merging. Only "add_difference" method actually works on all three checkpoints. Using any other options will ignore the 3rd checkpoint.
1119
- merged_pipe_2 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion","prompthero/openjourney"], force = True, interp = "add_difference", alpha = 0.4)
1120
 
1121
  prompt = "An astronaut riding a horse on Mars"
1122
 
1123
  image = merged_pipe(prompt).images[0]
1124
-
1125
  ```
1126
 
1127
  Some examples along with the merge details:
@@ -1132,7 +1119,7 @@ Some examples along with the merge details:
1132
 
1133
  2. "hakurei/waifu-diffusion" + "prompthero/openjourney" ; Inverse Sigmoid interpolation; alpha = 0.8
1134
 
1135
- ![Stable plus Waifu Sigmoid 0.8](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/waifu_openjourney_inv_sig_0.8.png)
1136
 
1137
  3. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" + "prompthero/openjourney"; Add Difference interpolation; alpha = 0.5
1138
 
@@ -1197,16 +1184,16 @@ from PIL import Image
1197
  pipe = DiffusionPipeline.from_pretrained(
1198
  "CompVis/stable-diffusion-v1-4",
1199
  custom_pipeline="magic_mix",
1200
- scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
1201
  ).to('cuda')
1202
 
1203
  img = Image.open('phone.jpg')
1204
  mix_img = pipe(
1205
  img,
1206
- prompt = 'bed',
1207
- kmin = 0.3,
1208
- kmax = 0.5,
1209
- mix_factor = 0.5,
1210
  )
1211
  mix_img.save('phone_bed_mix.jpg')
1212
  ```
@@ -1227,8 +1214,8 @@ For more example generations check out this [demo notebook](https://github.com/d
1227
 
1228
  ### Stable UnCLIP
1229
 
1230
- UnCLIPPipeline("kakaobrain/karlo-v1-alpha") provides a prior model that can generate a CLIP image embedding from text.
1231
- StableDiffusionImageVariationPipeline("lambdalabs/sd-image-variations-diffusers") provides a decoder model that can generate images from a CLIP image embedding.
1232
 
1233
  ```python
1234
  import torch
@@ -1269,7 +1256,7 @@ image.save("./shiba-inu.jpg")
1269
  print(pipeline.decoder_pipe.__class__)
1270
  # <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_image_variation.StableDiffusionImageVariationPipeline'>
1271
 
1272
- # this pipeline only uses the prior module in "kakaobrain/karlo-v1-alpha"
1273
  # It is used to convert clip text embedding to clip image embedding.
1274
  print(pipeline)
1275
  # StableUnCLIPPipeline {
@@ -1329,10 +1316,10 @@ pipe.to(device)
1329
 
1330
  start_prompt = "A photograph of an adult lion"
1331
  end_prompt = "A photograph of a lion cub"
1332
- # For best results, keep the prompts close in length to each other. Of course, feel free to try differing lengths.
1333
  generator = torch.Generator(device=device).manual_seed(42)
1334
 
1335
- output = pipe(start_prompt, end_prompt, steps = 6, generator = generator, enable_sequential_cpu_offload=False)
1336
 
1337
  for i,image in enumerate(output.images):
1338
  image.save('result%s.jpg' % i)
@@ -1367,10 +1354,10 @@ pipe = DiffusionPipeline.from_pretrained(
1367
  pipe.to(device)
1368
 
1369
  images = [Image.open('./starry_night.jpg'), Image.open('./flowers.jpg')]
1370
- #For best results keep the prompts close in length to each other. Of course, feel free to try out with differing lengths.
1371
  generator = torch.Generator(device=device).manual_seed(42)
1372
 
1373
- output = pipe(image = images ,steps = 6, generator = generator)
1374
 
1375
  for i,image in enumerate(output.images):
1376
  image.save('starry_to_flowers_%s.jpg' % i)
@@ -1392,7 +1379,7 @@ The resulting images in order:-
1392
 
1393
  ### DDIM Noise Comparative Analysis Pipeline
1394
 
1395
- #### **Research question: What visual concepts do the diffusion models learn from each noise level during training?**
1396
 
1397
  The [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227) paper proposed an approach to answer the above question, which is their second contribution.
1398
  The approach consists of the following steps:
@@ -1409,7 +1396,7 @@ import torch
1409
  from PIL import Image
1410
  import numpy as np
1411
 
1412
- image_path = "path/to/your/image" # images from CelebA-HQ might be better
1413
  image_pil = Image.open(image_path)
1414
  image_name = image_path.split("/")[-1].split(".")[0]
1415
 
@@ -1448,6 +1435,7 @@ import torch
1448
  from diffusers import DiffusionPipeline
1449
  from PIL import Image
1450
  from transformers import CLIPFeatureExtractor, CLIPModel
 
1451
  feature_extractor = CLIPFeatureExtractor.from_pretrained(
1452
  "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
1453
  )
@@ -1622,6 +1610,7 @@ import requests
1622
  import torch
1623
  from io import BytesIO
1624
  from diffusers import StableDiffusionPipeline, RePaintScheduler
 
1625
  def download_image(url):
1626
  response = requests.get(url)
1627
  return PIL.Image.open(BytesIO(response.content)).convert("RGB")
@@ -1679,7 +1668,7 @@ image.save('tensorrt_img2img_new_zealand_hills.png')
1679
  ```
1680
 
1681
  ### Stable Diffusion BoxDiff
1682
- BoxDiff is a training-free method for controlled generation with bounding box coordinates. It should work with any Stable Diffusion model. Below is an example with `stable-diffusion-2-1-base`.
1683
  ```py
1684
  import torch
1685
  from PIL import Image, ImageDraw
@@ -1839,13 +1828,13 @@ Output Image
1839
 
1840
  ### Stable Diffusion on IPEX
1841
 
1842
- This diffusion pipeline aims to accelerate the inference of Stable Diffusion on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
1843
 
1844
  To use this pipeline, you need to:
1845
 
1846
  1. Install [IPEX](https://github.com/intel/intel-extension-for-pytorch)
1847
 
1848
- **Note:** For each PyTorch release, there is a corresponding IPEX release. Here is the mapping relationship. It is recommended to install PyTorch/IPEX 2.0 to get the best performance.
1849
 
1850
  |PyTorch Version|IPEX Version|
1851
  |--|--|
@@ -1864,26 +1853,26 @@ python -m pip install intel_extension_for_pytorch
1864
  python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
1865
  ```
1866
 
1867
- 2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
1868
 
1869
  **Note:** The values of `height` and `width` used with `prepare_for_ipex()` should be the same as those used at pipeline inference.
1870
 
1871
  ```python
1872
  pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_ipex")
1873
  # For Float32
1874
- pipe.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512) #value of image height/width should be consistent with the pipeline inference
1875
  # For BFloat16
1876
- pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512) #value of image height/width should be consistent with the pipeline inference
1877
  ```
1878
 
1879
  Then you can use the ipex pipeline in a similar way to the default stable diffusion pipeline.
1880
 
1881
  ```python
1882
  # For Float32
1883
- image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()'
1884
  # For BFloat16
1885
  with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
1886
- image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()'
1887
  ```
1888
 
1889
  The following code compares the performance of the original stable diffusion pipeline with the ipex-optimized pipeline.
@@ -1901,7 +1890,7 @@ def elapsed_time(pipeline, nb_pass=3, num_inference_steps=20):
1901
  # warmup
1902
  for _ in range(2):
1903
  images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images
1904
- #time evaluation
1905
  start = time.time()
1906
  for _ in range(nb_pass):
1907
  pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512)
@@ -1922,7 +1911,7 @@ with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
1922
  latency = elapsed_time(pipe)
1923
  print("Latency of StableDiffusionIPEXPipeline--bf16", latency)
1924
  latency = elapsed_time(pipe2)
1925
- print("Latency of StableDiffusionPipeline--bf16",latency)
1926
 
1927
  ############## fp32 inference performance ###############
1928
 
@@ -1937,13 +1926,12 @@ pipe4 = StableDiffusionPipeline.from_pretrained(model_id)
1937
  latency = elapsed_time(pipe3)
1938
  print("Latency of StableDiffusionIPEXPipeline--fp32", latency)
1939
  latency = elapsed_time(pipe4)
1940
- print("Latency of StableDiffusionPipeline--fp32",latency)
1941
-
1942
  ```
1943
 
1944
  ### Stable Diffusion XL on IPEX
1945
 
1946
- This diffusion pipeline aims to accelerate the inference of Stable Diffusion XL on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
1947
 
1948
  To use this pipeline, you need to:
1949
 
@@ -1968,7 +1956,7 @@ python -m pip install intel_extension_for_pytorch
1968
  python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
1969
  ```
1970
 
1971
- 2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
1972
 
1973
  **Note:** The values of `height` and `width` used during preparation with `prepare_for_ipex()` should be the same when running inference with the prepared pipeline.
1974
 
@@ -2011,7 +1999,7 @@ def elapsed_time(pipeline, nb_pass=3, num_inference_steps=1):
2011
  # warmup
2012
  for _ in range(2):
2013
  images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=0.0).images
2014
- #time evaluation
2015
  start = time.time()
2016
  for _ in range(nb_pass):
2017
  pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=0.0)
@@ -2047,8 +2035,7 @@ pipe4 = StableDiffusionXLPipeline.from_pretrained(model_id, low_cpu_mem_usage=Tr
2047
  latency = elapsed_time(pipe3, num_inference_steps=steps)
2048
  print("Latency of StableDiffusionXLPipelineIpex--fp32", latency, "s for total", steps, "steps")
2049
  latency = elapsed_time(pipe4, num_inference_steps=steps)
2050
- print("Latency of StableDiffusionXLPipeline--fp32",latency, "s for total", steps, "steps")
2051
-
2052
  ```
2053
 
2054
  ### CLIP Guided Images Mixing With Stable Diffusion
@@ -2061,7 +2048,7 @@ This approach is using (optional) CoCa model to avoid writing image description.
2061
 
2062
  ### Stable Diffusion XL Long Weighted Prompt Pipeline
2063
 
2064
- This SDXL pipeline supports unlimited-length prompts and negative prompts, compatible with the A1111 prompt weighting style.
2065
 
2066
  You can provide both `prompt` and `prompt_2`. If only one prompt is provided, `prompt_2` will be a copy of the provided `prompt`. Here is a sample code to use this pipeline.
2067
 
@@ -2089,31 +2076,31 @@ pipe.to("cuda")
2089
  t2i_images = pipe(
2090
  prompt=prompt,
2091
  negative_prompt=neg_prompt,
2092
- ).images # alternatively, you can call the .text2img() function
2093
 
2094
  # img2img
2095
- input_image = load_image("/path/to/local/image.png") # or URL to your input image
2096
  i2i_images = pipe.img2img(
2097
  prompt=prompt,
2098
  negative_prompt=neg_prompt,
2099
  image=input_image,
2100
- strength=0.8, # higher strength will result in more variation compared to original image
2101
  ).images
2102
 
2103
  # inpaint
2104
- input_mask = load_image("/path/to/local/mask.png") # or URL to your input inpainting mask
2105
  inpaint_images = pipe.inpaint(
2106
  prompt="photo of a cute (black) cat running on the grass" * 20,
2107
  negative_prompt=neg_prompt,
2108
  image=input_image,
2109
  mask=input_mask,
2110
- strength=0.6, # higher strength will result in more variation compared to original image
2111
  ).images
2112
 
2113
  pipe.to("cpu")
2114
  torch.cuda.empty_cache()
2115
 
2116
- from IPython.display import display # assuming you are using this code in a notebook
2117
  display(t2i_images[0])
2118
  display(i2i_images[0])
2119
  display(inpaint_images[0])
@@ -2153,9 +2140,9 @@ coca_model = open_clip.create_model('coca_ViT-L-14', pretrained='laion2B-s13B-b9
2153
  coca_model.dtype = torch.float16
2154
  coca_transform = open_clip.image_transform(
2155
  coca_model.visual.image_size,
2156
- is_train = False,
2157
- mean = getattr(coca_model.visual, 'image_mean', None),
2158
- std = getattr(coca_model.visual, 'image_std', None),
2159
  )
2160
  coca_tokenizer = SimpleTokenizer()
2161
 
@@ -2207,7 +2194,7 @@ This pipeline uses the Mixture. Refer to the [Mixture](https://arxiv.org/abs/230
2207
  ```python
2208
  from diffusers import LMSDiscreteScheduler, DiffusionPipeline
2209
 
2210
- # Create scheduler and model (similar to StableDiffusionPipeline)
2211
  scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
2212
  pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_tiling")
2213
  pipeline.to("cuda")
@@ -2248,7 +2235,6 @@ from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline
2248
  # Use the PNDMScheduler scheduler here instead
2249
  scheduler = PNDMScheduler.from_pretrained("stabilityai/stable-diffusion-2-inpainting", subfolder="scheduler")
2250
 
2251
-
2252
  pipe = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting",
2253
  custom_pipeline="stable_diffusion_tensorrt_inpaint",
2254
  variant='fp16',
@@ -2287,7 +2273,7 @@ from diffusers.pipelines.pipeline_utils import Image2ImageRegion, Text2ImageRegi
2287
  # Load and preprocess guide image
2288
  iic_image = preprocess_image(Image.open("input_image.png").convert("RGB"))
2289
 
2290
- # Create scheduler and model (similar to StableDiffusionPipeline)
2291
  scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
2292
  pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler).to("cuda:0", custom_pipeline="mixture_canvas")
2293
  pipeline.to("cuda")
@@ -2298,7 +2284,7 @@ output = pipeline(
2298
  canvas_width=352,
2299
  regions=[
2300
  Text2ImageRegion(0, 800, 0, 352, guidance_scale=8,
2301
- prompt=f"best quality, masterpiece, WLOP, sakimichan, art contest winner on pixiv, 8K, intricate details, wet effects, rain drops, ethereal, mysterious, futuristic, UHD, HDR, cinematic lighting, in a beautiful forest, rainy day, award winning, trending on artstation, beautiful confident cheerful young woman, wearing a futuristic sleeveless dress, ultra beautiful detailed eyes, hyper-detailed face, complex, perfect, model,  textured, chiaroscuro, professional make-up, realistic, figure in frame, "),
2302
  Image2ImageRegion(352-800, 352, 0, 352, reference_image=iic_image, strength=1.0),
2303
  ],
2304
  num_inference_steps=100,
@@ -2317,22 +2303,19 @@ It is a simple and minimalist diffusion model.
2317
  The following code shows how to use the IADB pipeline to generate images using a pretrained celebahq-256 model.
2318
 
2319
  ```python
2320
-
2321
  pipeline_iadb = DiffusionPipeline.from_pretrained("thomasc4/iadb-celebahq-256", custom_pipeline='iadb')
2322
 
2323
  pipeline_iadb = pipeline_iadb.to('cuda')
2324
 
2325
- output = pipeline_iadb(batch_size=4,num_inference_steps=128)
2326
  for i in range(len(output[0])):
2327
  plt.imshow(output[0][i])
2328
  plt.show()
2329
-
2330
  ```
2331
 
2332
  Sampling with the IADB formulation is easy, and can be done in a few lines (the pipeline already implements it):
2333
 
2334
  ```python
2335
-
2336
  def sample_iadb(model, x0, nb_step):
2337
  x_alpha = x0
2338
  for t in range(nb_step):
@@ -2343,13 +2326,11 @@ def sample_iadb(model, x0, nb_step):
2343
  x_alpha = x_alpha + (alpha_next-alpha)*d
2344
 
2345
  return x_alpha
2346
-
2347
  ```
2348
 
2349
  The training loop is also straightforward:
2350
 
2351
  ```python
2352
-
2353
  # Training loop
2354
  while True:
2355
  x0 = sample_noise()
@@ -2380,7 +2361,7 @@ import torch
2380
  from pipeline_zero1to3 import Zero1to3StableDiffusionPipeline
2381
  from diffusers.utils import load_image
2382
 
2383
- model_id = "kxic/zero123-165000" # zero123-105000, zero123-165000, zero123-xl
2384
 
2385
  pipe = Zero1to3StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
2386
 
@@ -2401,9 +2382,9 @@ query_pose3 = [-55.0, 90.0, 0.0]
2401
  # H, W = (256, 256) # H, W = (512, 512) # zero123 training is 256,256
2402
 
2403
  # for batch input
2404
- input_image1 = load_image("./demo/4_blackarm.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/4_blackarm.png")
2405
- input_image2 = load_image("./demo/8_motor.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/8_motor.png")
2406
- input_image3 = load_image("./demo/7_london.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/7_london.png")
2407
  input_images = [input_image1, input_image2, input_image3]
2408
  query_poses = [query_pose1, query_pose2, query_pose3]
2409
 
@@ -2434,7 +2415,6 @@ input_images = pre_images
2434
  images = pipe(input_imgs=input_images, prompt_imgs=input_images, poses=query_poses, height=H, width=W,
2435
  guidance_scale=3.0, num_images_per_prompt=num_images_per_prompt, num_inference_steps=50).images
2436
 
2437
-
2438
  # save imgs
2439
  log_dir = "logs"
2440
  os.makedirs(log_dir, exist_ok=True)
@@ -2444,12 +2424,11 @@ for obj in range(bs):
2444
  for idx in range(num_images_per_prompt):
2445
  images[i].save(os.path.join(log_dir,f"obj{obj}_{idx}.jpg"))
2446
  i += 1
2447
-
2448
  ```
2449
 
2450
  ### Stable Diffusion XL Reference
2451
 
2452
- This pipeline uses the Reference-only control technique. Refer to the [stable_diffusion_reference](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#stable-diffusion-reference) community pipeline.
2453
 
2454
  ```py
2455
  import torch
@@ -2457,6 +2436,7 @@ from PIL import Image
2457
  from diffusers.utils import load_image
2458
  from diffusers import DiffusionPipeline
2459
  from diffusers.schedulers import UniPCMultistepScheduler
 
2460
  input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
2461
 
2462
  # pipe = DiffusionPipeline.from_pretrained(
@@ -2529,7 +2509,7 @@ from diffusers import DiffusionPipeline
2529
  # load the pipeline
2530
  # make sure you're logged in with `huggingface-cli login`
2531
  model_id_or_path = "runwayml/stable-diffusion-v1-5"
2532
- #can also be used with dreamlike-art/dreamlike-photoreal-2.0
2533
  pipe = DiffusionPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16, custom_pipeline="pipeline_fabric").to("cuda")
2534
 
2535
  # let's specify a prompt
@@ -2560,7 +2540,7 @@ torch.manual_seed(0)
2560
  image = pipe(
2561
  prompt=prompt,
2562
  negative_prompt=negative_prompt,
2563
- liked = liked,
2564
  num_inference_steps=20,
2565
  ).images[0]
2566
 
@@ -2730,7 +2710,7 @@ pipe.to(torch_device="cuda", torch_dtype=torch.float32)
2730
  ```py
2731
  prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
2732
 
2733
- # Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
2734
  num_inference_steps = 4
2735
 
2736
  images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
@@ -2762,9 +2742,9 @@ prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
2762
 
2763
  input_image=Image.open("myimg.png")
2764
 
2765
- strength = 0.5  # strength=0 (no change), strength=1 (completely overwrite the image)
2766
 
2767
- # Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
2768
  num_inference_steps = 4
2769
 
2770
  images = pipe(prompt=prompt, image=input_image, strength=strength, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
@@ -2808,7 +2788,7 @@ images = pipe(
2808
  guidance_scale=8.0,
2809
  embedding_interpolation_type="lerp",
2810
  latent_interpolation_type="slerp",
2811
- process_batch_size=4, # Make it higher or lower based on your GPU memory
2812
  generator=torch.Generator(seed),
2813
  )
2814
 
@@ -2827,7 +2807,7 @@ Two checkpoints are available for use:
2827
  - [ldm3d-pano](https://huggingface.co/Intel/ldm3d-pano). This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used.
2828
  - [ldm3d-sr](https://huggingface.co/Intel/ldm3d-sr). This checkpoint enables the upscaling of RGB and depth images. Can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline pipeline.
2829
 
2830
- ```py
2831
  from PIL import Image
2832
  import os
2833
  import torch
@@ -2838,11 +2818,11 @@ from diffusers import StableDiffusionLDM3DPipeline, DiffusionPipeline
2838
  pipe_ldm3d = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
2839
  pipe_ldm3d.to("cuda")
2840
 
2841
- prompt =f"A picture of some lemons on a table"
2842
  output = pipe_ldm3d(prompt)
2843
  rgb_image, depth_image = output.rgb, output.depth
2844
- rgb_image[0].save(f"lemons_ldm3d_rgb.jpg")
2845
- depth_image[0].save(f"lemons_ldm3d_depth.png")
2846
 
2847
  # Upscale the previous output to a resolution of (1024, 1024)
2848
 
@@ -2850,19 +2830,19 @@ pipe_ldm3d_upscale = DiffusionPipeline.from_pretrained("Intel/ldm3d-sr", custom_
2850
 
2851
  pipe_ldm3d_upscale.to("cuda")
2852
 
2853
- low_res_img = Image.open(f"lemons_ldm3d_rgb.jpg").convert("RGB")
2854
- low_res_depth = Image.open(f"lemons_ldm3d_depth.png").convert("L")
2855
  outputs = pipe_ldm3d_upscale(prompt="high quality high resolution uhd 4k image", rgb=low_res_img, depth=low_res_depth, num_inference_steps=50, target_res=[1024, 1024])
2856
 
2857
- upscaled_rgb, upscaled_depth =outputs.rgb[0], outputs.depth[0]
2858
- upscaled_rgb.save(f"upscaled_lemons_rgb.png")
2859
- upscaled_depth.save(f"upscaled_lemons_depth.png")
2860
- ```
2861
 
2862
  ### ControlNet + T2I Adapter Pipeline
2863
 
2864
- This pipeline combines both ControlNet and T2IAdapter into a single pipeline, where the forward pass is executed once.
2865
- It receives `control_image` and `adapter_image`, as well as `controlnet_conditioning_scale` and `adapter_conditioning_scale`, for the ControlNet and Adapter modules, respectively. Whenever `adapter_conditioning_scale = 0` or `controlnet_conditioning_scale = 0`, it acts as a full ControlNet module or as a full T2IAdapter module, respectively.
2866
 
2867
  ```py
2868
  import cv2
@@ -2925,7 +2905,6 @@ images = pipe(
2925
  adapter_conditioning_scale=strength,
2926
  ).images
2927
  images[0].save("controlnet_and_adapter.png")
2928
-
2929
  ```
2930
 
2931
  ### ControlNet + T2I Adapter + Inpainting Pipeline
@@ -2996,12 +2975,11 @@ images = pipe(
2996
  strength=0.7,
2997
  ).images
2998
  images[0].save("controlnet_and_adapter_inpaint.png")
2999
-
3000
  ```
3001
 
3002
  ### Regional Prompting Pipeline
3003
 
3004
- This pipeline is a port of the [Regional Prompter extension](https://github.com/hako-mikan/sd-webui-regional-prompter) for [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to diffusers.
3005
  This code implements a pipeline for the Stable Diffusion model, enabling the division of the canvas into multiple regions, with different prompts applicable to each region. Users can specify regions in two ways: using `Cols` and `Rows` modes for grid-like divisions, or the `Prompt` mode for regions calculated based on prompts.
3006
 
3007
  ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/rp_pipeline1.png)
@@ -3012,6 +2990,7 @@ This code implements a pipeline for the Stable Diffusion model, enabling the div
3012
 
3013
  ```py
3014
  from examples.community.regional_prompting_stable_diffusion import RegionalPromptingStableDiffusionPipeline
 
3015
  pipe = RegionalPromptingStableDiffusionPipeline.from_single_file(model_path, vae=vae)
3016
 
3017
  rp_args = {
@@ -3019,7 +2998,7 @@ rp_args = {
3019
  "div": "1;1;1"
3020
  }
3021
 
3022
- prompt ="""
3023
  green hair twintail BREAK
3024
  red blouse BREAK
3025
  blue skirt
@@ -3029,12 +3008,12 @@ images = pipe(
3029
  prompt=prompt,
3030
  negative_prompt=negative_prompt,
3031
  guidance_scale=7.5,
3032
- height = 768,
3033
- width = 512,
3034
- num_inference_steps =20,
3035
- num_images_per_prompt = 1,
3036
- rp_args = rp_args
3037
- ).images
3038
 
3039
  time = time.strftime(r"%Y%m%d%H%M%S")
3040
  i = 1
@@ -3059,19 +3038,19 @@ blue skirt
3059
 
3060
  ### 2-Dimensional division
3061
 
3062
- The prompt consists of instructions separated by the term `BREAK` and is assigned to different regions of a two-dimensional space. The image is initially split in the main splitting direction, which in this case is rows, due to the presence of a single semicolon`;`, dividing the space into an upper and a lower section. Additional sub-splitting is then applied, indicated by commas. The upper row is split into ratios of `2:1:1`, while the lower row is split into a ratio of `4:6`. Rows themselves are split in a `1:2` ratio. According to the reference image, the blue sky is designated as the first region, green hair as the second, the bookshelf as the third, and so on, in a sequence based on their position from the top left. The terrarium is placed on the desk in the fourth region, and the orange dress and sofa are in the fifth region, conforming to their respective splits.
3063
 
3064
- ```
3065
  rp_args = {
3066
  "mode":"rows",
3067
  "div": "1,2,1,1;2,4,6"
3068
  }
3069
 
3070
- prompt ="""
3071
  blue sky BREAK
3072
  green hair BREAK
3073
  book shelf BREAK
3074
- terrarium on desk BREAK
3075
  orange dress and sofa
3076
  """
3077
  ```
@@ -3080,10 +3059,10 @@ orange dress and sofa
3080
 
3081
  ### Prompt Mode
3082
 
3083
- There are limitations to specifying regions in advance, because predefined regions can be a hindrance when designating complex shapes or dynamic compositions. With regions specified by the prompt, the regions are determined after image generation has begun. This makes it possible to accommodate such compositions and complex regions.
3084
  For further information, see [here](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/prompt_en.md).
3085
 
3086
- ### Syntax
3087
 
3088
  ```
3089
  baseprompt target1 target2 BREAK
@@ -3105,14 +3084,14 @@ is also effective.
3105
 
3106
  In this example, masks are calculated for the shirt, tie, and skirt, and color prompts are specified only for those regions.
3107
 
3108
- ```
3109
  rp_args = {
3110
- "mode":"prompt-ex",
3111
- "save_mask":True,
3112
  "th": "0.4,0.6,0.6",
3113
  }
3114
 
3115
- prompt ="""
3116
  a girl in street with shirt, tie, skirt BREAK
3117
  red, shirt BREAK
3118
  green, tie BREAK
@@ -3122,7 +3101,7 @@ blue , skirt
3122
 
3123
  ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/rp_pipeline3.png)
3124
 
3125
- ### Threshold
3126
 
3127
  This is the threshold used to determine the mask created by the prompt. It can be set as many times as there are masks, since the appropriate range varies widely depending on the target prompt. If multiple regions are used, enter the values separated by commas. For example, hair tends to be ambiguous and requires a small value, while face tends to be large and requires a small value. The values should be ordered following the BREAK sequence.
3128
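As a rough sketch of how this fits together (reusing the masked-region example above; the threshold values are illustrative, and the comma-separated `th` entries follow the order of the BREAK-separated region prompts):

```py
rp_args = {
    "mode": "prompt",
    # one threshold per masked region, ordered like the BREAK-separated prompts below
    "th": "0.4,0.6,0.6",
}

prompt = """
a girl in street with shirt, tie, skirt BREAK
red, shirt BREAK
green, tie BREAK
blue, skirt
"""

images = pipe(prompt=prompt, rp_args=rp_args).images
```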
 
@@ -3141,7 +3120,7 @@ The difference is that in Prompt, duplicate regions are added, whereas in Prompt
3141
 
3142
  ### Accuracy
3143
 
3144
- In the case of a 512 x 512 image, Attention mode reduces the region to about 8 x 8 pixels deep in the U-Net, so small regions get mixed up; Latent mode calculates at 64 x 64, so the region is exact.
3145
 
3146
  ```
3147
  girl hair twintail frills,ribbons, dress, face BREAK
@@ -3154,7 +3133,7 @@ When an image is generated, the generated mask is displayed. It is generated at
3154
 
3155
  ### Use common prompt
3156
 
3157
- You can attach the prompt text placed before ADDCOMM to all regional prompts by separating it first with ADDCOMM. This is useful when you want to include elements common to all regions. For example, when generating a picture of three people with different appearances, it is necessary to include the instruction 'three people' in all regions. It is also useful for inserting quality tags and similar terms. For example, if you write as follows:
3158
 
3159
  ```
3160
  best quality, 3persons in garden, ADDCOMM
@@ -3177,24 +3156,24 @@ Negative prompts are equally effective across all regions, but it is possible to
3177
 
3178
  ### Parameters
3179
 
3180
- To activate the Regional Prompter, it is necessary to enter settings in `rp_args`, which is a dictionary. The items that can be set are as follows.
3181
 
3182
  ### Input Parameters
3183
 
3184
  Parameters are specified through `rp_args` (a dictionary).
3185
 
3186
- ```
3187
  rp_args = {
3188
  "mode":"rows",
3189
  "div": "1;1;1"
3190
  }
3191
 
3192
- pipe(prompt =prompt, rp_args = rp_args)
3193
  ```
3194
 
3195
  ### Required Parameters
3196
 
3197
- - `mode`: Specifies the method for defining regions. Choose from `Cols`, `Rows`, `Prompt` or `Prompt-Ex`. This parameter is case-insensitive.
3198
  - `divide`: Used in `Cols` and `Rows` modes. Details on how to specify this are provided under the respective `Cols` and `Rows` sections.
3199
  - `th`: Used in `Prompt` mode. The method of specification is detailed under the `Prompt` section.
3200
 
@@ -3208,7 +3187,7 @@ The Pipeline supports `compel` syntax. Input prompts using the `compel` structur
3208
 
3209
  - Reference paper
3210
 
3211
- ```
3212
  @article{chung2022diffusion,
3213
  title={Diffusion posterior sampling for general noisy inverse problems},
3214
  author={Chung, Hyungjin and Kim, Jeongsol and Mccann, Michael T and Klasky, Marc L and Ye, Jong Chul},
@@ -3220,7 +3199,7 @@ The Pipeline supports `compel` syntax. Input prompts using the `compel` structur
3220
  - This pipeline allows zero-shot conditional sampling from the posterior distribution $p(x|y)$, given an observation $y$, an unconditional generative model $p(x)$, and a differentiable operator $y=f(x)$.
3221
 
3222
  - For example, $f(.)$ can be a downsampling operator; then $y$ is a downsampled image, and the pipeline becomes a super-resolution pipeline.
3223
- - To use this pipeline, you need to know your operator $f(.)$ and the corrupted image $y$, and pass them during the call. For example, as in the main function of dps_pipeline.py, you need to first define the Gaussian blurring operator $f(.)$. The operator should be a callable `nn.Module`, with all parameter gradients disabled:
3224
 
3225
  ```python
3226
  import torch.nn.functional as F
@@ -3250,7 +3229,7 @@ The Pipeline supports `compel` syntax. Input prompts using the `compel` structur
3250
  def weights_init(self):
3251
  if self.blur_type == "gaussian":
3252
  n = np.zeros((self.kernel_size, self.kernel_size))
3253
- n[self.kernel_size // 2,self.kernel_size // 2] = 1
3254
  k = scipy.ndimage.gaussian_filter(n, sigma=self.std)
3255
  k = torch.from_numpy(k)
3256
  self.k = k
@@ -3280,7 +3259,7 @@ The Pipeline supports `compel` syntax. Input prompts using the `compel` structur
3280
  self.conv.update_weights(self.kernel.type(torch.float32))
3281
 
3282
  for param in self.parameters():
3283
- param.requires_grad=False
3284
 
3285
  def forward(self, data, **kwargs):
3286
  return self.conv(data)
@@ -3317,7 +3296,7 @@ The Pipeline supports `compel` syntax. Input prompts using the `compel` structur
3317
  - ![sample](https://github.com/tongdaxu/Images/assets/22267548/4d2a1216-08d1-4aeb-9ce3-7a2d87561d65)
3318
  - Gaussian blurred image:
3319
  - ![ddpm_generated_image](https://github.com/tongdaxu/Images/assets/22267548/65076258-344b-4ed8-b704-a04edaade8ae)
3320
- - You can download those images to run the example on your own.
3321
 
3322
  - Next, we need to define the loss function used for diffusion posterior sampling. For most cases, the RMSE is fine:
3323
 
@@ -3326,7 +3305,7 @@ The Pipeline supports `compel` syntax. Input prompts using the `compel` structur
3326
  return torch.sqrt(torch.sum((yhat-y)**2))
3327
  ```
3328
 
3329
- - Next, as with any other diffusion model, we need the score estimator and the scheduler. As we are working with $256 \times 256$ face images, we use ddpm-celebahq-256:
3330
 
3331
  ```python
3332
  # set up scheduler
@@ -3343,20 +3322,20 @@ The Pipeline supports `compel` syntax. Input prompts using the `compel` structur
3343
  # finally, the pipeline
3344
  dpspipe = DPSPipeline(model, scheduler)
3345
  image = dpspipe(
3346
- measurement = measurement,
3347
- operator = operator,
3348
- loss_fn = RMSELoss,
3349
- zeta = 1.0,
3350
  ).images[0]
3351
  image.save("dps_generated_image.png")
3352
  ```
3353
 
3354
- - The `zeta` is a hyperparameter in the range $[0,1]$. It needs to be tuned for the best effect. By setting `zeta=1`, you should be able to obtain the reconstructed result:
3355
  - Reconstructed image:
3356
  - ![sample](https://github.com/tongdaxu/Images/assets/22267548/0ceb5575-d42e-4f0b-99c0-50e69c982209)
3357
 
3358
  - The reconstruction is perceptually similar to the source image, but different in details.
3359
- - In dps_pipeline.py, we also provide a super-resolution example, which should produce:
3360
  - Downsampled image:
3361
  - ![dps_mea](https://github.com/tongdaxu/Images/assets/22267548/ff6a33d6-26f0-42aa-88ce-f8a76ba45a13)
3362
  - Reconstructed image:
@@ -3368,9 +3347,8 @@ This pipeline combines AnimateDiff and ControlNet. Enjoy precise motion control
3368
 
3369
  ```py
3370
  import torch
3371
- from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter
3372
- from diffusers.pipelines import DiffusionPipeline
3373
- from diffusers.schedulers import DPMSolverMultistepScheduler
3374
  from PIL import Image
3375
 
3376
  motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
@@ -3385,7 +3363,8 @@ pipe = DiffusionPipeline.from_pretrained(
3385
  controlnet=controlnet,
3386
  vae=vae,
3387
  custom_pipeline="pipeline_animatediff_controlnet",
3388
- ).to(device="cuda", dtype=torch.float16)
 
3389
  pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
3390
  model_id, subfolder="scheduler", beta_schedule="linear", clip_sample=False, timestep_spacing="linspace", steps_offset=1
3391
  )
@@ -3406,7 +3385,6 @@ result = pipe(
3406
  num_inference_steps=20,
3407
  ).frames[0]
3408
 
3409
- from diffusers.utils import export_to_gif
3410
  export_to_gif(result.frames[0], "result.gif")
3411
  ```
3412
 
@@ -3431,9 +3409,8 @@ You can also use multiple controlnets at once!
3431
 
3432
  ```python
3433
  import torch
3434
- from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter
3435
- from diffusers.pipelines import DiffusionPipeline
3436
- from diffusers.schedulers import DPMSolverMultistepScheduler
3437
  from PIL import Image
3438
 
3439
  motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
@@ -3449,7 +3426,8 @@ pipe = DiffusionPipeline.from_pretrained(
3449
  controlnet=[controlnet1, controlnet2],
3450
  vae=vae,
3451
  custom_pipeline="pipeline_animatediff_controlnet",
3452
- ).to(device="cuda", dtype=torch.float16)
 
3453
  pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
3454
  model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1, beta_schedule="linear",
3455
  )
@@ -3496,7 +3474,6 @@ result = pipe(
3496
  num_inference_steps=20,
3497
  )
3498
 
3499
- from diffusers.utils import export_to_gif
3500
  export_to_gif(result.frames[0], "result.gif")
3501
  ```
3502
 
@@ -3625,7 +3602,6 @@ pipe.train_lora(prompt, image)
3625
  output = pipe(prompt, image, mask_image, source_points, target_points)
3626
  output_image = PIL.Image.fromarray(output)
3627
  output_image.save("./output.png")
3628
-
3629
  ```
3630
 
3631
  ### Instaflow Pipeline
@@ -3674,19 +3650,19 @@ This pipeline provides null-text inversion for editing real images. It enables n
3674
 
3675
  - Reference paper
3676
 
3677
- ```
- @article{hertz2022prompt,
3678
-   title={Prompt-to-prompt image editing with cross attention control},
3679
-   author={Hertz, Amir and Mokady, Ron and Tenenbaum, Jay and Aberman, Kfir and Pritch, Yael and Cohen-Or, Daniel},
3680
-   booktitle={arXiv preprint arXiv:2208.01626},
3681
-   year={2022}
- }

3682
  ```
3683
 
3684
  ```py
3685
- from diffusers.schedulers import DDIMScheduler
3686
  from examples.community.pipeline_null_text_inversion import NullTextPipeline
3687
  import torch
3688
 
3689
- # Load the pipeline
3690
  device = "cuda"
3691
  # Provide invert_prompt and the image for null-text optimization.
3692
  invert_prompt = "A lying cat"
@@ -3698,13 +3674,13 @@ prompt = "A lying cat"
3698
  # or different if editing.
3699
  prompt = "A lying dog"
3700
 
3701
- # Float32 is essential for good optimization
3702
  model_path = "runwayml/stable-diffusion-v1-5"
3703
  scheduler = DDIMScheduler(num_train_timesteps=1000, beta_start=0.00085, beta_end=0.0120, beta_schedule="scaled_linear")
3704
- pipeline = NullTextPipeline.from_pretrained(model_path, scheduler = scheduler, torch_dtype=torch.float32).to(device)
3705
 
3706
- #Saves the inverted_latent to save time
3707
- inverted_latent, uncond = pipeline.invert(input_image, invert_prompt, num_inner_steps=10, early_stop_epsilon= 1e-5, num_inference_steps = steps)
3708
  pipeline(prompt, uncond, inverted_latent, guidance_scale=7.5, num_inference_steps=steps).images[0].save(input_image+".output.jpg")
3709
  ```
3710
 
@@ -3761,7 +3737,7 @@ for frame in frames:
3761
  controlnet = ControlNetModel.from_pretrained(
3762
  "lllyasviel/sd-controlnet-canny").to('cuda')
3763
 
3764
- # You can use any fine-tuned SD model here
3765
  pipe = DiffusionPipeline.from_pretrained(
3766
  "runwayml/stable-diffusion-v1-5", controlnet=controlnet, custom_pipeline='rerender_a_video').to('cuda')
3767
 
@@ -3803,7 +3779,7 @@ This pipeline is the implementation of [Style Aligned Image Generation via Share
3803
  from typing import List
3804
 
3805
  import torch
3806
- from diffusers.pipelines.pipeline_utils import DiffusionPipeline
3807
  from PIL import Image
3808
 
3809
  model_id = "a-r-r-o-w/dreamshaper-xl-turbo"
@@ -3872,7 +3848,7 @@ output = pipe(
3872
  image=image,
3873
  prompt="A snail moving on the ground",
3874
  strength=0.8,
3875
- latent_interpolation_method="slerp", # can be lerp, slerp, or your own callback
3876
  )
3877
  frames = output.frames[0]
3878
  export_to_gif(frames, "animation.gif")
@@ -3882,11 +3858,10 @@ export_to_gif(frames, "animation.gif")
3882
 
3883
  IP Adapter FaceID is an experimental IP Adapter model that uses image embeddings generated by `insightface`, so no image encoder needs to be loaded.
3884
  You need to install `insightface` and all its requirements to use this model.
3885
- You must pass the image embedding tensor as `image_embeds` to the StableDiffusionPipeline instead of `ip_adapter_image`.
3886
  You can find more results [here](https://github.com/huggingface/diffusers/pull/6276).
3887
 
3888
  ```py
3889
- import diffusers
3890
  import torch
3891
  from diffusers.utils import load_image
3892
  import cv2
@@ -3916,7 +3891,7 @@ pipeline.load_ip_adapter_face_id("h94/IP-Adapter-FaceID", "ip-adapter-faceid_sd1
3916
  pipeline.to("cuda")
3917
 
3918
  generator = torch.Generator(device="cpu").manual_seed(42)
3919
- num_images=2
3920
 
3921
  image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png")
3922
 
@@ -3939,13 +3914,13 @@ for i in range(num_images):
3939
 
3940
  ### InstantID Pipeline
3941
 
3942
- InstantID is a new state-of-the-art tuning-free method to achieve ID-preserving generation with only a single image, supporting various downstream tasks. For any usage question, please refer to the [official implementation](https://github.com/InstantID/InstantID).
3943
 
3944
  ```py
3945
- # !pip install opencv-python transformers accelerate insightface
3946
  import diffusers
3947
  from diffusers.utils import load_image
3948
- from diffusers.models import ControlNetModel
3949
 
3950
  import cv2
3951
  import torch
@@ -3963,12 +3938,13 @@ app.prepare(ctx_id=0, det_size=(640, 640))
3963
  # prepare models under ./checkpoints
3964
  # https://huggingface.co/InstantX/InstantID
3965
  from huggingface_hub import hf_hub_download
 
3966
  hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/config.json", local_dir="./checkpoints")
3967
  hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/diffusion_pytorch_model.safetensors", local_dir="./checkpoints")
3968
  hf_hub_download(repo_id="InstantX/InstantID", filename="ip-adapter.bin", local_dir="./checkpoints")
3969
 
3970
- face_adapter = f'./checkpoints/ip-adapter.bin'
3971
- controlnet_path = f'./checkpoints/ControlNetModel'
3972
 
3973
  # load IdentityNet
3974
  controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
@@ -3979,7 +3955,7 @@ pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
3979
  controlnet=controlnet,
3980
  torch_dtype=torch.float16
3981
  )
3982
- pipe.cuda()
3983
 
3984
  # load adapter
3985
  pipe.load_ip_adapter_instantid(face_adapter)
@@ -4046,8 +4022,9 @@ import cv2
4046
  import torch
4047
  import numpy as np
4048
 
4049
- from diffusers import ControlNetModel,DDIMScheduler, DiffusionPipeline
4050
  import sys
 
4051
  gmflow_dir = "/path/to/gmflow"
4052
  sys.path.insert(0, gmflow_dir)
4053
 
@@ -4075,7 +4052,7 @@ def video_to_frame(video_path: str, interval: int):
4075
  input_video_path = 'https://github.com/williamyang1991/FRESCO/raw/main/data/car-turn.mp4'
4076
  output_video_path = 'car.gif'
4077
 
4078
- # You can use any fintuned SD here
4079
  model_path = 'SG161222/Realistic_Vision_V2.0'
4080
 
4081
  prompt = 'a red car turns in the winter'
@@ -4120,14 +4097,13 @@ output_frames = pipe(
4120
 
4121
  output_frames[0].save(output_video_path, save_all=True,
4122
  append_images=output_frames[1:], duration=100, loop=0)
4123
-
4124
  ```
4125
 
4126
  # Perturbed-Attention Guidance
4127
 
4128
  [Project](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) / [arXiv](https://arxiv.org/abs/2403.17377) / [GitHub](https://github.com/KU-CVLAB/Perturbed-Attention-Guidance)
4129
 
4130
- This implementation is based on [Diffusers](https://huggingface.co/docs/diffusers/index). StableDiffusionPAGPipeline is a modification of StableDiffusionPipeline to support Perturbed-Attention Guidance (PAG).
4131
 
4132
  ## Example Usage
4133
 
@@ -4147,14 +4123,14 @@ pipe = StableDiffusionPipeline.from_pretrained(
4147
  torch_dtype=torch.float16
4148
  )
4149
 
4150
- device="cuda"
4151
  pipe = pipe.to(device)
4152
 
4153
  pag_scale = 5.0
4154
  pag_applied_layers_index = ['m0']
4155
 
4156
  batch_size = 4
4157
- seed=10
4158
 
4159
  base_dir = "./results/"
4160
  grid_dir = base_dir + "/pag" + str(pag_scale) + "/"
@@ -4164,7 +4140,7 @@ if not os.path.exists(grid_dir):
4164
 
4165
  set_seed(seed)
4166
 
4167
- latent_input = randn_tensor(shape=(batch_size,4,64,64),generator=None, device=device, dtype=torch.float16)
4168
 
4169
  output_baseline = pipe(
4170
  "",
@@ -4196,6 +4172,6 @@ grid_image.save(grid_dir + "sample.png")
4196
 
4197
  ## PAG Parameters
4198
 
4199
- pag_scale : gudiance scale of PAG (ex: 5.0)
4200
 
4201
- pag_applied_layers_index : index of the layer to apply perturbation (ex: ['m0'])
 
27
  | Multilingual Stable Diffusion | Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | - | [Juan Carlos Piñeros](https://github.com/juancopi81) |
28
  | GlueGen Stable Diffusion | Stable Diffusion Pipeline that supports prompts in different languages using GlueGen adapter. | [GlueGen Stable Diffusion](#gluegen-stable-diffusion-pipeline) | - | [Phạm Hồng Vinh](https://github.com/rootonchair) |
29
  | Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
30
+ | Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#text-based-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) |
31
  | Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - | [Stuti R.](https://github.com/kingstut) |
32
  | K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
33
  | Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
 
40
  | CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
41
  | TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
42
  | EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | - | [Joqsan Azocar](https://github.com/Joqsan) |
43
+ | Stable Diffusion RePaint | Stable Diffusion pipeline using [RePaint](https://arxiv.org/abs/2201.09865) for inpainting. | [Stable Diffusion RePaint](#stable-diffusion-repaint) | - | [Markus Pobitzer](https://github.com/Markus-Pobitzer) |
44
  | TensorRT Stable Diffusion Image to Image Pipeline | Accelerates the Stable Diffusion Image2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Image to Image Pipeline](#tensorrt-image2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
45
  | Stable Diffusion IPEX Pipeline | Accelerate Stable Diffusion inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion on IPEX](#stable-diffusion-on-ipex) | - | [Yingjie Han](https://github.com/yingjie-han/) |
46
  | CLIP Guided Images Mixing Stable Diffusion Pipeline | Combine images using usual diffusion models. | [CLIP Guided Images Mixing Using Stable Diffusion](#clip-guided-images-mixing-with-stable-diffusion) | - | [Karachev Denis](https://github.com/TheDenk) |
 
192
  init_image = load_image("https://raw.githubusercontent.com/Picsart-AI-Research/HD-Painter/main/__assets__/samples/images/2.jpg")
193
  mask_image = load_image("https://raw.githubusercontent.com/Picsart-AI-Research/HD-Painter/main/__assets__/samples/masks/2.png")
194
 
195
+ image = pipe(prompt, init_image, mask_image, use_rasg=True, use_painta=True, generator=torch.manual_seed(12345)).images[0]
196
 
197
  make_image_grid([init_image, mask_image, image], rows=1, cols=3)
 
198
  ```
199
 
200
  ### Marigold Depth Estimation
 
222
 
223
  # (New) LCM version (faster speed)
224
  pipe = DiffusionPipeline.from_pretrained(
225
+ "prs-eth/marigold-depth-lcm-v1-0",
226
  custom_pipeline="marigold_depth_estimation"
227
  # torch_dtype=torch.float16, # (optional) Run with half-precision (16-bit float).
228
  # variant="fp16", # (optional) Use with `torch_dtype=torch.float16`, to directly load fp16 checkpoint
 
365
  custom_pipeline="clip_guided_stable_diffusion",
366
  clip_model=clip_model,
367
  feature_extractor=feature_extractor,
 
368
  torch_dtype=torch.float16,
369
  )
370
  guided_pipeline.enable_attention_slicing()
 
392
  ```
393
 
394
  The `images` list contains a list of PIL images that can be saved locally or displayed directly in Google Colab.
395
+ Generated images tend to be of higher quality than those from native Stable Diffusion. For example, the above script generates the following images:
396
 
397
  ![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg).
398
 
 
466
 
467
 
468
  ### Text-to-Image
 
469
  images = pipe.text2img("An astronaut riding a horse").images
470
 
471
  ### Image-to-Image
 
472
  init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
473
 
474
  prompt = "A fantasy landscape, trending on artstation"
 
476
  images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
477
 
478
  ### Inpainting
 
479
  img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
480
  mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
481
  init_image = download_image(img_url).resize((512, 512))
 
492
  Features of this custom pipeline:
493
 
494
  - Input a prompt without the 77 token length limit.
495
+ - Includes text2img, img2img, and inpainting pipelines.
496
  - Emphasize/weigh part of your prompt with parentheses like so: `a baby deer with (big eyes)`
497
  - De-emphasize part of your prompt like so: `a [baby] deer with big eyes`
498
  - Precisely weigh part of your prompt like so: `a baby deer with (big eyes:1.3)`
 
506
 
507
  You can run this custom pipeline like so:
508
 
509
+ #### PyTorch
510
 
511
  ```python
512
  from diffusers import DiffusionPipeline
 
515
  pipe = DiffusionPipeline.from_pretrained(
516
  'hakurei/waifu-diffusion',
517
  custom_pipeline="lpw_stable_diffusion",
 
518
  torch_dtype=torch.float16
519
  )
520
+ pipe = pipe.to("cuda")
521
 
522
  prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
523
  neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
524
 
525
+ pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
 
526
  ```
527
 
528
  #### onnxruntime
 
541
  prompt = "a photo of an astronaut riding a horse on mars, best quality"
542
  neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
543
 
544
+ pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
 
545
  ```
546
 
547
+ If you see the warning `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`, do not worry; it is expected, since this pipeline deliberately works with prompts longer than 77 tokens.
548
 
549
  ### Speech to Image
550
 
 
579
  custom_pipeline="speech_to_image_diffusion",
580
  speech_model=model,
581
  speech_processor=processor,
 
582
  torch_dtype=torch.float16,
583
  )
584
 
 
638
  pipe = DiffusionPipeline.from_pretrained(
639
  "CompVis/stable-diffusion-v1-4",
640
  custom_pipeline="wildcard_stable_diffusion",
 
641
  torch_dtype=torch.float16,
642
  )
643
  prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
 
697
  images.append(th.from_numpy(np.array(image)).permute(2, 0, 1) / 255.)
698
  grid = tvu.make_grid(th.stack(images, dim=0), nrow=4, padding=0)
699
  tvu.save_image(grid, f'{prompt}_{args.weights}' + '.png')
 
700
  ```
701
 
702
  ### Imagic Stable Diffusion
 
710
  import torch
711
  import os
712
  from diffusers import DiffusionPipeline, DDIMScheduler
713
+
714
  has_cuda = torch.cuda.is_available()
715
  device = torch.device('cpu' if not has_cuda else 'cuda')
716
  pipe = DiffusionPipeline.from_pretrained(
717
  "CompVis/stable-diffusion-v1-4",
718
+ safety_checker=None,
719
  custom_pipeline="imagic_stable_diffusion",
720
+ scheduler=DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
721
  ).to(device)
722
  generator = torch.Generator("cuda").manual_seed(0)
723
  seed = 0
 
827
 
828
  ### Multilingual Stable Diffusion Pipeline
829
 
830
+ The following code can generate images from text prompts in different languages using the pre-trained [mBART-50 many-to-one multilingual machine translation model](https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt) and Stable Diffusion.
831
 
832
  ```python
833
  from PIL import Image
 
871
  detection_pipeline=language_detection_pipeline,
872
  translation_model=trans_model,
873
  translation_tokenizer=trans_tokenizer,
 
874
  torch_dtype=torch.float16,
875
  )
876
 
 
894
 
895
  ### GlueGen Stable Diffusion Pipeline
896
 
897
+ GlueGen is a minimal adapter that aligns any encoder (a text encoder for a different language, multilingual RoBERTa, AudioCLIP) with the CLIP text encoder used in the standard Stable Diffusion model. This makes it easy to adapt existing English Stable Diffusion checkpoints to other languages, without needing an image captioning dataset or long training hours.
898
 
899
+ Make sure you have downloaded `gluenet_French_clip_overnorm_over3_noln.ckpt` for French from [GlueGen's official repo](https://github.com/salesforce/GlueGen/tree/main) (pre-trained weights are also available for Chinese, Italian, Japanese, and Spanish, or you can train your own).
900
 
901
  ```python
902
  from PIL import Image
 
963
  pipe = DiffusionPipeline.from_pretrained(
964
  "runwayml/stable-diffusion-inpainting",
965
  custom_pipeline="img2img_inpainting",
 
966
  torch_dtype=torch.float16
967
  )
968
  pipe = pipe.to("cuda")
 
1007
 
1008
  ### Bit Diffusion
1009
 
1010
+ Based on <https://arxiv.org/abs/2208.04202>, this pipeline performs diffusion on discrete data, e.g. discrete image data or DNA sequence data. An unconditional discrete image can be generated like this:
1011
 
1012
  ```python
1013
  from diffusers import DiffusionPipeline
1014
+
1015
  pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="bit_diffusion")
1016
  image = pipe().images[0]
 
1017
  ```
1018
 
1019
  ### Stable Diffusion with K Diffusion
 
1079
 
1080
  ### Checkpoint Merger Pipeline
1081
 
1082
+ Based on the checkpoint merging feature of AUTOMATIC1111/webui. This custom pipeline merges up to 3 pretrained model checkpoints, as long as they are in the Hugging Face `model_index.json` format.
1083
 
1084
+ Checkpoint merging is currently memory intensive, as it modifies the weights of a `DiffusionPipeline` object in place. Expect at least 13GB of RAM usage on Kaggle GPU kernels, and
1085
+ on Colab you might run out of the 12GB of memory even when merging just two checkpoints.
1086
 
1087
  Usage:-
1088
 
1089
  ```python
1090
  from diffusers import DiffusionPipeline
1091
 
1092
+ # Return a CheckpointMergerPipeline class that allows you to merge checkpoints.
1093
+ # The checkpoint passed here is ignored. But still pass one of the checkpoints you plan to
1094
+ # merge for convenience
1095
  pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger")
1096
 
1097
+ # There are multiple possible scenarios:
1098
+ # The pipeline with the merged checkpoints is returned in all the scenarios
1099
 
1100
+ # Compatible checkpoints, i.e. matching model_index.json files. Meta attributes in model_index.json (attrs prefixed with _) are ignored during comparison.
1101
+ merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4", "CompVis/stable-diffusion-v1-2"], interp="sigmoid", alpha=0.4)
1102
 
1103
+ # Incompatible checkpoints in model_index.json but merge might be possible. Use force=True to ignore model_index.json compatibility
1104
+ merged_pipe_1 = pipe.merge(["CompVis/stable-diffusion-v1-4", "hakurei/waifu-diffusion"], force=True, interp="sigmoid", alpha=0.4)
1105
 
1106
+ # Three checkpoint merging. Only "add_difference" method actually works on all three checkpoints. Using any other options will ignore the 3rd checkpoint.
1107
+ merged_pipe_2 = pipe.merge(["CompVis/stable-diffusion-v1-4", "hakurei/waifu-diffusion", "prompthero/openjourney"], force=True, interp="add_difference", alpha=0.4)
1108
 
1109
  prompt = "An astronaut riding a horse on Mars"
1110
 
1111
  image = merged_pipe(prompt).images[0]
 
1112
  ```
1113
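The merged result is a regular `DiffusionPipeline`, so if you want to reuse it without re-merging, you can persist it with the standard `save_pretrained`/`from_pretrained` API. A small optional sketch (the output directory name is just an example):

```python
# Persist the merged weights so the merge does not have to be recomputed next time
merged_pipe.save_pretrained("./merged-sd-checkpoint")

# ...and reload them later like any other checkpoint
reloaded_pipe = DiffusionPipeline.from_pretrained("./merged-sd-checkpoint")
```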
 
1114
  Some examples along with the merge details:
 
1119
 
1120
  2. "hakurei/waifu-diffusion" + "prompthero/openjourney" ; Inverse Sigmoid interpolation; alpha = 0.8
1121
 
1122
+ ![Waifu plus openjourney Sigmoid 0.8](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/waifu_openjourney_inv_sig_0.8.png)
1123
 
1124
  3. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" + "prompthero/openjourney"; Add Difference interpolation; alpha = 0.5
1125
 
 
1184
  pipe = DiffusionPipeline.from_pretrained(
1185
  "CompVis/stable-diffusion-v1-4",
1186
  custom_pipeline="magic_mix",
1187
+ scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
1188
  ).to('cuda')
1189
 
1190
  img = Image.open('phone.jpg')
1191
  mix_img = pipe(
1192
  img,
1193
+ prompt='bed',
1194
+ kmin=0.3,
1195
+ kmax=0.5,
1196
+ mix_factor=0.5,
1197
  )
1198
  mix_img.save('phone_bed_mix.jpg')
1199
  ```
 
1214
 
1215
  ### Stable UnCLIP
1216
 
1217
+ UnCLIPPipeline("kakaobrain/karlo-v1-alpha") provides a prior model that can generate clip image embedding from text.
1218
+ StableDiffusionImageVariationPipeline("lambdalabs/sd-image-variations-diffusers") provides a decoder model that can generate images from a CLIP image embedding.
1219
 
1220
  ```python
1221
  import torch
 
1256
  print(pipeline.decoder_pipe.__class__)
1257
  # <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_image_variation.StableDiffusionImageVariationPipeline'>
1258
 
1259
+ # this pipeline only uses prior module in "kakaobrain/karlo-v1-alpha"
1260
  # It is used to convert clip text embedding to clip image embedding.
1261
  print(pipeline)
1262
  # StableUnCLIPPipeline {
 
1316
 
1317
  start_prompt = "A photograph of an adult lion"
1318
  end_prompt = "A photograph of a lion cub"
1319
+ # For best results keep the prompts close in length to each other. Of course, feel free to try out with differing lengths.
1320
  generator = torch.Generator(device=device).manual_seed(42)
1321
 
1322
+ output = pipe(start_prompt, end_prompt, steps=6, generator=generator, enable_sequential_cpu_offload=False)
1323
 
1324
  for i,image in enumerate(output.images):
1325
  image.save('result%s.jpg' % i)
 
1354
  pipe.to(device)
1355
 
1356
  images = [Image.open('./starry_night.jpg'), Image.open('./flowers.jpg')]
1357
+ # For best results keep the prompts close in length to each other. Of course, feel free to try out with differing lengths.
1358
  generator = torch.Generator(device=device).manual_seed(42)
1359
 
1360
+ output = pipe(image=images, steps=6, generator=generator)
1361
 
1362
  for i,image in enumerate(output.images):
1363
  image.save('starry_to_flowers_%s.jpg' % i)
 
1379
 
1380
  ### DDIM Noise Comparative Analysis Pipeline
1381
 
1382
+ #### **Research question: What visual concepts do the diffusion models learn from each noise level during training?**
1383
 
1384
  The [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227) paper proposed an approach to answer the above question, which is their second contribution.
1385
  The approach consists of the following steps:
 
1396
  from PIL import Image
1397
  import numpy as np
1398
 
1399
+ image_path = "path/to/your/image" # images from CelebA-HQ might be better
1400
  image_pil = Image.open(image_path)
1401
  image_name = image_path.split("/")[-1].split(".")[0]
1402
 
 
1435
  from diffusers import DiffusionPipeline
1436
  from PIL import Image
1437
  from transformers import CLIPFeatureExtractor, CLIPModel
1438
+
1439
  feature_extractor = CLIPFeatureExtractor.from_pretrained(
1440
  "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
1441
  )
 
1610
  import torch
1611
  from io import BytesIO
1612
  from diffusers import StableDiffusionPipeline, RePaintScheduler
1613
+
1614
  def download_image(url):
1615
  response = requests.get(url)
1616
  return PIL.Image.open(BytesIO(response.content)).convert("RGB")
 
1668
  ```
1669
 
1670
  ### Stable Diffusion BoxDiff
1671
+ BoxDiff is a training-free method for controlled generation with bounding box coordinates. It should work with any Stable Diffusion model; the example below uses `stable-diffusion-2-1-base`.
1672
  ```py
1673
  import torch
1674
  from PIL import Image, ImageDraw
 
1828
 
1829
  ### Stable Diffusion on IPEX
1830
 
1831
+ This diffusion pipeline accelerates the inference of Stable Diffusion on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
1832
 
1833
  To use this pipeline, you need to:
1834
 
1835
  1. Install [IPEX](https://github.com/intel/intel-extension-for-pytorch)
1836
 
1837
+ **Note:** Each PyTorch release has a corresponding IPEX release; the mapping is shown below. It is recommended to install PyTorch/IPEX 2.0 to get the best performance.
1838
 
1839
  |PyTorch Version|IPEX Version|
1840
  |--|--|
 
1853
  python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
1854
  ```
1855
 
1856
+ 2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
1857
 
1858
  **Note:** The generated image height/width set in `prepare_for_ipex()` should be the same as the height/width used at pipeline inference.
1859
 
1860
  ```python
1861
  pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_ipex")
1862
  # For Float32
1863
+ pipe.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512) # value of image height/width should be consistent with the pipeline inference
1864
  # For BFloat16
1865
+ pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512) # value of image height/width should be consistent with the pipeline inference
1866
  ```
1867
 
1868
  Then you can use the ipex pipeline in a similar way to the default stable diffusion pipeline.
1869
 
1870
  ```python
1871
  # For Float32
1872
+ image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0] # value of image height/width should be consistent with 'prepare_for_ipex()'
1873
  # For BFloat16
1874
  with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
1875
+ image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0] # value of image height/width should be consistent with 'prepare_for_ipex()'
1876
  ```
1877
 
1878
  The following code compares the performance of the original stable diffusion pipeline with the ipex-optimized pipeline.
 
1890
  # warmup
1891
  for _ in range(2):
1892
  images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images
1893
+ # time evaluation
1894
  start = time.time()
1895
  for _ in range(nb_pass):
1896
  pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512)
 
1911
  latency = elapsed_time(pipe)
1912
  print("Latency of StableDiffusionIPEXPipeline--bf16", latency)
1913
  latency = elapsed_time(pipe2)
1914
+ print("Latency of StableDiffusionPipeline--bf16", latency)
1915
 
1916
  ############## fp32 inference performance ###############
1917
 
 
1926
  latency = elapsed_time(pipe3)
1927
  print("Latency of StableDiffusionIPEXPipeline--fp32", latency)
1928
  latency = elapsed_time(pipe4)
1929
+ print("Latency of StableDiffusionPipeline--fp32", latency)
 
1930
  ```
1931
 
1932
  ### Stable Diffusion XL on IPEX
1933
 
1934
+ This diffusion pipeline accelerates the inference of Stable Diffusion XL on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
1935
 
1936
  To use this pipeline, you need to:
1937
 
 
1956
  python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
1957
  ```
1958
 
1959
+ 2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
1960
 
1961
  **Note:** The values of `height` and `width` used during preparation with `prepare_for_ipex()` should be the same when running inference with the prepared pipeline.
1962
 
 
1999
  # warmup
2000
  for _ in range(2):
2001
  images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=0.0).images
2002
+ # time evaluation
2003
  start = time.time()
2004
  for _ in range(nb_pass):
2005
  pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=0.0)
 
2035
  latency = elapsed_time(pipe3, num_inference_steps=steps)
2036
  print("Latency of StableDiffusionXLPipelineIpex--fp32", latency, "s for total", steps, "steps")
2037
  latency = elapsed_time(pipe4, num_inference_steps=steps)
2038
+ print("Latency of StableDiffusionXLPipeline--fp32", latency, "s for total", steps, "steps")
 
2039
  ```
2040
 
2041
  ### CLIP Guided Images Mixing With Stable Diffusion
 
2048
 
2049
  ### Stable Diffusion XL Long Weighted Prompt Pipeline
2050
 
2051
+ This SDXL pipeline supports unlimited-length prompts and negative prompts, and is compatible with the A1111 prompt weighting style.
2052
 
2053
  You can provide both `prompt` and `prompt_2`. If only one prompt is provided, `prompt_2` will be a copy of the provided `prompt`. Here is a sample code to use this pipeline.
2054
 
 
2076
  t2i_images = pipe(
2077
  prompt=prompt,
2078
  negative_prompt=neg_prompt,
2079
+ ).images # alternatively, you can call the .text2img() function
2080
 
2081
  # img2img
2082
+ input_image = load_image("/path/to/local/image.png") # or URL to your input image
2083
  i2i_images = pipe.img2img(
2084
  prompt=prompt,
2085
  negative_prompt=neg_prompt,
2086
  image=input_image,
2087
+ strength=0.8, # higher strength will result in more variation compared to original image
2088
  ).images
2089
 
2090
  # inpaint
2091
+ input_mask = load_image("/path/to/local/mask.png") # or URL to your input inpainting mask
2092
  inpaint_images = pipe.inpaint(
2093
  prompt="photo of a cute (black) cat running on the grass" * 20,
2094
  negative_prompt=neg_prompt,
2095
  image=input_image,
2096
  mask=input_mask,
2097
+ strength=0.6, # higher strength will result in more variation compared to original image
2098
  ).images
2099
 
2100
  pipe.to("cpu")
2101
  torch.cuda.empty_cache()
2102
 
2103
+ from IPython.display import display # assuming you are using this code in a notebook
2104
  display(t2i_images[0])
2105
  display(i2i_images[0])
2106
  display(inpaint_images[0])
 
2140
  coca_model.dtype = torch.float16
2141
  coca_transform = open_clip.image_transform(
2142
  coca_model.visual.image_size,
2143
+ is_train=False,
2144
+ mean=getattr(coca_model.visual, 'image_mean', None),
2145
+ std=getattr(coca_model.visual, 'image_std', None),
2146
  )
2147
  coca_tokenizer = SimpleTokenizer()
2148
 
 
2194
  ```python
2195
  from diffusers import LMSDiscreteScheduler, DiffusionPipeline
2196
 
2197
+ # Create scheduler and model (similar to StableDiffusionPipeline)
2198
  scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
2199
  pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_tiling")
2200
  pipeline.to("cuda")
 
2235
  # Use the PNDMScheduler scheduler here instead
2236
  scheduler = PNDMScheduler.from_pretrained("stabilityai/stable-diffusion-2-inpainting", subfolder="scheduler")
2237
 
 
2238
  pipe = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting",
2239
  custom_pipeline="stable_diffusion_tensorrt_inpaint",
2240
  variant='fp16',
 
2273
  # Load and preprocess guide image
2274
  iic_image = preprocess_image(Image.open("input_image.png").convert("RGB"))
2275
 
2276
+ # Create scheduler and model (similar to StableDiffusionPipeline)
2277
  scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
2278
  pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler).to("cuda:0", custom_pipeline="mixture_canvas")
2279
  pipeline.to("cuda")
 
2284
  canvas_width=352,
2285
  regions=[
2286
  Text2ImageRegion(0, 800, 0, 352, guidance_scale=8,
2287
+ prompt=f"best quality, masterpiece, WLOP, sakimichan, art contest winner on pixiv, 8K, intricate details, wet effects, rain drops, ethereal, mysterious, futuristic, UHD, HDR, cinematic lighting, in a beautiful forest, rainy day, award winning, trending on artstation, beautiful confident cheerful young woman, wearing a futuristic sleeveless dress, ultra beautiful detailed eyes, hyper-detailed face, complex, perfect, model, textured, chiaroscuro, professional make-up, realistic, figure in frame, "),
2288
  Image2ImageRegion(352-800, 352, 0, 352, reference_image=iic_image, strength=1.0),
2289
  ],
2290
  num_inference_steps=100,
 
2303
  The following code shows how to use the IADB pipeline to generate images using a pretrained celebahq-256 model.
2304
 
2305
  ```python
 
2306
  pipeline_iadb = DiffusionPipeline.from_pretrained("thomasc4/iadb-celebahq-256", custom_pipeline='iadb')
2307
 
2308
  pipeline_iadb = pipeline_iadb.to('cuda')
2309
 
2310
+ output = pipeline_iadb(batch_size=4, num_inference_steps=128)
2311
  for i in range(len(output[0])):
2312
  plt.imshow(output[0][i])
2313
  plt.show()
 
2314
  ```
2315
 
2316
  Sampling with the IADB formulation is easy, and can be done in a few lines (the pipeline already implements it):
2317
 
2318
  ```python
 
2319
  def sample_iadb(model, x0, nb_step):
2320
  x_alpha = x0
2321
  for t in range(nb_step):
 
2326
  x_alpha = x_alpha + (alpha_next-alpha)*d
2327
 
2328
  return x_alpha
 
2329
  ```
2330
 
2331
  The training loop is also straightforward:
2332
 
2333
  ```python
 
2334
  # Training loop
2335
  while True:
2336
  x0 = sample_noise()
 
2361
  from pipeline_zero1to3 import Zero1to3StableDiffusionPipeline
2362
  from diffusers.utils import load_image
2363
 
2364
+ model_id = "kxic/zero123-165000" # zero123-105000, zero123-165000, zero123-xl
2365
 
2366
  pipe = Zero1to3StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
2367
 
 
2382
  # H, W = (256, 256) # H, W = (512, 512) # zero123 training is 256,256
2383
 
2384
  # for batch input
2385
+ input_image1 = load_image("./demo/4_blackarm.png") # load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/4_blackarm.png")
2386
+ input_image2 = load_image("./demo/8_motor.png") # load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/8_motor.png")
2387
+ input_image3 = load_image("./demo/7_london.png") # load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/7_london.png")
2388
  input_images = [input_image1, input_image2, input_image3]
2389
  query_poses = [query_pose1, query_pose2, query_pose3]
2390
 
 
2415
  images = pipe(input_imgs=input_images, prompt_imgs=input_images, poses=query_poses, height=H, width=W,
2416
  guidance_scale=3.0, num_images_per_prompt=num_images_per_prompt, num_inference_steps=50).images
2417
 
 
2418
  # save imgs
2419
  log_dir = "logs"
2420
  os.makedirs(log_dir, exist_ok=True)
 
2424
  for idx in range(num_images_per_prompt):
2425
  images[i].save(os.path.join(log_dir,f"obj{obj}_{idx}.jpg"))
2426
  i += 1
 
2427
  ```
2428
 
2429
  ### Stable Diffusion XL Reference
2430
 
2431
+ This pipeline uses the Reference technique with SDXL. Refer to the [stable_diffusion_reference](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#stable-diffusion-reference) section for details.
2432
 
2433
  ```py
2434
  import torch
 
2436
  from diffusers.utils import load_image
2437
  from diffusers import DiffusionPipeline
2438
  from diffusers.schedulers import UniPCMultistepScheduler
2439
+
2440
  input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
2441
 
2442
  # pipe = DiffusionPipeline.from_pretrained(
 
2509
  # load the pipeline
2510
  # make sure you're logged in with `huggingface-cli login`
2511
  model_id_or_path = "runwayml/stable-diffusion-v1-5"
2512
+ # can also be used with dreamlike-art/dreamlike-photoreal-2.0
2513
  pipe = DiffusionPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16, custom_pipeline="pipeline_fabric").to("cuda")
2514
 
2515
  # let's specify a prompt
 
2540
  image = pipe(
2541
  prompt=prompt,
2542
  negative_prompt=negative_prompt,
2543
+ liked=liked,
2544
  num_inference_steps=20,
2545
  ).images[0]
2546
 
 
2710
  ```py
2711
  prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
2712
 
2713
+ # Can be set to 1~50 steps. LCM supports fast inference even <= 4 steps. Recommend: 1~8 steps.
2714
  num_inference_steps = 4
2715
 
2716
  images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
 
2742
 
2743
  input_image=Image.open("myimg.png")
2744
 
2745
+ strength = 0.5 # strength=0 (no change), strength=1 (completely overwrite image)
2746
 
2747
+ # Can be set to 1~50 steps. LCM supports fast inference even <= 4 steps. Recommend: 1~8 steps.
2748
  num_inference_steps = 4
2749
 
2750
  images = pipe(prompt=prompt, image=input_image, strength=strength, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
 
2788
  guidance_scale=8.0,
2789
  embedding_interpolation_type="lerp",
2790
  latent_interpolation_type="slerp",
2791
+ process_batch_size=4, # Make it higher or lower based on your GPU memory
2792
  generator=torch.Generator(seed),
2793
  )
2794
 
 
2807
  - [ldm3d-pano](https://huggingface.co/Intel/ldm3d-pano). This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used.
2808
  - [ldm3d-sr](https://huggingface.co/Intel/ldm3d-sr). This checkpoint enables the upscaling of RGB and depth images. Can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline pipeline.
2809
 
2810
+ ```py
2811
  from PIL import Image
2812
  import os
2813
  import torch
 
2818
  pipe_ldm3d = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
2819
  pipe_ldm3d.to("cuda")
2820
 
2821
+ prompt = "A picture of some lemons on a table"
2822
  output = pipe_ldm3d(prompt)
2823
  rgb_image, depth_image = output.rgb, output.depth
2824
+ rgb_image[0].save("lemons_ldm3d_rgb.jpg")
2825
+ depth_image[0].save("lemons_ldm3d_depth.png")
2826
 
2827
  # Upscale the previous output to a resolution of (1024, 1024)
2828
 
 
2830
 
2831
  pipe_ldm3d_upscale.to("cuda")
2832
 
2833
+ low_res_img = Image.open("lemons_ldm3d_rgb.jpg").convert("RGB")
2834
+ low_res_depth = Image.open("lemons_ldm3d_depth.png").convert("L")
2835
  outputs = pipe_ldm3d_upscale(prompt="high quality high resolution uhd 4k image", rgb=low_res_img, depth=low_res_depth, num_inference_steps=50, target_res=[1024, 1024])
2836
 
2837
+ upscaled_rgb, upscaled_depth = outputs.rgb[0], outputs.depth[0]
2838
+ upscaled_rgb.save("upscaled_lemons_rgb.png")
2839
+ upscaled_depth.save("upscaled_lemons_depth.png")
2840
+ ```
2841
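The `ldm3d-pano` checkpoint listed above is used the same way. A minimal sketch (the prompt and the 1024x512 panorama resolution are only illustrative):

```py
# Panoramic RGB + depth generation with the ldm3d-pano checkpoint
pipe_ldm3d_pano = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-pano")
pipe_ldm3d_pano.to("cuda")

pano_output = pipe_ldm3d_pano("360 view of a cozy living room", width=1024, height=512)
pano_output.rgb[0].save("living_room_pano_rgb.jpg")
pano_output.depth[0].save("living_room_pano_depth.png")
```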
 
2842
  ### ControlNet + T2I Adapter Pipeline
2843
 
2844
+ This pipeline combines both ControlNet and T2IAdapter into a single pipeline, where the forward pass is executed once.
2845
+ It receives `control_image` and `adapter_image`, as well as `controlnet_conditioning_scale` and `adapter_conditioning_scale`, for the ControlNet and Adapter modules, respectively. Whenever `adapter_conditioning_scale=0` or `controlnet_conditioning_scale=0`, it will act as a full ControlNet module or as a full T2IAdapter module, respectively.
2846
 
2847
  ```py
2848
  import cv2
 
2905
  adapter_conditioning_scale=strength,
2906
  ).images
2907
  images[0].save("controlnet_and_adapter.png")
 
2908
  ```
2909
 
2910
  ### ControlNet + T2I Adapter + Inpainting Pipeline
 
2975
  strength=0.7,
2976
  ).images
2977
  images[0].save("controlnet_and_adapter_inpaint.png")
 
2978
  ```
2979
 
2980
  ### Regional Prompting Pipeline
2981
 
2982
+ This pipeline is a port of the [Regional Prompter extension](https://github.com/hako-mikan/sd-webui-regional-prompter) for [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to `diffusers`.
2983
  This code implements a pipeline for the Stable Diffusion model, enabling the division of the canvas into multiple regions, with different prompts applicable to each region. Users can specify regions in two ways: using `Cols` and `Rows` modes for grid-like divisions, or the `Prompt` mode for regions calculated based on prompts.
2984
 
2985
  ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/rp_pipeline1.png)
 
2990
 
2991
  ```py
2992
  from examples.community.regional_prompting_stable_diffusion import RegionalPromptingStableDiffusionPipeline
2993
+
2994
  pipe = RegionalPromptingStableDiffusionPipeline.from_single_file(model_path, vae=vae)
2995
 
2996
  rp_args = {
 
2998
  "div": "1;1;1"
2999
  }
3000
 
3001
+ prompt = """
3002
  green hair twintail BREAK
3003
  red blouse BREAK
3004
  blue skirt
 
3008
  prompt=prompt,
3009
  negative_prompt=negative_prompt,
3010
  guidance_scale=7.5,
3011
+ height=768,
3012
+ width=512,
3013
+ num_inference_steps=20,
3014
+ num_images_per_prompt=1,
3015
+ rp_args=rp_args
3016
+ ).images
3017
 
3018
  time = time.strftime(r"%Y%m%d%H%M%S")
3019
  i = 1
 
3038
 
3039
  ### 2-Dimensional division
3040
 
3041
+ The prompt consists of instructions separated by the term `BREAK` and is assigned to different regions of a two-dimensional space. The image is initially split in the main splitting direction, which in this case is rows, due to the presence of a single semicolon `;`, dividing the space into an upper and a lower section. Additional sub-splitting is then applied, indicated by commas. The upper row is split into ratios of `2:1:1`, while the lower row is split into a ratio of `4:6`. Rows themselves are split in a `1:2` ratio. According to the reference image, the blue sky is designated as the first region, green hair as the second, the bookshelf as the third, and so on, in a sequence based on their position from the top left. The terrarium is placed on the desk in the fourth region, and the orange dress and sofa are in the fifth region, conforming to their respective splits.
3042
 
3043
+ ```py
3044
  rp_args = {
3045
  "mode":"rows",
3046
  "div": "1,2,1,1;2,4,6"
3047
  }
3048
 
3049
+ prompt = """
3050
  blue sky BREAK
3051
  green hair BREAK
3052
  book shelf BREAK
3053
+ terrarium on the desk BREAK
3054
  orange dress and sofa
3055
  """
3056
  ```
 
3059
 
3060
  ### Prompt Mode
3061
 
3062
+ Specifying regions in advance has limitations: fixed regions can be a hindrance when designating complex shapes or dynamic compositions. In Prompt mode, the region is instead determined after image generation has begun, which makes it possible to accommodate such compositions and complex regions.
3063
  For further information, see [here](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/prompt_en.md).
3064
 
3065
+ ### Syntax
3066
 
3067
  ```
3068
  baseprompt target1 target2 BREAK
 
3084
 
3085
  In this example, masks are calculated for the shirt, tie, and skirt, and color prompts are specified only for those regions.
3086
 
3087
+ ```py
3088
  rp_args = {
3089
+ "mode": "prompt-ex",
3090
+ "save_mask": True,
3091
  "th": "0.4,0.6,0.6",
3092
  }
3093
 
3094
+ prompt = """
3095
  a girl in street with shirt, tie, skirt BREAK
3096
  red, shirt BREAK
3097
  green, tie BREAK
 
3101
 
3102
  ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/rp_pipeline3.png)
3103
 
3104
+ ### Threshold
3105
 
3106
  The threshold used to determine the mask created by the prompt. This can be set as many times as there are masks, as the range varies widely depending on the target prompt. If multiple regions are used, enter them separated by commas. For example, hair tends to be ambiguous and requires a small value, while face tends to be large and requires a small value. These should be ordered by BREAK.
3107
 
 
3120
 
3121
  ### Accuracy
3122
 
3123
+ In the case of a 512x512 image, Attention mode reduces the region to about 8x8 pixels deep inside the U-Net, so small regions get mixed up; Latent mode works at 64x64, so the region is exact.
3124
 
3125
  ```
3126
  girl hair twintail frills,ribbons, dress, face BREAK
 
3133
 
3134
  ### Use common prompt
3135
 
3136
+ The part of the prompt up to ADDCOMM is attached to all region prompts; separate it from the rest with ADDCOMM. This is useful when you want to include elements common to all regions. For example, when generating a picture of three people with different appearances, the instruction 'three people' needs to be included in all regions. It is also useful for inserting quality tags and similar modifiers. For example, if you write as follows:
3137
 
3138
  ```
3139
  best quality, 3persons in garden, ADDCOMM
 
3156
 
3157
  ### Parameters
3158
 
3159
+ To activate the Regional Prompter, settings must be passed via `rp_args`, which is a dictionary. The items that can be set are as follows.
3160
 
3161
  ### Input Parameters
3162
 
3163
  Parameters are specified through the `rp_args` dictionary.
3164
 
3165
+ ```py
3166
  rp_args = {
3167
  "mode":"rows",
3168
  "div": "1;1;1"
3169
  }
3170
 
3171
+ pipe(prompt=prompt, rp_args=rp_args)
3172
  ```
3173
 
3174
  ### Required Parameters
3175
 
3176
+ - `mode`: Specifies the method for defining regions. Choose from `Cols`, `Rows`, `Prompt`, or `Prompt-Ex`. This parameter is case-insensitive.
3177
  - `divide`: Used in `Cols` and `Rows` modes. Details on how to specify this are provided under the respective `Cols` and `Rows` sections.
3178
  - `th`: Used in `Prompt` mode. The method of specification is detailed under the `Prompt` section.
3179
 
 
3187
 
3188
  - Reference paper
3189
 
3190
+ ```bibtex
3191
  @article{chung2022diffusion,
3192
  title={Diffusion posterior sampling for general noisy inverse problems},
3193
  author={Chung, Hyungjin and Kim, Jeongsol and Mccann, Michael T and Klasky, Marc L and Ye, Jong Chul},
 
3199
  - This pipeline allows zero-shot conditional sampling from the posterior distribution $p(x|y)$, given an observation $y$, an unconditional generative model $p(x)$, and a differentiable operator $y=f(x)$.
3200
 
3201
  - For example, $f(.)$ can be a downsampling operator; then $y$ is a downsampled image, and the pipeline becomes a super-resolution pipeline.
3202
+ - To use this pipeline, you need to know your operator $f(.)$ and the corrupted image $y$, and pass them during the call. For example, as in the main function of `dps_pipeline.py`, you first define the Gaussian blurring operator $f(.)$. The operator should be a callable `nn.Module` with all parameter gradients disabled:
3203
 
3204
  ```python
3205
  import torch.nn.functional as F
 
3229
  def weights_init(self):
3230
  if self.blur_type == "gaussian":
3231
  n = np.zeros((self.kernel_size, self.kernel_size))
3232
+ n[self.kernel_size // 2, self.kernel_size // 2] = 1
3233
  k = scipy.ndimage.gaussian_filter(n, sigma=self.std)
3234
  k = torch.from_numpy(k)
3235
  self.k = k
 
3259
  self.conv.update_weights(self.kernel.type(torch.float32))
3260
 
3261
  for param in self.parameters():
3262
+ param.requires_grad = False
3263
 
3264
  def forward(self, data, **kwargs):
3265
  return self.conv(data)
 
3296
  - ![sample](https://github.com/tongdaxu/Images/assets/22267548/4d2a1216-08d1-4aeb-9ce3-7a2d87561d65)
3297
  - Gaussian blurred image:
3298
  - ![ddpm_generated_image](https://github.com/tongdaxu/Images/assets/22267548/65076258-344b-4ed8-b704-a04edaade8ae)
3299
+ - You can download those images to run the example on your own.
3300
 
3301
  - Next, we need to define a loss function for diffusion posterior sampling. For most cases, the RMSE is fine:
3302
 
 
3305
  return torch.sqrt(torch.sum((yhat-y)**2))
3306
  ```
3307
 
3308
+ - Next, as with any other diffusion model, we need the score estimator and the scheduler. As we are working with $256 \times 256$ face images, we use ddpm-celebahq-256:
3309
 
3310
  ```python
3311
  # set up scheduler
 
3322
  # finally, the pipeline
3323
  dpspipe = DPSPipeline(model, scheduler)
3324
  image = dpspipe(
3325
+ measurement=measurement,
3326
+ operator=operator,
3327
+ loss_fn=RMSELoss,
3328
+ zeta=1.0,
3329
  ).images[0]
3330
  image.save("dps_generated_image.png")
3331
  ```
3332
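- Schematically, following the referenced DPS paper, each reverse-diffusion step first draws the usual unconditional sample $x'_{t-1}$ and then corrects it with the gradient of the measurement loss evaluated at the current clean-image estimate $\hat{x}_0(x_t)$, scaled by `zeta`: $x_{t-1} \leftarrow x'_{t-1} - \zeta \, \nabla_{x_t} \, \mathrm{loss}\big(y, f(\hat{x}_0(x_t))\big)$. The exact step-size schedule is an implementation detail of the pipeline.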
 
3333
+ - `zeta` is a hyperparameter in the range $[0,1]$, and it needs to be tuned for the best effect. By setting `zeta=1`, you should be able to obtain the reconstructed result:
3334
  - Reconstructed image:
3335
  - ![sample](https://github.com/tongdaxu/Images/assets/22267548/0ceb5575-d42e-4f0b-99c0-50e69c982209)
3336
 
3337
  - The reconstruction is perceptually similar to the source image, but different in details.
3338
+ - In `dps_pipeline.py`, we also provide a super-resolution example, which should produce:
3339
  - Downsampled image:
3340
  - ![dps_mea](https://github.com/tongdaxu/Images/assets/22267548/ff6a33d6-26f0-42aa-88ce-f8a76ba45a13)
3341
  - Reconstructed image:
 
3347
 
3348
  ```py
3349
  import torch
3350
+ from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter, DiffusionPipeline, DPMSolverMultistepScheduler
3351
+ from diffusers.utils import export_to_gif
 
3352
  from PIL import Image
3353
 
3354
  motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
 
3363
  controlnet=controlnet,
3364
  vae=vae,
3365
  custom_pipeline="pipeline_animatediff_controlnet",
3366
+ torch_dtype=torch.float16,
3367
+ ).to(device="cuda")
3368
  pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
3369
  model_id, subfolder="scheduler", beta_schedule="linear", clip_sample=False, timestep_spacing="linspace", steps_offset=1
3370
  )
 
3385
  num_inference_steps=20,
3386
  ).frames[0]
3387
 
 
3388
  export_to_gif(result.frames[0], "result.gif")
3389
  ```
3390
 
 
3409
 
3410
  ```python
3411
  import torch
3412
+ from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter, DiffusionPipeline, DPMSolverMultistepScheduler
3413
+ from diffusers.utils import export_to_gif
 
3414
  from PIL import Image
3415
 
3416
  motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
 
3426
  controlnet=[controlnet1, controlnet2],
3427
  vae=vae,
3428
  custom_pipeline="pipeline_animatediff_controlnet",
3429
+ torch_dtype=torch.float16,
3430
+ ).to(device="cuda")
3431
  pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
3432
  model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1, beta_schedule="linear",
3433
  )
 
3474
  num_inference_steps=20,
3475
  )
3476
 
 
3477
  export_to_gif(result.frames[0], "result.gif")
3478
  ```
3479
 
 
3602
  output = pipe(prompt, image, mask_image, source_points, target_points)
3603
  output_image = PIL.Image.fromarray(output)
3604
  output_image.save("./output.png")
 
3605
  ```
3606
 
3607
  ### Instaflow Pipeline
 
3650
 
3651
  - Reference paper
3652
 
3653
+ ```bibtex
3654
+ @article{hertz2022prompt,
3655
+ title={Prompt-to-prompt image editing with cross attention control},
3656
+ author={Hertz, Amir and Mokady, Ron and Tenenbaum, Jay and Aberman, Kfir and Pritch, Yael and Cohen-Or, Daniel},
3657
+ booktitle={arXiv preprint arXiv:2208.01626},
3658
+ year={2022}
3659
  }
  ```
3660
 
3661
  ```py
3662
+ from diffusers import DDIMScheduler
3663
  from examples.community.pipeline_null_text_inversion import NullTextPipeline
3664
  import torch
3665
 
 
3666
  device = "cuda"
3667
  # Provide invert_prompt and the image for null-text optimization.
3668
  invert_prompt = "A lying cat"
 
3674
  # or different if editing.
3675
  prompt = "A lying dog"
3676
 
3677
+ # Float32 is essential for the optimization to work well
3678
  model_path = "runwayml/stable-diffusion-v1-5"
3679
  scheduler = DDIMScheduler(num_train_timesteps=1000, beta_start=0.00085, beta_end=0.0120, beta_schedule="scaled_linear")
3680
+ pipeline = NullTextPipeline.from_pretrained(model_path, scheduler=scheduler, torch_dtype=torch.float32).to(device)
3681
 
3682
+ # Saves the inverted_latent to save time
3683
+ inverted_latent, uncond = pipeline.invert(input_image, invert_prompt, num_inner_steps=10, early_stop_epsilon=1e-5, num_inference_steps=steps)
3684
  pipeline(prompt, uncond, inverted_latent, guidance_scale=7.5, num_inference_steps=steps).images[0].save(input_image+".output.jpg")
3685
  ```
3686
 
 
3737
  controlnet = ControlNetModel.from_pretrained(
3738
  "lllyasviel/sd-controlnet-canny").to('cuda')
3739
 
3740
+ # You can use any finetuned SD here
3741
  pipe = DiffusionPipeline.from_pretrained(
3742
  "runwayml/stable-diffusion-v1-5", controlnet=controlnet, custom_pipeline='rerender_a_video').to('cuda')
3743
 
 
3779
  from typing import List
3780
 
3781
  import torch
3782
+ from diffusers import DiffusionPipeline
3783
  from PIL import Image
3784
 
3785
  model_id = "a-r-r-o-w/dreamshaper-xl-turbo"
 
3848
  image=image,
3849
  prompt="A snail moving on the ground",
3850
  strength=0.8,
3851
+ latent_interpolation_method="slerp", # can be lerp, slerp, or your own callback
3852
  )
3853
  frames = output.frames[0]
3854
  export_to_gif(frames, "animation.gif")
 
3858
 
3859
  IP Adapter FaceID is an experimental IP Adapter model that uses image embeddings generated by `insightface`, so no image encoder needs to be loaded.
3860
  You need to install `insightface` and all its requirements to use this model.
3861
+ You must pass the image embedding tensor as `image_embeds` to the `DiffusionPipeline` instead of `ip_adapter_image`.
3862
  You can find more results [here](https://github.com/huggingface/diffusers/pull/6276).
3863
 
3864
  ```py
 
3865
  import torch
3866
  from diffusers.utils import load_image
3867
  import cv2
 
3891
  pipeline.to("cuda")
3892
 
3893
  generator = torch.Generator(device="cpu").manual_seed(42)
3894
+ num_images = 2
3895
 
3896
  image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png")
3897
 
 
3914
 
3915
  ### InstantID Pipeline
3916
 
3917
+ InstantID is a new state-of-the-art tuning-free method for ID-preserving generation from only a single image, supporting various downstream tasks. For any usage questions, please refer to the [official implementation](https://github.com/InstantID/InstantID).
3918
 
3919
  ```py
3920
+ # !pip install diffusers opencv-python transformers accelerate insightface
3921
  import diffusers
3922
  from diffusers.utils import load_image
3923
+ from diffusers import ControlNetModel
3924
 
3925
  import cv2
3926
  import torch
 
3938
  # prepare models under ./checkpoints
3939
  # https://huggingface.co/InstantX/InstantID
3940
  from huggingface_hub import hf_hub_download
3941
+
3942
  hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/config.json", local_dir="./checkpoints")
3943
  hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/diffusion_pytorch_model.safetensors", local_dir="./checkpoints")
3944
  hf_hub_download(repo_id="InstantX/InstantID", filename="ip-adapter.bin", local_dir="./checkpoints")
3945
 
3946
+ face_adapter = './checkpoints/ip-adapter.bin'
3947
+ controlnet_path = './checkpoints/ControlNetModel'
3948
 
3949
  # load IdentityNet
3950
  controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
 
3955
  controlnet=controlnet,
3956
  torch_dtype=torch.float16
3957
  )
3958
+ pipe.to("cuda")
3959
 
3960
  # load adapter
3961
  pipe.load_ip_adapter_instantid(face_adapter)
 
4022
  import torch
4023
  import numpy as np
4024
 
4025
+ from diffusers import ControlNetModel, DDIMScheduler, DiffusionPipeline
4026
  import sys
4027
+
4028
  gmflow_dir = "/path/to/gmflow"
4029
  sys.path.insert(0, gmflow_dir)
4030
 
 
4052
  input_video_path = 'https://github.com/williamyang1991/FRESCO/raw/main/data/car-turn.mp4'
4053
  output_video_path = 'car.gif'
4054
 
4055
+ # You can use any finetuned SD here
4056
  model_path = 'SG161222/Realistic_Vision_V2.0'
4057
 
4058
  prompt = 'a red car turns in the winter'
 
4097
 
4098
  output_frames[0].save(output_video_path, save_all=True,
4099
  append_images=output_frames[1:], duration=100, loop=0)
 
4100
  ```
4101
 
4102
  # Perturbed-Attention Guidance
4103
 
4104
  [Project](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) / [arXiv](https://arxiv.org/abs/2403.17377) / [GitHub](https://github.com/KU-CVLAB/Perturbed-Attention-Guidance)
4105
 
4106
+ This implementation is based on [Diffusers](https://huggingface.co/docs/diffusers/index). `StableDiffusionPAGPipeline` is a modification of `StableDiffusionPipeline` to support Perturbed-Attention Guidance (PAG).
4107
 
4108
  ## Example Usage
4109
 
 
4123
  torch_dtype=torch.float16
4124
  )
4125
 
4126
+ device = "cuda"
4127
  pipe = pipe.to(device)
4128
 
4129
  pag_scale = 5.0
4130
  pag_applied_layers_index = ['m0']
4131
 
4132
  batch_size = 4
4133
+ seed = 10
4134
 
4135
  base_dir = "./results/"
4136
  grid_dir = base_dir + "/pag" + str(pag_scale) + "/"
 
4140
 
4141
  set_seed(seed)
4142
 
4143
+ latent_input = randn_tensor(shape=(batch_size,4,64,64), generator=None, device=device, dtype=torch.float16)
4144
 
4145
  output_baseline = pipe(
4146
  "",
 
4172
 
4173
  ## PAG Parameters
4174
 
4175
+ `pag_scale`: guidance scale of PAG (e.g. 5.0)
4176
 
4177
+ `pag_applied_layers_index`: index of the layer(s) to apply perturbation to (e.g. ['m0'])
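For reference, here is a condensed sketch of how these two arguments are passed in the example call above; the remaining arguments follow the usual Stable Diffusion call signature, and the values shown are just the example ones:

```py
output = pipe(
    "",
    num_images_per_prompt=batch_size,
    latents=latent_input,
    pag_scale=pag_scale,                                # guidance scale of PAG, e.g. 5.0
    pag_applied_layers_index=pag_applied_layers_index,  # layers to perturb, e.g. ['m0']
).images
```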
main/lpw_stable_diffusion_xl.py CHANGED
@@ -2,7 +2,7 @@
2
  # A SDXL pipeline can take unlimited weighted prompt
3
  #
4
  # Author: Andrew Zhu
5
- # Github: https://github.com/xhinker
6
  # Medium: https://medium.com/@xhinker
7
  ## -----------------------------------------------------------
8
 
 
2
  # A SDXL pipeline can take unlimited weighted prompt
3
  #
4
  # Author: Andrew Zhu
5
+ # GitHub: https://github.com/xhinker
6
  # Medium: https://medium.com/@xhinker
7
  ## -----------------------------------------------------------
8