PPDiffusers Pipelines

Pipelines provide a simple way to run inference for a wide range of downstream tasks with state-of-the-art diffusion models. Most diffusion systems consist of multiple independently trained models and a highly adaptable scheduler; a pipeline wraps these components so that the whole system can conveniently be run end to end.

For example, Stable Diffusion is built from the following components:

  • Autoencoder
  • Conditional Unet
  • CLIP text encoder
  • Scheduler
  • CLIPFeatureExtractor
  • Safety checker

These components are trained or created independently of one another, yet all of them are required to run Stable Diffusion inference. A pipeline encapsulates the entire system behind a clean inference interface.

Through pipelines we expose inference for all open-source, state-of-the-art diffusion systems under a unified API. Specifically, our pipelines:

  1. can load officially released weights and reproduce the outputs of the original implementations described in the corresponding papers;
  2. provide a simple user interface for running diffusion systems at inference time (see the Pipelines API section);
  3. offer easy-to-read implementations that can be studied alongside the official documentation (see the Pipelines Overview section);
  4. support 10+ tasks across multiple modalities (see the Task Showcase section);
  5. make it easy to engage with the community.

[Note] Pipelines do not (and should not) offer any training functionality. If you are looking for training examples, please see examples.

Pipelines Overview

The table below summarizes all supported pipelines, together with their source, task, and inference script.

Pipeline | Source | Task | Inference script
alt_diffusion | Alt Diffusion | Text-to-Image Generation | link
alt_diffusion | Alt Diffusion | Image-to-Image Text-Guided Generation | link
audio_diffusion | Audio Diffusion | Unconditional Audio Generation | link
dance_diffusion | Dance Diffusion | Unconditional Audio Generation | link
ddpm | Denoising Diffusion Probabilistic Models | Unconditional Image Generation | link
ddim | Denoising Diffusion Implicit Models | Unconditional Image Generation | link
latent_diffusion | High-Resolution Image Synthesis with Latent Diffusion Models | Text-to-Image Generation | link
latent_diffusion | High-Resolution Image Synthesis with Latent Diffusion Models | Super Resolution | link
latent_diffusion_uncond | High-Resolution Image Synthesis with Latent Diffusion Models | Unconditional Image Generation | link
paint_by_example | Paint by Example: Exemplar-based Image Editing with Diffusion Models | Image-Guided Image Inpainting | link
pndm | Pseudo Numerical Methods for Diffusion Models on Manifolds | Unconditional Image Generation | link
repaint | RePaint | Image Inpainting | link
score_sde_ve | Score-Based Generative Modeling through Stochastic Differential Equations | Unconditional Image Generation | link
stable_diffusion | Stable Diffusion | Text-to-Image Generation | link
stable_diffusion | Stable Diffusion | Image-to-Image Text-Guided Generation | link
stable_diffusion | Stable Diffusion | Text-Guided Image Inpainting | link
stable_diffusion_2 | Stable Diffusion 2 | Text-to-Image Generation | link
stable_diffusion_2 | Stable Diffusion 2 | Image-to-Image Text-Guided Generation | link
stable_diffusion_2 | Stable Diffusion 2 | Text-Guided Image Inpainting | link
stable_diffusion_2 | Stable Diffusion 2 | Text-Guided Image Upscaling | link
stable_diffusion_safe | Safe Stable Diffusion | Text-to-Image Generation | link
stochastic_karras_ve | Elucidating the Design Space of Diffusion-Based Generative Models | Unconditional Image Generation | link
unclip | UnCLIP | Text-to-Image Generation | link
versatile_diffusion | Versatile Diffusion | Text-to-Image Generation | link
versatile_diffusion | Versatile Diffusion | Image Variation | link
versatile_diffusion | Versatile Diffusion | Dual Text and Image Guided Generation | link
vq_diffusion | VQ Diffusion | Text-to-Image Generation | link

[Note] Pipelines demonstrate the diffusion systems end to end exactly as described in the corresponding papers. Most pipelines, however, can be used with different scheduler components, and even with different model components.
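To make the swap concrete, here is a minimal pure-Python stand-in sketch (the classes below are illustrative toys, not real ppdiffusers APIs): a pipeline only depends on the scheduler's step interface, so one scheduler can be substituted for another without touching the models.

```python
# Illustrative stand-ins, not real ppdiffusers classes.

class DDIMLikeScheduler:
    """Toy scheduler: shrinks the sample by a fixed fraction per step."""
    def step(self, sample, t):
        return sample * 0.5

class PNDMLikeScheduler:
    """Another toy scheduler with a different update rule."""
    def step(self, sample, t):
        return sample - 1.0

class ToyPipeline:
    """Relies only on the scheduler's `step` interface,
    so the scheduler can be swapped freely."""
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def __call__(self, sample, num_inference_steps=3):
        for t in range(num_inference_steps):
            sample = self.scheduler.step(sample, t)
        return sample

pipe = ToyPipeline(DDIMLikeScheduler())
print(pipe(8.0))                      # 8 * 0.5 * 0.5 * 0.5 = 1.0

pipe.scheduler = PNDMLikeScheduler()  # swap the scheduler in place
print(pipe(8.0))                      # 8 - 1 - 1 - 1 = 5.0
```

The real pipelines follow the same pattern: a scheduler is just another component passed to `from_pretrained` or assigned on the pipeline object.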

Pipelines API

A diffusion system usually consists of several independently trained models plus other components such as a scheduler. Each model is trained independently on a different task, and the scheduler can easily be replaced. At inference time, however, we want to load all components and use them seamlessly, even when a component comes from a different library. To that end, every pipeline provides the following functionality:

  • from_pretrained: accepts a PaddleNLP model-hub id (e.g. runwayml/stable-diffusion-v1-5) or a local directory path. To load the models and components correctly, the directory must contain a model_index.json file.

  • save_pretrained: accepts a local directory path to which all models and components of the pipeline are saved. Each model or component gets its own subfolder, and a model_index.json file is written to the root of the directory so that the whole pipeline can be re-instantiated from the local path.

  • __call__: invoked when the pipeline is run at inference time. It defines the pipeline's inference logic and should cover the full flow: preprocessing, forwarding tensors between the different models, and postprocessing.
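As a sketch of how this fits together (the model_index.json below is simplified and hypothetical; the real file written by save_pretrained also records the library version and more components), from_pretrained first reads the index to learn which class, from which library, implements each component:

```python
import json
from pathlib import Path

# Hypothetical, simplified model_index.json: each component maps to
# ["library_name", "ClassName"]; keys starting with "_" are metadata.
index = {
    "_class_name": "StableDiffusionPipeline",
    "scheduler": ["ppdiffusers", "PNDMScheduler"],
    "unet": ["ppdiffusers", "UNet2DConditionModel"],
    "vae": ["ppdiffusers", "AutoencoderKL"],
}

def list_components(pipeline_dir):
    """Read model_index.json and return {component_name: (library, class)}."""
    with open(Path(pipeline_dir) / "model_index.json") as f:
        idx = json.load(f)
    # Skip metadata keys; each remaining value names a component's class.
    return {k: tuple(v) for k, v in idx.items() if not k.startswith("_")}

pipeline_dir = Path("toy-pipeline")
pipeline_dir.mkdir(exist_ok=True)
(pipeline_dir / "model_index.json").write_text(json.dumps(index))

components = list_components(pipeline_dir)
print(components["unet"])  # ('ppdiffusers', 'UNet2DConditionModel')
```

With this index in hand, the loader can instantiate each component from its subfolder and assemble the pipeline object.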

Task Showcase

Text-and-Image Multimodal

 Text-to-Image Generation
  • stable_diffusion
from ppdiffusers import StableDiffusionPipeline

# load model and scheduler
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# run the pipeline for inference
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

# save the image
image.save("astronaut_rides_horse_sd.png")
(generated image)
 Text-Guided Image Upscaling
  • stable_diffusion_2
from ppdiffusers import StableDiffusionUpscalePipeline
from ppdiffusers.utils import load_image

pipe = StableDiffusionUpscalePipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler")

url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/low_res_cat.png"
low_res_img = load_image(url).resize((128, 128))

prompt = "a white cat"
upscaled_image = pipe(prompt=prompt, image=low_res_img).images[0]
upscaled_image.save("upsampled_cat_sd2.png")
(original image / upscaled image)
 Text-Guided Image Inpainting
  • stable_diffusion_2
from ppdiffusers import StableDiffusionInpaintPipeline
from ppdiffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting")

# load the image to edit and a mask; white pixels in the mask mark the
# region to repaint (replace "..." with your own paths or URLs)
image = load_image("...")
mask_image = load_image("...")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
inpainted_image = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0]
inpainted_image.save("inpainted_image_sd2.png")
(original image / mask image / generated image)
 Image-to-Image Text-Guided Generation
  • stable_diffusion
import paddle

from ppdiffusers import StableDiffusionImg2ImgPipeline
from ppdiffusers.utils import load_image

# load the pipeline
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# download the initial image
url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/stable-diffusion-v1-4/sketch-mountains-input.png"

init_image = load_image(url).resize((768, 512))

prompt = "A fantasy landscape, trending on artstation"
# use fp16 to speed up generation
with paddle.amp.auto_cast(True):
    image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images[0]

image.save("fantasy_landscape.png")
(original image / generated image)
 Dual Text and Image Guided Generation
  • versatile_diffusion
from ppdiffusers import VersatileDiffusionDualGuidedPipeline
from ppdiffusers.utils import load_image

url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/benz.jpg"
image = load_image(url)
text = "a red car in the sun"

pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained("shi-labs/versatile-diffusion")
pipe.remove_unused_weights()

text_to_image_strength = 0.75
image = pipe(prompt=text, image=image, text_to_image_strength=text_to_image_strength).images[0]
image.save("versatile-diffusion-red_car.png")
(original image / generated image)

Image

 Unconditional Image Generation
  • latent_diffusion_uncond
from ppdiffusers import LDMPipeline

# load model and scheduler
pipe = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256")

# run the pipeline for inference
image = pipe(num_inference_steps=200).images[0]

# save the image
image.save("ldm_generated_image.png")
(generated image)
 Super Resolution
  • latent_diffusion
import paddle

from ppdiffusers import LDMSuperResolutionPipeline
from ppdiffusers.utils import load_image

# load the pipeline
pipe = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages")

# download the initial image
url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/stable-diffusion-v1-4/overture-creations.png"

init_image = load_image(url).resize((128, 128))
init_image.save("original-image.png")

# use fp16 to speed up generation
with paddle.amp.auto_cast(True):
    image = pipe(init_image, num_inference_steps=100, eta=1).images[0]

image.save("super-resolution-image.png")
(original image / generated image)
 Image Inpainting
  • repaint
from ppdiffusers import RePaintPipeline, RePaintScheduler
from ppdiffusers.utils import load_image

img_url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/celeba_hq_256.png"
mask_url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/mask_256.png"

# Load the original image and the mask as PIL images
original_image = load_image(img_url).resize((256, 256))
mask_image = load_image(mask_url).resize((256, 256))

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256", subfolder="scheduler")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)

output = pipe(
    original_image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,
    jump_n_sample=10,
)
inpainted_image = output.images[0]

inpainted_image.save("repaint-image.png")
(original image / mask image / generated image)
 Image Variation
  • versatile_diffusion
from ppdiffusers import VersatileDiffusionImageVariationPipeline
from ppdiffusers.utils import load_image

url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/benz.jpg"
image = load_image(url)

pipe = VersatileDiffusionImageVariationPipeline.from_pretrained("shi-labs/versatile-diffusion")

image = pipe(image).images[0]
image.save("versatile-diffusion-car_variation.png")
(original image / generated image)

Audio

 Unconditional Audio Generation
  • audio_diffusion
from scipy.io.wavfile import write
from ppdiffusers import AudioDiffusionPipeline
import paddle

# load model and scheduler
pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256")
pipe.set_progress_bar_config(disable=None)
generator = paddle.Generator().manual_seed(42)

output = pipe(generator=generator)
audio = output.audios[0]
image = output.images[0]

# save the audio locally (one file per channel; renamed the loop
# variable so it no longer shadows `audio`)
for i, channel in enumerate(audio):
    write(f"audio_diffusion_test{i}.wav", pipe.mel.sample_rate, channel.transpose())

# save the image
image.save("audio_diffusion_test.png")
(generated spectrogram image)