from diffusers import (
    StableDiffusionPipeline,
    DDPMScheduler,
    DDIMScheduler,
    PNDMScheduler,
    LMSDiscreteScheduler,
    EulerAncestralDiscreteScheduler,
    EulerDiscreteScheduler,
    DPMSolverMultistepScheduler,
)
repo_id = "runwayml/stable-diffusion-v1-5"
ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler")
ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler")
pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler")
lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler")
# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler`
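# A hedged aside (not part of the original snippet): if a pipeline is already loaded,
# the same swap can be done in place with the scheduler's `from_config` method, e.g.
#   pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)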
pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True)

DiffusionPipeline explained

As a class method, DiffusionPipeline.from_pretrained() is responsible for two things:

Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files.
Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it.

The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5.
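As a minimal, hedged illustration of the caching behaviour (this snippet is not part of the original example; local_files_only is a standard from_pretrained argument that prevents any network access), a repeated load can be told to use only the already-cached files:

from diffusers import DiffusionPipeline

# second and later loads reuse the local cache; local_files_only=True raises an error
# instead of redownloading if the files are not already cached
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", use_safetensors=True, local_files_only=True
)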
from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
print(pipeline)

You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components:

"feature_extractor": a CLIPImageProcessor from 🤗 Transformers.
"safety_checker": a component for screening against harmful content.
"scheduler": an instance of PNDMScheduler.
"text_encoder": a CLIPTextModel from 🤗 Transformers.
"tokenizer": a CLIPTokenizer from 🤗 Transformers.
"unet": an instance of UNet2DConditionModel.
"vae": an instance of AutoencoderKL.
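As a hedged aside (this loop is an illustration, not part of the original example), the same components are exposed programmatically through the pipeline.components dictionary, which maps each component name to the loaded object and is handy for reusing them in another pipeline; the full printed pipeline follows right after this sketch:

# list each component name and the class it was loaded as
for name, component in pipeline.components.items():
    print(name, type(component).__name__)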
"feature_extractor": [
"transformers",
"CLIPImageProcessor"
],
"safety_checker": [
"stable_diffusion",
"StableDiffusionSafetyChecker"
],
"scheduler": [
"diffusers",
"PNDMScheduler"
],
"text_encoder": [
"transformers",
"CLIPTextModel"
],
"tokenizer": [
"transformers",
"CLIPTokenizer"
],
"unet": [
"diffusers",
"UNet2DConditionModel"
],
"vae": [
"diffusers",
"AutoencoderKL"
]
}

Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository:
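If you want to see that layout on your own disk, here is a small hedged sketch (snapshot_download comes from huggingface_hub and returns the path of the cached repository; it is not part of the original page). The folder tree itself follows right after this snippet:

from huggingface_hub import snapshot_download
import os

# returns the path of the cached snapshot, i.e. the root of the tree shown below
local_dir = snapshot_download("runwayml/stable-diffusion-v1-5")
print(sorted(os.listdir(local_dir)))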
.
├── feature_extractor
│   └── preprocessor_config.json
├── model_index.json
├── safety_checker
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── scheduler
│   └── scheduler_config.json
├── text_encoder
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── tokenizer
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── unet
│   ├── config.json
│   ├── diffusion_pytorch_model.bin
│   ├── diffusion_pytorch_model.fp16.bin
│   ├── diffusion_pytorch_model.fp16.safetensors
│   ├── diffusion_pytorch_model.non_ema.bin
│   ├── diffusion_pytorch_model.non_ema.safetensors
│   └── diffusion_pytorch_model.safetensors
└── vae
    ├── config.json
    ├── diffusion_pytorch_model.bin
    ├── diffusion_pytorch_model.fp16.bin
    ├── diffusion_pytorch_model.fp16.safetensors
    └── diffusion_pytorch_model.safetensors

You can access each of the components of the pipeline as an attribute to view its configuration:
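For instance (a hedged sketch, not from the original page), the scheduler's configuration can be read the same way; the tokenizer's configuration is shown right after this snippet:

# each attribute is the fully loaded component, so its config is directly accessible
print(pipeline.scheduler.config)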
pipeline.tokenizer
CLIPTokenizer(
    name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer",
    vocab_size=49408,
    model_max_length=77,
    is_fast=False,
    padding_side="right",
    truncation_side="right",
    special_tokens={
        "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "pad_token": "<|endoftext|>",
    },
    clean_up_tokenization_spaces=True
)

Every pipeline expects a model_index.json file that tells the DiffusionPipeline:

which pipeline class to load from _class_name
which version of 🧨 Diffusers was used to create the model in _diffusers_version
what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name)

{