---
license: mit
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>

## Würstchen - Overview
Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
computational costs for both training and inference by orders of magnitude. Training on 1024x1024 images is far more expensive than training on 32x32 images. Usually, other works make
use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme: through its novel design, it achieves a 42x spatial
compression. This was previously unseen, because common methods already fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a
two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
A third model, Stage C, is learned in that highly compressed latent space. This training requires only a fraction of the compute used for current top-performing models, which also makes
inference cheaper and faster.

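To make the scale of this compression concrete, here is a back-of-the-envelope illustration. Reading the 42x factor per spatial dimension (as in the f4/f8 notation of latent diffusion models) is an assumption made for illustration:

```py
# Rough arithmetic behind the 42x spatial compression claim.
# Assumption: the factor applies per spatial dimension, as with the
# f4/f8 downsampling factors of latent diffusion models.
image_side = 1024
compression = 42
latent_side = image_side // compression  # ~24
print(f"{image_side}x{image_side} pixels -> ~{latent_side}x{latent_side} latent positions")
print(f"~{(image_side ** 2) // (latent_side ** 2)}x fewer spatial positions to denoise")
```
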
## Würstchen - Prior
The Prior is what we refer to as "Stage C". It is the text-conditional model, operating in the small latent space that Stage A and Stage B encode images into. During
inference, its job is to generate the image latents given text. These image latents are then sent to Stages A & B to be decoded into pixel space.

### Prior - Model - Interpolated
The interpolated model is our current best Prior (Stage C) checkpoint. It is an interpolation between our [base model](https://huggingface.co/warp-ai/wuerstchen-prior-model-base) and the [finetuned model](https://huggingface.co/warp-ai/wuerstchen-prior-model-finetuned).
We created this interpolation because the finetuned model became too artistic and often generates only artistic images, whereas the base model is usually very photorealistic.
As a result, we combined both by interpolating their weights at a ratio of 50%, i.e. the midpoint between the base and finetuned model (`0.5 * base_weights + 0.5 * finetuned_weights`).
You can also interpolate the [base model](https://huggingface.co/warp-ai/wuerstchen-prior-model-base) and the [finetuned model](https://huggingface.co/warp-ai/wuerstchen-prior-model-finetuned)
at any ratio you like and may find an interpolation that fits your needs better than this checkpoint (see the sketch below).

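A minimal sketch of such a weight interpolation, assuming both repositories load as a diffusers `WuerstchenPrior` model with identical state dict keys (the output path is just a placeholder):

```py
# Minimal sketch: blend the base and finetuned prior weights yourself.
# Assumptions: both checkpoints load as `WuerstchenPrior` and share the
# same state dict keys; alpha controls the mix.
import torch
from diffusers.pipelines.wuerstchen import WuerstchenPrior

base = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-base")
finetuned = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-finetuned")

alpha = 0.5  # 0.0 = fully base (photorealistic), 1.0 = fully finetuned (artistic)
base_sd, finetuned_sd = base.state_dict(), finetuned.state_dict()
blended = {k: torch.lerp(base_sd[k].float(), finetuned_sd[k].float(), alpha) for k in base_sd}
base.load_state_dict(blended)
base.save_pretrained("./wuerstchen-prior-interpolated")  # hypothetical output path
```
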
### Image Sizes
Würstchen was trained on image resolutions between 1024x1024 and 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out.
We also observed that the Prior (Stage C) adapts extremely quickly to new resolutions, so finetuning it at 2048x2048 should be computationally cheap.
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/IfVsUDcP15OY-5wyLYKnQ.jpeg" width=1000>

## How to run
This pipeline should be run together with the decoder pipeline from https://huggingface.co/warp-ai/wuerstchen:

```py
import torch
from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS

device = "cuda"
dtype = torch.float16
num_images_per_prompt = 2

# Stage C: the Prior, which turns text into image latents.
prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
    "warp-ai/wuerstchen-prior", torch_dtype=dtype
).to(device)
# Stages A & B: the decoder, which turns latents into pixels.
decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=dtype
).to(device)

caption = "Anthropomorphic cat dressed as a fire fighter"
negative_prompt = ""

prior_output = prior_pipeline(
    prompt=caption,
    height=1024,
    width=1536,
    timesteps=DEFAULT_STAGE_C_TIMESTEPS,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=num_images_per_prompt,
)
decoder_output = decoder_pipeline(
    image_embeddings=prior_output.image_embeddings,
    prompt=caption,
    negative_prompt=negative_prompt,
    num_images_per_prompt=num_images_per_prompt,
    guidance_scale=0.0,
    output_type="pil",
).images
```

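With `output_type="pil"`, `decoder_output` is a list of `PIL.Image` objects, so the results can be saved directly (the filename pattern below is just an example):

```py
# Save each generated image to disk.
for i, image in enumerate(decoder_output):
    image.save(f"wuerstchen_{i}.png")
```
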
## Model Details
- **Developed by:** Pablo Pernias, Dominic Rampas
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** MIT
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **Resources for more information:** [GitHub Repository](https://github.com/dome272/Wuerstchen), [Paper](https://arxiv.org/abs/2306.00637).
- **Cite as:**

      @misc{pernias2023wuerstchen,
            title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
            author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
            year={2023},
            eprint={2306.00637},
            archivePrefix={arXiv},
            primaryClass={cs.CV}
      }

## Environmental Impact

**Würstchen v2** - **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type, runtime, cloud provider, and compute region were used to estimate the carbon impact; a small arithmetic check follows the list.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 24602
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (power consumption x time x carbon intensity of the local power grid):** 2275.68 kg CO2 eq.

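As a rough sanity check of that figure (the 250 W TDP of an A100 PCIe 40GB and a US-east grid intensity of about 0.37 kg CO2eq/kWh are assumed inputs, consistent with the calculator):

```py
# Emissions estimate = power (kW) x time (h) x grid carbon intensity.
# Assumed inputs: 250 W TDP (A100 PCIe 40GB), ~0.37 kg CO2eq/kWh (US-east).
hours = 24602
power_kw = 0.250
kg_co2_per_kwh = 0.37
print(hours * power_kw * kg_co2_per_kwh)  # ~2275.7 kg CO2 eq.
```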