gvecchio committed
Commit
1ad8ca8
1 Parent(s): fe4ac38

Upload MatForgerPipeline

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ img/MatForger_gen-text.png filter=lfs diff=lfs merge=lfs -text
+ img/MatForger_gen-img.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,151 @@
---
library_name: diffusers
datasets:
- gvecchio/MatSynth
language:
- en
tags:
- pbr
- materials
- svbrdf
- 3d
- textures
license: openrail
---

<!-- # ⚒️ MatForger -->
![MatForger](./img/MatForger.png)


> Three Textures for the Designers under the sky, \
> Seven for the Artists in their studios of light, \
> Nine for the Architects doomed to try, \
> One for the Developers on their screens so bright \
> In the Land of Graphics where the Pixels lie. \
> One Forge to craft them all, One Code to find them, \
> One Model to bring them all and to the mesh bind them \
> In the Land of Graphics where the Pixels lie.

<sup><sub>Our deep apologies to J. R. R. Tolkien</sub></sup>


## 🤖 Model Details

### Overview

**MatForger** is a generative diffusion model designed specifically for generating Physically Based Rendering (PBR) materials. Inspired by the [MatFuse](https://arxiv.org/abs/2308.11408) model and trained on the comprehensive [MatSynth](https://huggingface.co/datasets/gvecchio/MatSynth) dataset, MatForger pushes the boundaries of material synthesis.
It employs the noise rolling technique, derived from [ControlMat](https://arxiv.org/abs/2309.01700), to produce tileable maps. The model generates multiple maps, including basecolor, normal, height, roughness, and metallic, catering to a wide range of material design needs.
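
For intuition, the tileability trick can be sketched in a few lines: at every denoising step the latent is circularly shifted ("rolled") by a random offset, so image borders keep moving through the interior and no fixed seam can form. This is a simplified illustration rather than the pipeline's exact code, and the helper name `roll_latents` is ours:

```python
import torch

def roll_latents(latents: torch.Tensor) -> torch.Tensor:
    # Noise rolling (ControlMat, Sec. 5.1): circularly shift the latent along
    # height and width by a random amount. Applied once per denoising step,
    # this prevents a fixed seam and yields tileable maps after decoding.
    _, _, h, w = latents.shape
    shift_h = int(torch.randint(0, h, (1,)))
    shift_w = int(torch.randint(0, w, (1,)))
    return torch.roll(latents, shifts=(shift_h, shift_w), dims=(2, 3))
```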

### Features
- **High-Quality PBR Material Generation:** Produces detailed and realistic materials suited for various applications.
- **Tileable Textures:** Uses the noise rolling approach sketched above to ensure textures are tileable, enhancing their usability in larger scenes.
- **Versatile Outputs:** Generates multiple texture maps (basecolor, normal, height, roughness, metallic) to meet the requirements of complex material designs.
- **Text and Image Conditioning:** Can be conditioned with either images or text inputs to guide material generation, offering flexibility in creative workflows.

### Model Description
The MatForger architecture is based on **MatFuse**, but it replaces the vector-quantized autoencoder (VQ-VAE) with a continuous VAE. Additionally, we distilled the multi-encoder VAE into a single-encoder model, reducing model complexity while retaining the disentangled latent representation of MatFuse.

## ⚒️ MatForger at work

MatForger can be conditioned via text prompts or images to generate high-quality materials. Below are some examples of materials generated with MatForger. For each sample we report the prompt, the generated maps (basecolor, normal, height, roughness, metallic), and the resulting rendering.

<details>
  <summary>Text2Mat samples</summary>
  <img src="./img/MatForger_gen-text.png" alt="Text2Mat generation samples">
</details>

<details>
  <summary>Image2Mat samples</summary>
  <img src="./img/MatForger_gen-img.png" alt="Image2Mat generation samples">
</details>

## 🧑‍💻 How to use

MatForger requires a custom pipeline, since its output is a set of PBR maps rather than a single image.

You can use it in [🧨 diffusers](https://github.com/huggingface/diffusers):

```python
import torch

from PIL import Image

from diffusers import DiffusionPipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

pipe = DiffusionPipeline.from_pretrained(
    "gvecchio/MatForger",
    trust_remote_code=True,
)

pipe.enable_vae_tiling()

pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2)
pipe.to(device)

# prompting the model with an image
prompt = Image.open("bricks.png")
image = pipe(
    prompt,
    guidance_scale=6.0,
    height=512,
    width=512,
    num_inference_steps=25,
).images[0]

# prompting the model with text
prompt = "terracotta brick wall"
image = pipe(
    prompt,
    guidance_scale=6.0,
    height=512,
    width=512,
    num_inference_steps=25,
).images[0]

# get the maps from the prediction
basecolor = image.basecolor
normal = image.normal
height = image.height
roughness = image.roughness
metallic = image.metallic

```

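To persist the generated maps, a minimal sketch (assuming the default `output_type="pil"`, so each map above is a `PIL.Image`; the output folder name is arbitrary):

```python
import os

out_dir = "matforger_output"
os.makedirs(out_dir, exist_ok=True)

# Save each PBR map returned by the pipeline as a separate PNG file.
for name, map_image in [
    ("basecolor", basecolor),
    ("normal", normal),
    ("height", height),
    ("roughness", roughness),
    ("metallic", metallic),
]:
    map_image.save(os.path.join(out_dir, f"{name}.png"))
```
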
## 📉 Bias and Limitations

The model was trained on a variety of synthetic and real data from the [MatSynth](https://huggingface.co/datasets/gvecchio/MatSynth) dataset.
However, it might fail to generate complex materials or patterns that differ significantly from the training data distribution.
Additionally, while the model can be conditioned using either images or text, it might give unexpected results when prompted with text, as it was mainly trained for image-to-material generation.

**Note:** MatForger is a home-trained model, built with limited resources. We will try to keep it regularly updated and to improve its performance. \
We welcome contributions, feedback, and suggestions to enhance its capabilities and address its limitations. Please be patient as we work towards making MatForger an even more powerful tool for the creative community.

## 💡 Upcoming feature ideas

As MatForger continues to evolve, we're working on several features aimed at enhancing its utility and effectiveness. Here are some of the possible upcoming enhancements:

- **Opacity**: Generate opacity maps for materials requiring transparency.

- **Material Inpainting**: Allow users to modify and enhance materials by filling in gaps or correcting imperfections directly within the generated textures.

- **Sketch-Based Material Generation**: We're exploring ways to convert simple sketches into detailed materials, simplifying the material creation process and making it more accessible to users without in-depth technical expertise.

- **Color Palette Conditioning**: Future updates will offer improved control over the color palette of generated materials, enabling users to achieve more precise color matching for their projects.

- **Material Estimation from Photographs**: We aim to refine the model's ability to interpret and recreate the material properties observed in photographs, facilitating the creation of materials that closely mimic real-world textures.

### 🎯 Ongoing Development and Openness to Feedback
MatForger is a research tool, so its development is an ongoing process that follows our research agenda.
Nonetheless, we are committed to improving MatForger's capabilities, addressing its limitations, and implementing the suggestions we receive from our users.

### 🤝 How to Contribute
**Feature Suggestions**: If you have ideas for new features or improvements, we're eager to hear them. Reach out to us! Your suggestions play a crucial role in guiding the direction of MatForger's development.

**Dataset Contributions**: Enhancing the diversity of our training data can significantly improve the model's performance. If you have access to textures, materials, or data that could benefit MatForger, consider contributing.

**Feedback**: User feedback is invaluable for identifying areas for improvement. Whether through reporting issues or sharing your experiences, your insights help us make MatForger better.

## Terms of Use
We hope that the release of this model will make community-based research efforts more accessible. This model is governed by an OpenRAIL license and is intended for research purposes.
img/MatForger.png ADDED
img/MatForger_gen-img.png ADDED

Git LFS Details

  • SHA256: 766def9b526c737a9a55ff1f82322b7d0b5dddd7650b1bd209efcc1b5b189799
  • Pointer size: 132 Bytes
  • Size of remote file: 3.42 MB
img/MatForger_gen-text.png ADDED

Git LFS Details

  • SHA256: b9875d76c93e20d3cfa5e44f911b7ddfab0fdb5a9c0d3a7a165be49f4a5e8822
  • Pointer size: 132 Bytes
  • Size of remote file: 2.97 MB
model_index.json ADDED
@@ -0,0 +1,20 @@
{
  "_class_name": ["pipeline", "MatForgerPipeline"],
  "_diffusers_version": "0.26.3",
  "prompt_encoder": [
    "encoder",
    "MaterialPromptEncoder"
  ],
  "scheduler": [
    "diffusers",
    "DDIMScheduler"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
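
The `model_index.json` above tells diffusers which class implements each pipeline component: the custom `MatForgerPipeline` and `MaterialPromptEncoder` come from this repository, while the scheduler, UNet, and VAE are standard diffusers classes. For illustration only (the intended entry point remains `DiffusionPipeline.from_pretrained` with `trust_remote_code=True`, as in the README), the standard components can also be loaded individually from their subfolders:

```python
from diffusers import AutoencoderKL, DDIMScheduler, UNet2DConditionModel

# Each subfolder below matches an entry in model_index.json.
vae = AutoencoderKL.from_pretrained("gvecchio/MatForger", subfolder="vae")
unet = UNet2DConditionModel.from_pretrained("gvecchio/MatForger", subfolder="unet")
scheduler = DDIMScheduler.from_pretrained("gvecchio/MatForger", subfolder="scheduler")
```
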
pipeline.py ADDED
@@ -0,0 +1,877 @@
1
+ import inspect
2
+ from typing import Any, Dict, List, Optional, Tuple, Union
3
+
4
+ import numpy as np
5
+ import torch
6
+ import torch.nn as nn
7
+ import torch.nn.functional as F
8
+ import torchvision.transforms.functional as TF
9
+ from diffusers.image_processor import PipelineImageInput, VaeImageProcessor
10
+ from diffusers.loaders import FromSingleFileMixin
11
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import (
12
+ EXAMPLE_DOC_STRING,
13
+ rescale_noise_cfg,
14
+ retrieve_timesteps,
15
+ )
16
+ from diffusers.schedulers import KarrasDiffusionSchedulers
17
+ from diffusers.utils import (
18
+ USE_PEFT_BACKEND,
19
+ BaseOutput,
20
+ deprecate,
21
+ logging,
22
+ replace_example_docstring,
23
+ )
24
+ from diffusers.utils.torch_utils import randn_tensor
25
+ from PIL import Image
26
+
27
+ from diffusers import AutoencoderKL, DiffusionPipeline, UNet2DConditionModel
28
+
29
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
30
+ from dataclasses import dataclass
31
+
32
+
33
+ def postprocess(
34
+ image: torch.FloatTensor,
35
+ output_type: str = "pil",
36
+ ):
37
+ """
38
+ Postprocess the image output from tensor to `output_type`.
39
+
40
+ Args:
41
+ image (`torch.FloatTensor`):
42
+ The image input, should be a pytorch tensor with shape `B x C x H x W`.
43
+ output_type (`str`, *optional*, defaults to `pil`):
44
+ The output type of the image, can be one of `pil`, `np`, `pt`, `latent`.
45
+
46
+ Returns:
47
+ `PIL.Image.Image`, `np.ndarray` or `torch.FloatTensor`:
48
+ The postprocessed image.
49
+ """
50
+ if not isinstance(image, torch.Tensor):
51
+ raise ValueError(
52
+ f"Input for postprocessing is in incorrect format: {type(image)}. We only support pytorch tensor"
53
+ )
54
+ if output_type not in ["latent", "pt", "np", "pil"]:
55
+ deprecation_message = (
56
+ f"the output_type {output_type} is outdated and has been set to `np`. Please make sure to set it to one of these instead: "
57
+ "`pil`, `np`, `pt`, `latent`"
58
+ )
59
+ deprecate(
60
+ "Unsupported output_type", "1.0.0", deprecation_message, standard_warn=False
61
+ )
62
+ output_type = "np"
63
+
64
+ image = image.detach().cpu()
65
+
66
+ if output_type == "latent":
67
+ return image
68
+
69
+ # denormalize the image
70
+ image = image.clamp(-1, 1) * 0.5 + 0.5
71
+
72
+ materials = []
73
+ for i in range(image.shape[0]):
74
+
75
+ material = MatForgerMaterial()
76
+ material.init_from_tensor(image[i])
77
+
78
+ if output_type == "pt":
79
+ material.to_pt()
80
+
81
+ if output_type == "np":
82
+ material.to_np()
83
+
84
+ if output_type == "pil":
85
+ material.to_pil()
86
+
87
+ materials.append(material)
88
+
89
+ return materials
90
+
91
+
92
+ @dataclass
93
+ class MatForgerMaterial:
94
+ def __init__(
95
+ self,
96
+ basecolor: Optional[Union[Image.Image, np.ndarray, torch.FloatTensor]] = None,
97
+ normal: Optional[Union[Image.Image, np.ndarray, torch.FloatTensor]] = None,
98
+ height: Optional[Union[Image.Image, np.ndarray, torch.FloatTensor]] = None,
99
+ roughness: Optional[Union[Image.Image, np.ndarray, torch.FloatTensor]] = None,
100
+ metallic: Optional[Union[Image.Image, np.ndarray, torch.FloatTensor]] = None,
101
+ ):
102
+ self.basecolor = basecolor
103
+ self.normal = normal
104
+ self.height = height
105
+ self.roughness = roughness
106
+ self.metallic = metallic
107
+
108
+ def _to_numpy(self, image):
109
+ if image is None:
110
+ return None
111
+
112
+ if isinstance(image, Image.Image):
113
+ image = np.array(image)
114
+ elif isinstance(image, torch.FloatTensor):
115
+ image = image.cpu().numpy()
116
+ return image
117
+
118
+ def _to_pil(self, image):
119
+ if image is None:
120
+ return None
121
+
122
+ if isinstance(image, np.ndarray):
123
+ image = Image.fromarray(image)
124
+ elif isinstance(image, torch.FloatTensor):
125
+ image = TF.to_pil_image(image)
126
+ return image
127
+
128
+ def _to_pt(self, image):
129
+ if image is None:
130
+ return None
131
+
132
+ if isinstance(image, np.ndarray):
133
+ image = torch.from_numpy(image)
134
+ elif isinstance(image, Image.Image):
135
+ image = TF.to_tensor(image)
136
+ return image
137
+
138
+ def compute_normal_map_z_component(self, normal: torch.FloatTensor):
139
+ """
140
+ Compute the z-component of the normal map for a tensor of shape (2, H, W).
141
+
142
+ Parameters:
143
+ - normal_map (torch.Tensor): A tensor of shape (2, H, W) containing the x and y components of the normal map.
144
+
145
+ Returns:
146
+ - A tensor of shape (1, H, W) containing the z-component of the normal map.
147
+ """
148
+ # Normalize the normal map to the range [-1, 1]
149
+ normal = normal * 2 - 1
150
+
151
+ # Square the x and y components
152
+ squared = normal**2
153
+
154
+ # Sum along the first dimension (x^2 + y^2)
155
+ sum_squared = squared.sum(dim=0, keepdim=True)
156
+
157
+ # Compute z-component: sqrt(1 - (x^2 + y^2))
158
+ z_component = torch.sqrt(1 - sum_squared).clamp(
159
+ min=0
160
+ ) # Clamp to avoid negative values under sqrt
161
+
162
+ normal = torch.cat([normal, z_component], dim=0)
163
+ normal = normal * 0.5 + 0.5 # Denormalize to [0, 1]
164
+ return normal
165
+
166
+ def init_from_tensor(self, image: torch.FloatTensor):
167
+ assert image.shape[0] >= 8, "Input tensor should have at least 8 channels"
168
+ self.basecolor = image[:3]
169
+ self.normal = self.compute_normal_map_z_component(image[3:5])
170
+ self.height = image[5:6]
171
+ self.roughness = image[6:7]
172
+ self.metallic = image[7:8]
173
+
174
+ def to_pt(self):
175
+ # convert to pytorch tensor
176
+ self.basecolor = self._to_pt(self.basecolor)
177
+ self.normal = self._to_pt(self.normal)
178
+ self.height = self._to_pt(self.height)
179
+ self.roughness = self._to_pt(self.roughness)
180
+ self.metallic = self._to_pt(self.metallic)
181
+
182
+ def to_np(self):
183
+ # convert to numpy
184
+ self.basecolor = self._to_numpy(self.basecolor)
185
+ self.normal = self._to_numpy(self.normal)
186
+ self.height = self._to_numpy(self.height)
187
+ self.roughness = self._to_numpy(self.roughness)
188
+ self.metallic = self._to_numpy(self.metallic)
189
+
190
+ def to_pil(self):
191
+ # convert to PIL image
192
+ self.basecolor = self._to_pil(self.basecolor)
193
+ self.normal = self._to_pil(self.normal)
194
+ self.height = self._to_pil(self.height)
195
+ self.roughness = self._to_pil(self.roughness)
196
+ self.metallic = self._to_pil(self.metallic)
197
+
198
+ def as_dict(self):
199
+ return {
200
+ "basecolor": self.basecolor,
201
+ "normal": self.normal,
202
+ "height": self.height,
203
+ "roughness": self.roughness,
204
+ "metallic": self.metallic,
205
+ }
206
+
207
+
208
+ @dataclass
209
+ class MatForgerPipelineOutput(BaseOutput):
210
+ """
211
+ Output class for the MatForger pipeline.
212
+
213
+ Args:
214
+ images (`List[MatForgerMaterial]`)
+ List of generated materials of length `batch_size`, each holding the basecolor, normal, height, roughness, and metallic maps.
217
+ """
218
+
219
+ images: List[MatForgerMaterial]
220
+
221
+
222
+ class MatForgerPipeline(DiffusionPipeline, FromSingleFileMixin):
223
+
224
+ model_cpu_offload_seq = "prompt_encoder->unet->vae"
225
+
226
+ def __init__(
227
+ self,
228
+ vae: AutoencoderKL,
229
+ unet: UNet2DConditionModel,
230
+ prompt_encoder: nn.Module,
231
+ scheduler: KarrasDiffusionSchedulers,
232
+ ):
233
+ super().__init__()
234
+
235
+ self.register_modules(
236
+ vae=vae,
237
+ unet=unet,
238
+ prompt_encoder=prompt_encoder,
239
+ scheduler=scheduler,
240
+ )
241
+
242
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
243
+ self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
244
+
245
+ def enable_vae_slicing(self):
246
+ r"""
247
+ Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
248
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
249
+ """
250
+ self.vae.enable_slicing()
251
+
252
+ def disable_vae_slicing(self):
253
+ r"""
254
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
255
+ computing decoding in one step.
256
+ """
257
+ self.vae.disable_slicing()
258
+
259
+ def enable_vae_tiling(self):
260
+ r"""
261
+ Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
262
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
263
+ processing larger images.
264
+ """
265
+ self.vae.enable_tiling()
266
+
267
+ def disable_vae_tiling(self):
268
+ r"""
269
+ Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
270
+ computing decoding in one step.
271
+ """
272
+ self.vae.disable_tiling()
273
+
274
+ def encode_prompt(
275
+ self,
276
+ prompt,
277
+ device,
278
+ num_images_per_prompt,
279
+ do_classifier_free_guidance,
280
+ negative_prompt=None,
281
+ prompt_embeds: Optional[torch.FloatTensor] = None,
282
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
283
+ ):
284
+ r"""
285
+ Encodes the prompt into text encoder hidden states.
286
+
287
+ Args:
288
+ prompt (`str` or `List[str]`, *optional*):
289
+ prompt to be encoded
290
+ device: (`torch.device`):
291
+ torch device
292
+ num_images_per_prompt (`int`):
293
+ number of images that should be generated per prompt
294
+ do_classifier_free_guidance (`bool`):
295
+ whether to use classifier free guidance or not
296
+ negative_prompt (`str` or `List[str]`, *optional*):
297
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
298
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
299
+ less than `1`).
300
+ prompt_embeds (`torch.FloatTensor`, *optional*):
301
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
302
+ provided, text embeddings will be generated from `prompt` input argument.
303
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
304
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
305
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
306
+ argument.
307
+ """
308
+ if (
309
+ prompt is not None
310
+ and isinstance(prompt, str)
311
+ or isinstance(prompt, Image.Image)
312
+ ):
313
+ batch_size = 1
314
+ elif prompt is not None and isinstance(prompt, list):
315
+ batch_size = len(prompt)
316
+ else:
317
+ batch_size = prompt_embeds.shape[0]
318
+
319
+ if prompt_embeds is None:
320
+ prompt_embeds = self.prompt_encoder.encode_prompt(prompt)
321
+
322
+ if self.prompt_encoder is not None:
323
+ prompt_embeds_dtype = self.prompt_encoder.dtype
324
+ elif self.unet is not None:
325
+ prompt_embeds_dtype = self.unet.dtype
326
+ else:
327
+ prompt_embeds_dtype = prompt_embeds.dtype
328
+
329
+ prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device)
330
+
331
+ bs_embed, seq_len, _ = prompt_embeds.shape
332
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
333
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
334
+ prompt_embeds = prompt_embeds.view(
335
+ bs_embed * num_images_per_prompt, seq_len, -1
336
+ )
337
+
338
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
339
+ negative_prompt_embeds = self.prompt_encoder.encode_prompt(
340
+ [""] * batch_size # TODO: Make this customizable
341
+ )
342
+ # get unconditional embeddings for classifier free guidance
343
+ if do_classifier_free_guidance:
344
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
345
+ seq_len = negative_prompt_embeds.shape[1]
346
+
347
+ negative_prompt_embeds = negative_prompt_embeds.to(
348
+ dtype=prompt_embeds_dtype, device=device
349
+ )
350
+
351
+ negative_prompt_embeds = negative_prompt_embeds.repeat(
352
+ 1, num_images_per_prompt, 1
353
+ )
354
+ negative_prompt_embeds = negative_prompt_embeds.view(
355
+ batch_size * num_images_per_prompt, seq_len, -1
356
+ )
357
+
358
+ return prompt_embeds, negative_prompt_embeds
359
+
360
+ def decode_latents(self, latents):
361
+ deprecation_message = "The decode_latents method is deprecated and will be removed in 1.0.0. Please use VaeImageProcessor.postprocess(...) instead"
362
+ deprecate("decode_latents", "1.0.0", deprecation_message, standard_warn=False)
363
+
364
+ latents = 1 / self.vae.config.scaling_factor * latents
365
+ image = self.vae.decode(latents, return_dict=False)[0]
366
+ image = (image / 2 + 0.5).clamp(0, 1)
367
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
368
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
369
+ return image
370
+
371
+ def prepare_extra_step_kwargs(self, generator, eta):
372
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
373
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
374
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
375
+ # and should be between [0, 1]
376
+
377
+ accepts_eta = "eta" in set(
378
+ inspect.signature(self.scheduler.step).parameters.keys()
379
+ )
380
+ extra_step_kwargs = {}
381
+ if accepts_eta:
382
+ extra_step_kwargs["eta"] = eta
383
+
384
+ # check if the scheduler accepts generator
385
+ accepts_generator = "generator" in set(
386
+ inspect.signature(self.scheduler.step).parameters.keys()
387
+ )
388
+ if accepts_generator:
389
+ extra_step_kwargs["generator"] = generator
390
+ return extra_step_kwargs
391
+
392
+ def check_inputs(
393
+ self,
394
+ prompt,
395
+ height,
396
+ width,
397
+ negative_prompt=None,
398
+ prompt_embeds=None,
399
+ negative_prompt_embeds=None,
400
+ ):
401
+ if height % 8 != 0 or width % 8 != 0:
402
+ raise ValueError(
403
+ f"`height` and `width` have to be divisible by 8 but are {height} and {width}."
404
+ )
405
+
406
+ if prompt is not None and prompt_embeds is not None:
407
+ raise ValueError(
408
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
409
+ " only forward one of the two."
410
+ )
411
+ elif prompt is None and prompt_embeds is None:
412
+ raise ValueError(
413
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
414
+ )
415
+ elif prompt is not None and (
416
+ not isinstance(prompt, str) and not isinstance(prompt, list)
417
+ ):
418
+ raise ValueError(
419
+ f"`prompt` has to be of type `str` or `list` but is {type(prompt)}"
420
+ )
421
+
422
+ if negative_prompt is not None and negative_prompt_embeds is not None:
423
+ raise ValueError(
424
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
425
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
426
+ )
427
+
428
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
429
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
430
+ raise ValueError(
431
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
432
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
433
+ f" {negative_prompt_embeds.shape}."
434
+ )
435
+
436
+ def prepare_latents(
437
+ self,
438
+ batch_size,
439
+ num_channels_latents,
440
+ height,
441
+ width,
442
+ dtype,
443
+ device,
444
+ generator,
445
+ latents=None,
446
+ ):
447
+ shape = (
448
+ batch_size,
449
+ num_channels_latents,
450
+ height // self.vae_scale_factor,
451
+ width // self.vae_scale_factor,
452
+ )
453
+ if isinstance(generator, list) and len(generator) != batch_size:
454
+ raise ValueError(
455
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
456
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
457
+ )
458
+
459
+ if latents is None:
460
+ latents = randn_tensor(
461
+ shape, generator=generator, device=device, dtype=dtype
462
+ )
463
+ else:
464
+ latents = latents.to(device)
465
+
466
+ # scale the initial noise by the standard deviation required by the scheduler
467
+ latents = latents * self.scheduler.init_noise_sigma
468
+ return latents
469
+
470
+ def enable_freeu(self, s1: float, s2: float, b1: float, b2: float):
471
+ r"""Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497.
472
+
473
+ The suffixes after the scaling factors represent the stages where they are being applied.
474
+
475
+ Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of the values
476
+ that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.
477
+
478
+ Args:
479
+ s1 (`float`):
480
+ Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
481
+ mitigate "oversmoothing effect" in the enhanced denoising process.
482
+ s2 (`float`):
483
+ Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
484
+ mitigate "oversmoothing effect" in the enhanced denoising process.
485
+ b1 (`float`): Scaling factor for stage 1 to amplify the contributions of backbone features.
486
+ b2 (`float`): Scaling factor for stage 2 to amplify the contributions of backbone features.
487
+ """
488
+ if not hasattr(self, "unet"):
489
+ raise ValueError("The pipeline must have `unet` for using FreeU.")
490
+ self.unet.enable_freeu(s1=s1, s2=s2, b1=b1, b2=b2)
491
+
492
+ def disable_freeu(self):
493
+ """Disables the FreeU mechanism if enabled."""
494
+ self.unet.disable_freeu()
495
+
496
+ # Copied from diffusers.pipelines.latent_consistency_models.pipeline_latent_consistency_text2img.LatentConsistencyModelPipeline.get_guidance_scale_embedding
497
+ def get_guidance_scale_embedding(self, w, embedding_dim=512, dtype=torch.float32):
498
+ """
499
+ See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
500
+
501
+ Args:
502
+ timesteps (`torch.Tensor`):
503
+ generate embedding vectors at these timesteps
504
+ embedding_dim (`int`, *optional*, defaults to 512):
505
+ dimension of the embeddings to generate
506
+ dtype:
507
+ data type of the generated embeddings
508
+
509
+ Returns:
510
+ `torch.FloatTensor`: Embedding vectors with shape `(len(timesteps), embedding_dim)`
511
+ """
512
+ assert len(w.shape) == 1
513
+ w = w * 1000.0
514
+
515
+ half_dim = embedding_dim // 2
516
+ emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1)
517
+ emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb)
518
+ emb = w.to(dtype)[:, None] * emb[None, :]
519
+ emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
520
+ if embedding_dim % 2 == 1: # zero pad
521
+ emb = torch.nn.functional.pad(emb, (0, 1))
522
+ assert emb.shape == (w.shape[0], embedding_dim)
523
+ return emb
524
+
525
+ # def patch image
526
+ def patch_image(
527
+ self,
528
+ image: torch.FloatTensor,
529
+ patch_size: int,
530
+ overlap: float = 0.5,
531
+ ) -> torch.FloatTensor:
532
+ r"""
533
+ Patch the input image into smaller patches.
534
+
535
+ Args:
536
+ image (`torch.Tensor`):
537
+ The input image tensor to be patched. The tensor should have shape `(B, C, H, W)`.
538
+ patch_size (`int`):
539
+ The size of the patch.
540
+ overlap (`float`, *optional*, defaults to `0.5`):
541
+ The overlap between patches.
542
+
543
+ Returns:
544
+ `torch.Tensor`:
545
+ The patched image tensor.
546
+ """
547
+ # Get the number of channels
548
+ B, C, H, W = image.shape
549
+
550
+ # Calculate the stride for unfolding
551
+ stride = int(patch_size * (1 - overlap))
552
+
553
+ # Calculate required padding for height and width
554
+ pad_height = (H - patch_size) % stride
555
+ pad_width = (W - patch_size) % stride
556
+
557
+ # Adjust padding to fully cover the image dimensions
558
+ if pad_height > 0:
559
+ pad_height = stride - pad_height
560
+ if pad_width > 0:
561
+ pad_width = stride - pad_width
562
+
563
+ # Apply padding symmetrically to the bottom and right sides
564
+ image = F.pad(image, (0, pad_width, 0, pad_height), mode="circular", value=0)
565
+ H_padded, W_padded = image.shape[-2:]
566
+
567
+ # Unfold the padded image tensor into patches
568
+ image = image.unfold(2, patch_size, stride).unfold(3, patch_size, stride)
569
+
570
+ image = image.permute(0, 2, 3, 1, 4, 5)
571
+ image = image.reshape(-1, C, patch_size, patch_size)
572
+ return image, (H_padded, W_padded)
573
+
574
+ # def unpatch image with overlap
575
+ def unpatch_image(
576
+ self,
577
+ patches: torch.FloatTensor,
578
+ batch_size: int,
579
+ output_size: Tuple[int, int],
580
+ patch_size: int,
581
+ crop_size: Optional[Tuple[int, int]] = None,
582
+ overlap: float = 0.25,
583
+ ) -> torch.FloatTensor:
584
+ """
585
+ Reconstruct the original image from its patches using fold, averaging the overlaps.
586
+
587
+ Args:
588
+ patches (torch.Tensor): The patches of the image with shape `(B, C, H, W)`,
589
+ where `B` is the effective batch size (number of patches),
590
+ `C` is the channel depth, and `H`, `W` are the patch height and width.
591
+ batch_size (int): The effective batch size (number of patches).
592
+ output_size (tuple): The height and width of the original image before patching.
593
+ patch_size (int): The height and width of each patch (assuming square patches).
594
+ crop_size (tuple, *optional*): The height and width of the cropped image.
595
+ overlap (`float`, *optional*, defaults to `0.25`):
596
+ The overlap between patches.
597
+
598
+ Returns:
599
+ torch.Tensor: The reconstructed images of shape `(B, C, H, W)`.
600
+ """
601
+ # Set crop size if not provided
602
+ if crop_size is None:
603
+ crop_size = output_size
604
+
605
+ # Calculate the stride for folding
606
+ stride = int(patch_size * (1 - overlap))
607
+
608
+ # Calculate the number of patches per image
609
+ num_patches_per_image = patches.shape[0] // batch_size
610
+
611
+ patches = patches.view(
612
+ batch_size, num_patches_per_image, patches.shape[1], patch_size, patch_size
613
+ )
614
+ patches = patches.permute(0, 2, 3, 4, 1).contiguous()
615
+ patches = patches.view(
616
+ batch_size, patches.shape[1] * patch_size * patch_size, -1
617
+ )
618
+
619
+ # Use fold to reconstruct the images
620
+ reconstructed = F.fold(
621
+ patches, output_size=output_size, kernel_size=patch_size, stride=stride
622
+ )
623
+
624
+ # For averaging the overlaps, create a tensor of ones and fold it
625
+ mask = torch.ones_like(patches)
626
+ mask = F.fold(
627
+ mask, output_size=output_size, kernel_size=patch_size, stride=stride
628
+ )
629
+
630
+ # Average the accumulated values in the overlaps
631
+ reconstructed /= mask
632
+
633
+ # Crop the reconstructed image to the desired size
634
+ reconstructed = reconstructed[..., : crop_size[0], : crop_size[1]]
635
+
636
+ return reconstructed
637
+
638
+ @property
639
+ def guidance_scale(self):
640
+ return self._guidance_scale
641
+
642
+ @property
643
+ def guidance_rescale(self):
644
+ return self._guidance_rescale
645
+
646
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
647
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
648
+ # corresponds to doing no classifier free guidance.
649
+ @property
650
+ def do_classifier_free_guidance(self):
651
+ return self._guidance_scale > 1 and self.unet.config.time_cond_proj_dim is None
652
+
653
+ @property
654
+ def cross_attention_kwargs(self):
655
+ return self._cross_attention_kwargs
656
+
657
+ @property
658
+ def num_timesteps(self):
659
+ return self._num_timesteps
660
+
661
+ @property
662
+ def interrupt(self):
663
+ return self._interrupt
664
+
665
+ @torch.no_grad()
666
+ # @replace_example_docstring(EXAMPLE_DOC_STRING)
667
+ def __call__(
668
+ self,
669
+ prompt: Union[
670
+ str, List[str], PipelineImageInput, List[PipelineImageInput]
671
+ ] = None,
672
+ height: Optional[int] = None,
673
+ width: Optional[int] = None,
674
+ tileable: bool = True,
675
+ patched: bool = True,
676
+ num_inference_steps: int = 50,
677
+ timesteps: List[int] = None,
678
+ guidance_scale: float = 7.5,
679
+ negative_prompt: Optional[Union[str, List[str]]] = None,
680
+ num_images_per_prompt: Optional[int] = 1,
681
+ eta: float = 0.0,
682
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
683
+ latents: Optional[torch.FloatTensor] = None,
684
+ prompt_embeds: Optional[torch.FloatTensor] = None,
685
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
686
+ output_type: Optional[str] = "pil",
687
+ return_dict: bool = True,
688
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
689
+ guidance_rescale: float = 0.0,
690
+ **kwargs,
691
+ ):
692
+
693
+ # 0. Default height and width to unet
694
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
695
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
696
+
697
+ # 1. Check inputs. Raise error if not correct
698
+ self.check_inputs(
699
+ prompt,
700
+ height,
701
+ width,
702
+ negative_prompt,
703
+ prompt_embeds,
704
+ negative_prompt_embeds,
705
+ )
706
+
707
+ self._guidance_scale = guidance_scale
708
+ self._guidance_rescale = guidance_rescale
709
+ self._cross_attention_kwargs = cross_attention_kwargs
710
+ self._interrupt = False
711
+
712
+ # 2. Define call parameters
713
+ if prompt is not None and (
714
+ isinstance(prompt, str) or isinstance(prompt, Image.Image)
715
+ ):
716
+ batch_size = 1
717
+ elif prompt is not None and isinstance(prompt, list):
718
+ batch_size = len(prompt)
719
+ else:
720
+ batch_size = prompt_embeds.shape[0]
721
+
722
+ device = self._execution_device
723
+
724
+ # 3. Encode input prompt
725
+ prompt_embeds, negative_prompt_embeds = self.encode_prompt(
726
+ prompt,
727
+ device,
728
+ num_images_per_prompt,
729
+ self.do_classifier_free_guidance,
730
+ negative_prompt,
731
+ prompt_embeds=prompt_embeds,
732
+ negative_prompt_embeds=negative_prompt_embeds,
733
+ )
734
+
735
+ # For classifier free guidance, we need to do two forward passes.
736
+ # Here we concatenate the unconditional and text embeddings into a single batch
737
+ # to avoid doing two forward passes
738
+ if self.do_classifier_free_guidance:
739
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
740
+
741
+ # 4. Prepare timesteps
742
+ timesteps, num_inference_steps = retrieve_timesteps(
743
+ self.scheduler, num_inference_steps, device, timesteps
744
+ )
745
+
746
+ # 5. Prepare latent variables
747
+ num_channels_latents = self.unet.config.in_channels
748
+ latents = self.prepare_latents(
749
+ batch_size * num_images_per_prompt,
750
+ num_channels_latents,
751
+ height,
752
+ width,
753
+ prompt_embeds.dtype,
754
+ device,
755
+ generator,
756
+ latents,
757
+ )
758
+
759
+ # 6. Prepare extra step kwargs.
760
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
761
+
762
+ # 6.2 Optionally get Guidance Scale Embedding
763
+ timestep_cond = None
764
+ if self.unet.config.time_cond_proj_dim is not None:
765
+ guidance_scale_tensor = torch.tensor(self.guidance_scale - 1).repeat(
766
+ batch_size * num_images_per_prompt
767
+ )
768
+ timestep_cond = self.get_guidance_scale_embedding(
769
+ guidance_scale_tensor, embedding_dim=self.unet.config.time_cond_proj_dim
770
+ ).to(device=device, dtype=latents.dtype)
771
+
772
+ # 7. Denoising loop
773
+ self._num_timesteps = len(timesteps)
774
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
775
+ for i, t in enumerate(timesteps):
776
+ if self.interrupt:
777
+ continue
778
+
779
+ # If patched diffusion
780
+ if patched:
781
+ B = latents.shape[0]
782
+ # patch the latents
783
+ latents, size_padded = self.patch_image(
784
+ latents, patch_size=32, overlap=0.0
785
+ )
786
+ # TODO: Improve prompt repeat when patching
787
+ Bp = latents.shape[0]
788
+ if prompt_embeds.shape[0] != Bp * 2:
789
+ prompt_embeds = prompt_embeds.repeat_interleave(Bp // B, dim=0)
790
+
791
+ # expand the latents if we are doing classifier free guidance
792
+ latent_model_input = (
793
+ torch.cat([latents] * 2)
794
+ if self.do_classifier_free_guidance
795
+ else latents
796
+ )
797
+ latent_model_input = self.scheduler.scale_model_input(
798
+ latent_model_input, t
799
+ )
800
+
801
+ # predict the noise residual
802
+ noise_pred = self.unet(
803
+ latent_model_input,
804
+ t,
805
+ encoder_hidden_states=prompt_embeds,
806
+ timestep_cond=timestep_cond,
807
+ cross_attention_kwargs=self.cross_attention_kwargs,
808
+ return_dict=False,
809
+ )[0]
810
+
811
+ # perform guidance
812
+ if self.do_classifier_free_guidance:
813
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
814
+ noise_pred = noise_pred_uncond + self.guidance_scale * (
815
+ noise_pred_text - noise_pred_uncond
816
+ )
817
+
818
+ if self.do_classifier_free_guidance and self.guidance_rescale > 0.0:
819
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
820
+ noise_pred = rescale_noise_cfg(
821
+ noise_pred,
822
+ noise_pred_text,
823
+ guidance_rescale=self.guidance_rescale,
824
+ )
825
+
826
+ # compute the previous noisy sample x_t -> x_t-1
827
+ latents = self.scheduler.step(
828
+ noise_pred, t, latents, **extra_step_kwargs, return_dict=False
829
+ )[0]
830
+
831
+ if patched:
832
+ # unpatch the latents
833
+ latents = self.unpatch_image(
834
+ latents, B, size_padded, patch_size=32, overlap=0.0
835
+ )
836
+
837
+ # noise rolling, baby!
838
+ # Based on 5.1. in https://arxiv.org/pdf/2309.01700.pdf
839
+ if tileable:
840
+ roll_h = torch.randint(0, height, (1,)).item()
841
+ roll_w = torch.randint(0, width, (1,)).item()
842
+ latents = torch.roll(latents, shifts=(roll_h, roll_w), dims=(2, 3))
843
+
844
+ # call the callback, if provided
845
+ if i == len(timesteps) - 1 or (i + 1) % self.scheduler.order == 0:
846
+ progress_bar.update()
847
+
848
+ if not output_type == "latent":
849
+ if tileable:
850
+ # decode padded latent to preserve tileability
851
+ l_height = height // self.vae_scale_factor
852
+ l_width = width // self.vae_scale_factor
853
+ latents = TF.center_crop(
854
+ latents.repeat(1, 1, 3, 3), (l_height + 4, l_width + 4)
855
+ )
856
+
857
+ # decode the latents
858
+ image = self.vae.decode(
859
+ latents / self.vae.config.scaling_factor,
860
+ return_dict=False,
861
+ generator=generator,
862
+ )[0]
863
+
864
+ # crop to original size
865
+ image = TF.center_crop(image, (height, width))
866
+ else:
867
+ image = latents
868
+
869
+ image = postprocess(image, output_type=output_type)
870
+
871
+ # Offload all models
872
+ self.maybe_free_model_hooks()
873
+
874
+ if not return_dict:
875
+ return image
876
+
877
+ return MatForgerPipelineOutput(images=image)
prompt_encoder/config.json ADDED
@@ -0,0 +1,4 @@
{
  "_class_name": "MaterialPromptEncoder",
  "_diffusers_version": "0.26.3"
}
prompt_encoder/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:46470236d5adee6faf6bf0011dc6f67a0ece61041ed342b57301db02d5c58ff7
size 1710544320
prompt_encoder/encoder.py ADDED
@@ -0,0 +1,80 @@
from typing import List, Optional

from diffusers.configuration_utils import ConfigMixin
from diffusers.models.modeling_utils import ModelMixin
from PIL import Image
from transformers import (
    AutoProcessor,
    AutoTokenizer,
    CLIPTextModelWithProjection,
    CLIPVisionModelWithProjection,
)


class BasePromptEncoder(ModelMixin, ConfigMixin):
    def __init__(self):
        super().__init__()

    def encode_text(self, text):
        raise NotImplementedError

    def encode_image(self, image):
        raise NotImplementedError

    def forward(
        self,
        prompt,
        negative_prompt=None,
    ):
        raise NotImplementedError


class MaterialPromptEncoder(BasePromptEncoder):
    def __init__(self):
        super().__init__()

        self.processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")
        self.tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-large-patch14")
        self.clip_vision = CLIPVisionModelWithProjection.from_pretrained(
            "openai/clip-vit-large-patch14"
        )
        self.clip_text = CLIPTextModelWithProjection.from_pretrained(
            "openai/clip-vit-large-patch14"
        )

    def encode_text(self, text):
        # Tokenize and project the text prompt with the CLIP text encoder.
        inputs = self.tokenizer(text, padding=True, return_tensors="pt")
        inputs["input_ids"] = inputs["input_ids"].to(self.device)
        inputs["attention_mask"] = inputs["attention_mask"].to(self.device)
        outputs = self.clip_text(**inputs)
        return outputs.text_embeds.unsqueeze(1)

    def encode_image(self, image):
        # Preprocess and project the image prompt with the CLIP vision encoder.
        inputs = self.processor(images=image, return_tensors="pt")
        inputs["pixel_values"] = inputs["pixel_values"].to(self.device)
        outputs = self.clip_vision(**inputs)
        return outputs.image_embeds.unsqueeze(1)

    def encode_prompt(
        self,
        prompt,
    ):
        # Dispatch on the prompt type: strings go through the text encoder,
        # PIL images through the vision encoder.
        sample = prompt[0] if isinstance(prompt, list) else prompt

        if isinstance(sample, str):
            return self.encode_text(prompt)
        elif isinstance(sample, Image.Image):
            return self.encode_image(prompt)
        else:
            raise NotImplementedError

    def forward(
        self,
        prompt,
        negative_prompt=None,
    ):
        prompt = self.encode_prompt(prompt)
        if negative_prompt is not None:
            negative_prompt = self.encode_prompt(negative_prompt)
        return prompt, negative_prompt
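
As a quick illustration of the dual conditioning, the encoder above maps either a string or a `PIL.Image` to a single CLIP projection embedding. A minimal sketch, assuming `pipe` is the MatForger pipeline loaded as in the README (so the encoder is available as `pipe.prompt_encoder`) and reusing the README's example image file name:

```python
from PIL import Image

# Both calls return a tensor of shape (batch, 1, 768): CLIP ViT-L/14 projection
# embeddings, matching the UNet's cross_attention_dim of 768.
text_emb = pipe.prompt_encoder.encode_prompt("terracotta brick wall")
image_emb = pipe.prompt_encoder.encode_prompt(Image.open("bricks.png"))
print(text_emb.shape, image_emb.shape)
```
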
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,21 @@
{
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.26.3",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": false,
  "clip_sample_range": 1.0,
  "dynamic_thresholding_ratio": 0.995,
  "interpolation_type": "linear",
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "sample_max_value": 1.0,
  "set_alpha_to_one": false,
  "skip_prk_steps": true,
  "steps_offset": 1,
  "thresholding": false,
  "timestep_spacing": "leading",
  "trained_betas": null
}
unet/config.json ADDED
@@ -0,0 +1,67 @@
{
  "_class_name": "UNet2DConditionModel",
  "_diffusers_version": "0.26.3",
  "act_fn": "silu",
  "addition_embed_type": null,
  "addition_embed_type_num_heads": 64,
  "addition_time_embed_dim": null,
  "attention_head_dim": 8,
  "attention_type": "default",
  "block_out_channels": [
    256,
    512,
    1024,
    1024
  ],
  "center_input_sample": false,
  "class_embed_type": null,
  "class_embeddings_concat": false,
  "conv_in_kernel": 3,
  "conv_out_kernel": 3,
  "cross_attention_dim": 768,
  "cross_attention_norm": null,
  "down_block_types": [
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "DownBlock2D"
  ],
  "downsample_padding": 1,
  "dropout": 0.0,
  "dual_cross_attention": false,
  "encoder_hid_dim": null,
  "encoder_hid_dim_type": null,
  "flip_sin_to_cos": true,
  "freq_shift": 0,
  "in_channels": 18,
  "layers_per_block": 2,
  "mid_block_only_cross_attention": null,
  "mid_block_scale_factor": 1,
  "mid_block_type": "UNetMidBlock2DCrossAttn",
  "norm_eps": 1e-05,
  "norm_num_groups": 32,
  "num_attention_heads": null,
  "num_class_embeds": null,
  "only_cross_attention": false,
  "out_channels": 18,
  "projection_class_embeddings_input_dim": null,
  "resnet_out_scale_factor": 1.0,
  "resnet_skip_time_act": false,
  "resnet_time_scale_shift": "default",
  "reverse_transformer_layers_per_block": null,
  "sample_size": 64,
  "time_cond_proj_dim": null,
  "time_embedding_act_fn": null,
  "time_embedding_dim": null,
  "time_embedding_type": "positional",
  "timestep_post_act": null,
  "transformer_layers_per_block": 1,
  "up_block_types": [
    "UpBlock2D",
    "CrossAttnUpBlock2D",
    "CrossAttnUpBlock2D",
    "CrossAttnUpBlock2D"
  ],
  "upcast_attention": false,
  "use_linear_projection": false
}
unet/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67f2d6cf5583e3fc2559fbc587a9f01d09db8765a5b1ef38dc5d34e436b82143
size 2213282432
vae/config.json ADDED
@@ -0,0 +1,31 @@
{
  "_class_name": "AutoencoderKL",
  "_diffusers_version": "0.26.3",
  "act_fn": "silu",
  "block_out_channels": [
    128,
    256,
    512,
    512
  ],
  "down_block_types": [
    "DownEncoderBlock2D",
    "DownEncoderBlock2D",
    "DownEncoderBlock2D",
    "DownEncoderBlock2D"
  ],
  "force_upcast": true,
  "in_channels": 9,
  "latent_channels": 18,
  "layers_per_block": 2,
  "norm_num_groups": 32,
  "out_channels": 9,
  "sample_size": 512,
  "scaling_factor": 0.18215,
  "up_block_types": [
    "UpDecoderBlock2D",
    "UpDecoderBlock2D",
    "UpDecoderBlock2D",
    "UpDecoderBlock2D"
  ]
}
vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79ef4da42cd429353b034bf07cfc7cff98c9214421e9c2291c0a945a17afc290
size 335479204