schirrmacher committed on
Commit 58ed4f8
1 Parent(s): 54e7ec8

Upload folder using huggingface_hub
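
The commit message refers to the standard `huggingface_hub` folder upload. A minimal sketch of that call, with placeholder local path and repo id (neither is taken from this page):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login`
api.upload_folder(
    folder_path="./human-segmentation-dataset",  # local folder with README.md, images, util/
    repo_id="your-username/human-segmentation-dataset",  # hypothetical dataset repo id
    repo_type="dataset",
    commit_message="Upload folder using huggingface_hub",
)
```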

README.md CHANGED
@@ -11,28 +11,21 @@ pretty_name: Human Segmentation Dataset
 
 This dataset was created **for developing the best fully open-source background remover** of images with humans. It was crafted with [LayerDiffuse](https://github.com/layerdiffusion/LayerDiffuse), a Stable Diffusion extension for generating transparent images. After creating segmented humans, [IC-Light](https://github.com/lllyasviel/IC-Light) was used for embedding them into realistic scenarios.
 
-The dataset covers a diverse set of segmented humans: various skin tones, clothes, hair styles etc. Since Stable Diffusion is not perfect, the dataset contains images with flaws. Still the dataset is good enough for training background remover models.
-
-It contains transparent images of humans (`/humans`) which are randomly combined with backgrounds (`/backgrounds`) with an augmentation script.
-
-I created more than 7.000 images with people and diverse backgrounds.
-
-# Create Training Dataset
-
-1. [Download human segmentations and backgrounds](https://drive.google.com/drive/folders/1K1lK6nSoaQ7PLta-bcfol3XSGZA1b9nt?usp=drive_link)
-
-2. Execute the following script for creating training and validation data:
-
-```
-./create_dataset.sh
-```
-
-# Examples
-
-Here you can see an augmented image and the resulting ground truth:
-
-![](example_image.png)
-![](example_ground_truth.png)
+The dataset covers a diverse set of segmented humans: various skin tones, clothes, hair styles, etc. Since Stable Diffusion is not perfect, the dataset contains images with flaws. Still, the dataset is good enough for training background remover models. I created more than 7,000 images with people and diverse backgrounds.
+
+# Examples
+
+[LayerDiffuse](https://github.com/layerdiffusion/LayerDiffuse) output:
+
+![](layer_diffuse_example.png)
+
+[IC-Light](https://github.com/lllyasviel/IC-Light) applied to the segmented image:
+
+![](final_image_example.png)
+
+Ground truth:
+
+![](ground_truth_example.png)
 
 # Support
 
final_image_example.png ADDED

Git LFS Details

  • SHA256: 8c98c81aed7b06df720a05351dfbf05b43697c5f53845c16d8faf7356b7906b0
  • Pointer size: 132 Bytes
  • Size of remote file: 1.53 MB
ground_truth_example.png ADDED

Git LFS Details

  • SHA256: 069100469dd02158b67b50008aab42d99105226ebad79ab96e3c68108bd84d0b
  • Pointer size: 130 Bytes
  • Size of remote file: 50.1 kB
layer_diffuse_example.png ADDED

Git LFS Details

  • SHA256: e803a4300beecb9e8dc7cca8effaa99e2899c878fe3d7d215a125aa5a2cfdabf
  • Pointer size: 132 Bytes
  • Size of remote file: 1.16 MB
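
The earlier README text (removed in this commit) describes transparent humans (`/humans`) being randomly combined with backgrounds (`/backgrounds`) by an augmentation script, with the alpha channel kept as the segmentation ground truth. A minimal sketch of that compositing step; the folder layout and output filenames below are illustrative assumptions, not the repository's actual `create_dataset.sh` pipeline:

```python
import random
from pathlib import Path

from PIL import Image

humans = sorted(Path("humans").glob("*.png"))        # transparent LayerDiffuse outputs
backgrounds = sorted(Path("backgrounds").glob("*"))  # hypothetical background pool

human = Image.open(random.choice(humans)).convert("RGBA")
background = Image.open(random.choice(backgrounds)).convert("RGBA").resize(human.size)

# Paste the person onto the background; the alpha channel becomes the label.
composite = Image.alpha_composite(background, human)

Path("images").mkdir(exist_ok=True)
Path("ground_truth").mkdir(exist_ok=True)
composite.convert("RGB").save("images/sample_0001.jpg")
human.getchannel("A").save("ground_truth/sample_0001.png")
```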
util/ic-light.py CHANGED
@@ -10,27 +10,46 @@ import cv2
 from diffusers.utils import load_image
 
 from PIL import Image, ImageFilter, ImageOps
-from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, StableDiffusionLatentUpscalePipeline
-from diffusers import AutoencoderKL, UNet2DConditionModel, DDIMScheduler, EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler
+from diffusers import (
+    StableDiffusionPipeline,
+    StableDiffusionImg2ImgPipeline,
+    StableDiffusionLatentUpscalePipeline,
+)
+from diffusers import (
+    AutoencoderKL,
+    UNet2DConditionModel,
+    DDIMScheduler,
+    EulerAncestralDiscreteScheduler,
+    DPMSolverMultistepScheduler,
+)
 from diffusers.models.attention_processor import AttnProcessor2_0
 from transformers import CLIPTextModel, CLIPTokenizer
 from enum import Enum
+
 # from torch.hub import download_url_to_file
 
 
 # 'stablediffusionapi/realistic-vision-v51'
 # 'runwayml/stable-diffusion-v1-5'
-sd15_name = 'stablediffusionapi/realistic-vision-v51'
+sd15_name = "stablediffusionapi/realistic-vision-v51"
 tokenizer = CLIPTokenizer.from_pretrained(sd15_name, subfolder="tokenizer")
 text_encoder = CLIPTextModel.from_pretrained(sd15_name, subfolder="text_encoder")
 vae = AutoencoderKL.from_pretrained(sd15_name, subfolder="vae")
 unet = UNet2DConditionModel.from_pretrained(sd15_name, subfolder="unet")
-upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained("stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16)
+upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
+    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
+)
 
 # Change UNet
 
 with torch.no_grad():
-    new_conv_in = torch.nn.Conv2d(8, unet.conv_in.out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding)
+    new_conv_in = torch.nn.Conv2d(
+        8,
+        unet.conv_in.out_channels,
+        unet.conv_in.kernel_size,
+        unet.conv_in.stride,
+        unet.conv_in.padding,
+    )
     new_conv_in.weight.zero_()
     new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)
     new_conv_in.bias = unet.conv_in.bias
@@ -40,10 +59,10 @@ unet_original_forward = unet.forward
 
 
 def hooked_unet_forward(sample, timestep, encoder_hidden_states, **kwargs):
-    c_concat = kwargs['cross_attention_kwargs']['concat_conds'].to(sample)
+    c_concat = kwargs["cross_attention_kwargs"]["concat_conds"].to(sample)
     c_concat = torch.cat([c_concat] * (sample.shape[0] // c_concat.shape[0]), dim=0)
     new_sample = torch.cat([sample, c_concat], dim=1)
-    kwargs['cross_attention_kwargs'] = {}
+    kwargs["cross_attention_kwargs"] = {}
     return unet_original_forward(new_sample, timestep, encoder_hidden_states, **kwargs)
 
 
@@ -51,7 +70,7 @@ unet.forward = hooked_unet_forward
 
 # Load
 
-model_path = './models/iclight_sd15_fc.safetensors'
+model_path = "./models/iclight_sd15_fc.safetensors"
 # download_url_to_file(url='https://huggingface.co/lllyasviel/ic-light/resolve/main/iclight_sd15_fc.safetensors', dst=model_path)
 sd_offset = sf.load_file(model_path)
 sd_origin = unet.state_dict()
@@ -62,7 +81,7 @@ del sd_offset, sd_origin, sd_merged, keys
 
 # Device
 
-device = torch.device('cuda')
+device = torch.device("cuda")
 text_encoder = text_encoder.to(device=device, dtype=torch.float16)
 vae = vae.to(device=device, dtype=torch.bfloat16)
 unet = unet.to(device=device, dtype=torch.float16)
@@ -85,10 +104,7 @@ ddim_scheduler = DDIMScheduler(
 )
 
 euler_a_scheduler = EulerAncestralDiscreteScheduler(
-    num_train_timesteps=1000,
-    beta_start=0.00085,
-    beta_end=0.012,
-    steps_offset=1
+    num_train_timesteps=1000, beta_start=0.00085, beta_end=0.012, steps_offset=1
 )
 
 dpmpp_2m_sde_karras_scheduler = DPMSolverMultistepScheduler(
@@ -97,7 +113,7 @@ dpmpp_2m_sde_karras_scheduler = DPMSolverMultistepScheduler(
     beta_end=0.012,
     algorithm_type="sde-dpmsolver++",
     use_karras_sigmas=True,
-    steps_offset=1
+    steps_offset=1,
 )
 
 # Pipelines
@@ -111,7 +127,7 @@ t2i_pipe = StableDiffusionPipeline(
     safety_checker=None,
     requires_safety_checker=False,
     feature_extractor=None,
-    image_encoder=None
+    image_encoder=None,
 )
 
 i2i_pipe = StableDiffusionImg2ImgPipeline(
@@ -123,7 +139,7 @@ i2i_pipe = StableDiffusionImg2ImgPipeline(
     safety_checker=None,
     requires_safety_checker=False,
     feature_extractor=None,
-    image_encoder=None
+    image_encoder=None,
 )
 
 
@@ -139,7 +155,10 @@ def encode_prompt_inner(txt: str):
         return x[:i] if len(x) >= i else x + [p] * (i - len(x))
 
     tokens = tokenizer(txt, truncation=False, add_special_tokens=False)["input_ids"]
-    chunks = [[id_start] + tokens[i: i + chunk_length] + [id_end] for i in range(0, len(tokens), chunk_length)]
+    chunks = [
+        [id_start] + tokens[i : i + chunk_length] + [id_end]
+        for i in range(0, len(tokens), chunk_length)
+    ]
     chunks = [pad(ck, id_pad, max_length) for ck in chunks]
 
     token_ids = torch.tensor(chunks).to(device=device, dtype=torch.int64)
@@ -188,7 +207,9 @@ def pytorch2numpy(imgs, quant=True):
 
 @torch.inference_mode()
 def numpy2pytorch(imgs):
-    h = torch.from_numpy(np.stack(imgs, axis=0)).float() / 127.0 - 1.0  # so that 127 must be strictly 0.0
+    h = (
+        torch.from_numpy(np.stack(imgs, axis=0)).float() / 127.0 - 1.0
+    )  # so that 127 must be strictly 0.0
     h = h.movedim(-1, 1)
     return h
 
@@ -213,14 +234,31 @@ def resize_without_crop(image, target_width, target_height):
     resized_image = pil_image.resize((target_width, target_height), Image.LANCZOS)
     return np.array(resized_image)
 
+
 def remove_alpha_threshold(image, alpha_threshold=160):
     # This function removes artifacts created by LayerDiffusion
    mask = image[:, :, 3] < alpha_threshold
     image[mask] = [0, 0, 0, 0]
     return image
 
+
 @torch.inference_mode()
-def process(input_fg, prompt, image_width, image_height, num_samples, seed, steps, a_prompt, n_prompt, cfg, highres_scale, highres_denoise, lowres_denoise, bg_source):
+def process(
+    input_fg,
+    prompt,
+    image_width,
+    image_height,
+    num_samples,
+    seed,
+    steps,
+    a_prompt,
+    n_prompt,
+    cfg,
+    highres_scale,
+    highres_denoise,
+    lowres_denoise,
+    bg_source,
+):
     bg_source = BGSource(bg_source)
     input_bg = None
 
@@ -243,56 +281,69 @@ def process(input_fg, prompt, image_width, image_height, num_samples, seed, step
         image = np.tile(gradient, (1, image_width))
         input_bg = np.stack((image,) * 3, axis=-1).astype(np.uint8)
     else:
-        raise 'Wrong initial latent!'
+        raise ValueError("Wrong initial latent!")
 
     rng = torch.Generator(device=device).manual_seed(int(seed))
 
     fg = resize_and_center_crop(input_fg, image_width, image_height)
 
     concat_conds = numpy2pytorch([fg]).to(device=vae.device, dtype=vae.dtype)
-    concat_conds = vae.encode(concat_conds).latent_dist.mode() * vae.config.scaling_factor
+    concat_conds = (
+        vae.encode(concat_conds).latent_dist.mode() * vae.config.scaling_factor
+    )
 
-    conds, unconds = encode_prompt_pair(positive_prompt=prompt + ', ' + a_prompt, negative_prompt=n_prompt)
+    conds, unconds = encode_prompt_pair(
+        positive_prompt=prompt + ", " + a_prompt, negative_prompt=n_prompt
+    )
 
     if input_bg is None:
-        latents = t2i_pipe(
-            prompt_embeds=conds,
-            negative_prompt_embeds=unconds,
-            width=image_width,
-            height=image_height,
-            num_inference_steps=steps,
-            num_images_per_prompt=num_samples,
-            generator=rng,
-            output_type='latent',
-            guidance_scale=cfg,
-            cross_attention_kwargs={'concat_conds': concat_conds},
-        ).images.to(vae.dtype) / vae.config.scaling_factor
+        latents = (
+            t2i_pipe(
+                prompt_embeds=conds,
+                negative_prompt_embeds=unconds,
+                width=image_width,
+                height=image_height,
+                num_inference_steps=steps,
+                num_images_per_prompt=num_samples,
+                generator=rng,
+                output_type="latent",
+                guidance_scale=cfg,
+                cross_attention_kwargs={"concat_conds": concat_conds},
+            ).images.to(vae.dtype)
+            / vae.config.scaling_factor
+        )
     else:
         bg = resize_and_center_crop(input_bg, image_width, image_height)
         bg_latent = numpy2pytorch([bg]).to(device=vae.device, dtype=vae.dtype)
         bg_latent = vae.encode(bg_latent).latent_dist.mode() * vae.config.scaling_factor
-        latents = i2i_pipe(
-            image=bg_latent,
-            strength=lowres_denoise,
-            prompt_embeds=conds,
-            negative_prompt_embeds=unconds,
-            width=image_width,
-            height=image_height,
-            num_inference_steps=int(round(steps / lowres_denoise)),
-            num_images_per_prompt=num_samples,
-            generator=rng,
-            output_type='latent',
-            guidance_scale=cfg,
-            cross_attention_kwargs={'concat_conds': concat_conds},
-        ).images.to(vae.dtype) / vae.config.scaling_factor
+        latents = (
+            i2i_pipe(
+                image=bg_latent,
+                strength=lowres_denoise,
+                prompt_embeds=conds,
+                negative_prompt_embeds=unconds,
+                width=image_width,
+                height=image_height,
+                num_inference_steps=int(round(steps / lowres_denoise)),
+                num_images_per_prompt=num_samples,
+                generator=rng,
+                output_type="latent",
+                guidance_scale=cfg,
+                cross_attention_kwargs={"concat_conds": concat_conds},
+            ).images.to(vae.dtype)
+            / vae.config.scaling_factor
+        )
 
     pixels = vae.decode(latents).sample
     pixels = pytorch2numpy(pixels)
-    pixels = [resize_without_crop(
-        image=p,
-        target_width=int(round(image_width * highres_scale / 64.0) * 64),
-        target_height=int(round(image_height * highres_scale / 64.0) * 64))
-        for p in pixels]
+    pixels = [
+        resize_without_crop(
+            image=p,
+            target_width=int(round(image_width * highres_scale / 64.0) * 64),
+            target_height=int(round(image_height * highres_scale / 64.0) * 64),
+        )
+        for p in pixels
+    ]
 
     pixels = numpy2pytorch(pixels).to(device=vae.device, dtype=vae.dtype)
     latents = vae.encode(pixels).latent_dist.mode() * vae.config.scaling_factor
@@ -302,22 +353,27 @@ def process(input_fg, prompt, image_width, image_height, num_samples, seed, step
 
     fg = resize_and_center_crop(input_fg, image_width, image_height)
     concat_conds = numpy2pytorch([fg]).to(device=vae.device, dtype=vae.dtype)
-    concat_conds = vae.encode(concat_conds).latent_dist.mode() * vae.config.scaling_factor
-
-    latents = i2i_pipe(
-        image=latents,
-        strength=highres_denoise,
-        prompt_embeds=conds,
-        negative_prompt_embeds=unconds,
-        width=image_width,
-        height=image_height,
-        num_inference_steps=int(round(steps / highres_denoise)),
-        num_images_per_prompt=num_samples,
-        generator=rng,
-        output_type='latent',
-        guidance_scale=cfg,
-        cross_attention_kwargs={'concat_conds': concat_conds},
-    ).images.to(vae.dtype) / vae.config.scaling_factor
+    concat_conds = (
+        vae.encode(concat_conds).latent_dist.mode() * vae.config.scaling_factor
+    )
+
+    latents = (
+        i2i_pipe(
+            image=latents,
+            strength=highres_denoise,
+            prompt_embeds=conds,
+            negative_prompt_embeds=unconds,
+            width=image_width,
+            height=image_height,
+            num_inference_steps=int(round(steps / highres_denoise)),
+            num_images_per_prompt=num_samples,
+            generator=rng,
+            output_type="latent",
+            guidance_scale=cfg,
+            cross_attention_kwargs={"concat_conds": concat_conds},
+        ).images.to(vae.dtype)
+        / vae.config.scaling_factor
+    )
 
     pixels = vae.decode(latents).sample
 
@@ -335,16 +391,18 @@ def augment(image):
     else:
         target_height, target_width = 512 * 2, 640 * 2
 
-    left_right_padding = (max(target_width, image_width) - min(target_width, image_width)) // 2
+    left_right_padding = (
+        max(target_width, image_width) - min(target_width, image_width)
+    ) // 2
 
     original = cv2.copyMakeBorder(
-        original,
-        top=max(target_height, image_height) - min(target_height, image_height),
+        original,
+        top=max(target_height, image_height) - min(target_height, image_height),
         bottom=0,
-        left=left_right_padding,
-        right=left_right_padding,
-        borderType=cv2.BORDER_CONSTANT,
-        value=(0, 0, 0)
+        left=left_right_padding,
+        right=left_right_padding,
+        borderType=cv2.BORDER_CONSTANT,
+        value=(0, 0, 0),
     )
 
     transform = A.Compose(
@@ -363,6 +421,7 @@ def augment(image):
 
     return transform(image=original)["image"]
 
+
 class BGSource(Enum):
     NONE = "None"
     LEFT = "Left Light"
@@ -380,8 +439,7 @@ prompts = [
     "sunshine, cafe, chilled",
     "exhibition, paintings",
     "beach",
-    "winter, snow"
-    "forrest, cloudy",
+    "winter, snow", "forest, cloudy",
     "party, people",
     "cozy living room, sofa, shelf",
     "mountains",
@@ -392,7 +450,7 @@ prompts = [
     "apartment, soft light",
     "garden",
     "school",
-    "art exhibition with paintings in background"
+    "art exhibition with paintings in background",
 ]
 
 os.makedirs(ground_truth_dir, exist_ok=True)
@@ -402,7 +460,9 @@ all_images = os.listdir(input_dir)
 random.shuffle(all_images)
 
 for filename in all_images:
-    if filename.lower().endswith(('.png', '.jpg', '.jpeg')):  # Check if the file is an image
+    if filename.lower().endswith(
+        (".png", ".jpg", ".jpeg")
+    ):  # Check if the file is an image
 
         letters = string.ascii_lowercase
         random_string = "".join(random.choice(letters) for i in range(13))
@@ -418,7 +478,9 @@ for filename in all_images:
         image = np.array(image)
 
         image_augmented = augment(image)
-        Image.fromarray(image_augmented).getchannel("A").save(os.path.join(ground_truth_dir, random_filename))
+        Image.fromarray(image_augmented).getchannel("A").save(
+            os.path.join(ground_truth_dir, random_filename)
+        )
 
         image_augmented = image_augmented[:, :, :3]
 
@@ -427,7 +489,7 @@ for filename in all_images:
         image_height, image_width, _ = image_augmented.shape
 
         num_samples = 1
-        seed = random.randint(1,123456789012345678901234567890)
+        seed = random.randint(1, 2**63 - 1)  # keep the seed within torch's 64-bit range
         steps = 25
         constant_prompt = "details, high quality"
         prompt = random.choice(prompts)
@@ -437,7 +499,22 @@ for filename in all_images:
         highres_denoise = 0.7
         lowres_denoise = 0.5
        bg_source = BGSource.NONE
-
-        results = process(image_augmented, constant_prompt, image_width, image_height, num_samples, seed, steps, prompt, n_prompt, cfg, highres_scale, highres_denoise, lowres_denoise, bg_source)
+
+        results = process(
+            image_augmented,
+            constant_prompt,
+            image_width,
+            image_height,
+            num_samples,
+            seed,
+            steps,
+            prompt,
+            n_prompt,
+            cfg,
+            highres_scale,
+            highres_denoise,
+            lowres_denoise,
+            bg_source,
+        )
         result_image = Image.fromarray(results[0])
-        result_image.save(os.path.join(image_dir, random_filename))
+        result_image.save(os.path.join(image_dir, random_filename))
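
The script above writes each relit image to `image_dir` and the matching alpha mask to `ground_truth_dir` under the same random filename, so a background-remover model can be trained directly on those pairs. A minimal loading sketch; the directory names, image size, and transforms are assumptions, not defined in this commit:

```python
import os

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class HumanMattingPairs(Dataset):
    """Pairs each generated image with the alpha mask saved under the same filename."""

    def __init__(self, image_dir="images", ground_truth_dir="ground_truth", size=512):
        self.image_dir = image_dir
        self.ground_truth_dir = ground_truth_dir
        self.files = sorted(os.listdir(image_dir))
        self.to_tensor = transforms.Compose(
            [transforms.Resize((size, size)), transforms.ToTensor()]
        )

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        name = self.files[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(self.ground_truth_dir, name)).convert("L")
        return self.to_tensor(image), self.to_tensor(mask)
```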