schirrmacher committed
Commit 9c206a9 • 1 Parent(s): d30ccb8
Upload folder using huggingface_hub
Browse files:
- .gitignore +6 -0
- README.md +21 -13
- backgrounds/background01.png +3 -0
- backgrounds/background02.png +3 -0
- backgrounds/background03.png +3 -0
- create_dataset.sh +36 -0
- humans/example01.png +3 -0
- humans/example02.png +3 -0
- humans/example03.png +3 -0
- requirements.txt +2 -0
- util/merge_images.py +265 -0
.gitignore ADDED
@@ -0,0 +1,6 @@
+gt/*
+im/*
+out/*
+training/*
+validation/*
+dataset/*
README.md CHANGED
@@ -1,33 +1,41 @@
 ---
 license: apache-2.0
 tags:
 - art
 pretty_name: Human Segmentation Dataset
 ---
+
 # Human Segmentation Dataset
 
 This dataset was created **for developing the best fully open-source background remover** of images with humans.
 The dataset was crafted with [LayerDiffuse](https://github.com/layerdiffusion/LayerDiffuse), a Stable Diffusion extension for generating transparent images.
 
+The dataset covers a diverse set of humans: various skin tones, clothes, hairstyles, etc.
+Since Stable Diffusion is not perfect, the dataset contains images with flaws. Still, the dataset is good enough for training background remover models.
+
 The resulting model will be similar to [RMBG-1.4](https://huggingface.co/briaai/RMBG-1.4), but with open training data/process and commercially free to use.
 
-Then the ground truth (`/gt`) for segmentation was computed based on the transparent images. The results are written to a training and validation dataset.
+I had some trouble with the Hugging Face file upload. You can find the data here: [GDrive](https://drive.google.com/drive/folders/1K1lK6nSoaQ7PLta-bcfol3XSGZA1b9nt?usp=drive_link)
+
+The dataset contains transparent images of humans (`/humans`) which are randomly combined with backgrounds (`/backgrounds`). Then the ground truth (`/gt`) for segmentation is computed based on the transparent images. The results are written to a training and a validation dataset.
+
+I created more than 5,000 images with people and more than 5,000 diverse backgrounds.
+
+# Create Training Dataset
+
+The following script creates the training and validation data. The data is augmented in the process.
+
+Notice: first download the dataset from [GDrive](https://drive.google.com/drive/folders/1K1lK6nSoaQ7PLta-bcfol3XSGZA1b9nt?usp=drive_link).
+
+```
+./create_dataset.sh
+```
 
 # Support
 
 If you identify weaknesses in the data, please contact me.
-I had some trouble with this huge file upload on huggingface. If files are missing use: [GDrive Download: 61.1 GB](https://drive.google.com/drive/folders/1K1lK6nSoaQ7PLta-bcfol3XSGZA1b9nt?usp=drive_link)
-
-# Examples
-
-![](training/gt/aiznxclmqmkvi_tmpzjukj8v6.png)
+
+# Changelog
+
+- Added more diverse backgrounds (natural landscapes, streets, houses)
+- Added more close-up images
backgrounds/background01.png ADDED (Git LFS)
backgrounds/background02.png ADDED (Git LFS)
backgrounds/background03.png ADDED (Git LFS)
create_dataset.sh ADDED
@@ -0,0 +1,36 @@
+#!/bin/bash
+
+random_merge() {
+    local backgrounds_dir="backgrounds"
+    local overlays_dir="humans"
+
+    local image_path="$1"
+    local groundtruth_path="$2"
+
+    background=$(find "$backgrounds_dir" -type f | shuf -n 1)
+    overlay=$(find "$overlays_dir" -type f | shuf -n 1)
+    echo "Processing iteration $i: $overlay + $background"
+    python3 "util/merge_images.py" \
+        -b "$background" -o "$overlay" \
+        -gt "$groundtruth_path" -im "$image_path"
+}
+
+main() {
+    local max_iterations=2000
+    for ((i = 0; i <= max_iterations; i++)); do
+        # Some parallelization for quicker creation
+        # Notice: the last call of each iteration is for the validation set
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/training/im dataset/training/gt &
+        random_merge dataset/validation/im dataset/validation/gt
+    done
+}
+
+main
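Note the script's design: nine training merges per iteration are forked with `&`, while the final validation merge runs in the foreground and so throttles each loop pass. For illustration, the same per-call pairing logic as plain Python (not part of the commit; assumes the flat `backgrounds/` and `humans/` folders from this repo):

```
import os
import random
import subprocess

# Pick one random background and one random human overlay,
# equivalent to the script's `find ... | shuf -n 1` pipeline
background = os.path.join("backgrounds", random.choice(os.listdir("backgrounds")))
overlay = os.path.join("humans", random.choice(os.listdir("humans")))

# Delegate the actual merge to the repo's script
subprocess.run(
    ["python3", "util/merge_images.py",
     "-b", background, "-o", overlay,
     "-gt", "dataset/training/gt", "-im", "dataset/training/im"],
    check=True,
)
```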
humans/example01.png ADDED (Git LFS)
humans/example02.png ADDED (Git LFS)
humans/example03.png ADDED (Git LFS)
requirements.txt ADDED
@@ -0,0 +1,2 @@
+albumentations==1.4.6
+opencv-python==4.9.0.80
util/merge_images.py ADDED
@@ -0,0 +1,265 @@
+import os
+import cv2
+import argparse
+import random
+import string
+import albumentations as A
+
+
+def apply_scale_and_move(image):
+    transform = A.Compose(
+        [
+            A.HorizontalFlip(p=0.5),
+            A.ShiftScaleRotate(
+                shift_limit_x=(-0.3, 0.3),
+                shift_limit_y=(0.0, 0.2),
+                scale_limit=(1.0, 1.5),
+                border_mode=cv2.BORDER_CONSTANT,
+                rotate_limit=(-3, 3),
+                p=0.7,
+            ),
+        ]
+    )
+    return transform(image=image)["image"]
+
+
+def apply_transform(image):
+    has_alpha = image.shape[2] == 4
+    if has_alpha:
+        alpha_channel = image[:, :, 3]
+        color_channels = image[:, :, :3]
+    else:
+        color_channels = image
+
+    # Define the transformation
+    transform = A.Compose(
+        [
+            A.RandomBrightnessContrast(
+                brightness_limit=(-0.1, 0.1), contrast_limit=(-0.4, 0), p=0.8
+            )
+        ]
+    )
+
+    # Apply the transformation only to the color channels
+    transformed = transform(image=color_channels)
+    transformed_image = transformed["image"]
+
+    # Merge the alpha channel back if it was separated
+    if has_alpha:
+        final_image = cv2.merge(
+            (
+                transformed_image[:, :, 0],
+                transformed_image[:, :, 1],
+                transformed_image[:, :, 2],
+                alpha_channel,
+            )
+        )
+    else:
+        final_image = transformed_image
+    return final_image
+
+
+def apply_noise(image):
+    transform = A.Compose(
+        [
+            A.MotionBlur(blur_limit=(5, 11), p=1.0),
+            A.GaussNoise(var_limit=(10, 150), p=1.0),
+            A.RandomBrightnessContrast(
+                brightness_limit=(-0.1, 0.1), contrast_limit=(-0.1, 0.1), p=0.5
+            ),
+            A.RandomFog(
+                fog_coef_lower=0.05,
+                fog_coef_upper=0.2,
+                alpha_coef=0.08,
+                always_apply=False,
+                p=0.5,
+            ),
+            A.RandomShadow(
+                shadow_roi=(0, 0.5, 1, 1),
+                num_shadows_limit=(1, 2),
+                num_shadows_lower=None,
+                num_shadows_upper=None,
+                shadow_dimension=5,
+                always_apply=False,
+                p=0.5,
+            ),
+            A.RandomToneCurve(scale=0.1, always_apply=False, p=0.5),
+        ]
+    )
+    return transform(image=image)["image"]
+
+
+def remove_alpha(image, alpha_threshold=200):
+    # Zero out all pixels whose alpha falls below the threshold
+    mask = image[:, :, 3] < alpha_threshold
+    image[mask] = [0, 0, 0, 0]
+
+    return image
+
+
+def merge_images(
+    background_path, overlay_path, output_path, groundtruth_path, width, height
+):
+    letters = string.ascii_lowercase
+    random_string = "".join(random.choice(letters) for i in range(13))
+    file_name = random_string + "_" + os.path.basename(overlay_path)
+
+    # Read the background image; its own dimensions override the
+    # width/height arguments below
+    background = cv2.imread(background_path, cv2.IMREAD_COLOR)
+
+    height, width = background.shape[:2]
+
+    # Upscale the background by 1.5x
+    height = int(1.5 * height)
+    width = int(1.5 * width)
+
+    resized_background = cv2.resize(
+        background, (width, height), interpolation=cv2.INTER_AREA
+    )
+
+    # Read the overlay image with alpha channel
+    overlay = cv2.imread(overlay_path, cv2.IMREAD_UNCHANGED)
+
+    # Ensure overlay has an alpha channel
+    if overlay.shape[2] < 4:
+        raise Exception("Overlay image does not have an alpha channel.")
+
+    # Apply transformations to the overlay
+    overlay = expand_image_borders_rgba(overlay, width, height)
+    overlay = apply_scale_and_move(overlay)
+
+    # Store the ground truth
+    extract_alpha_channel_as_bw(overlay, os.path.join(groundtruth_path, file_name))
+
+    overlay = apply_transform(overlay)
+
+    # Overlay placement on the resized background
+    x_offset = (width - overlay.shape[1]) // 2
+    y_offset = (height - overlay.shape[0]) // 2
+
+    # Prevent the overlay from exceeding the background dimensions
+    x_offset = max(0, x_offset)
+    y_offset = max(0, y_offset)
+
+    # Calculate the normalized alpha mask
+    alpha_overlay = overlay[..., 3] / 255.0
+    region_of_interest = resized_background[
+        y_offset : y_offset + overlay.shape[0],
+        x_offset : x_offset + overlay.shape[1],
+        :,
+    ]
+
+    # Blend the images
+    for c in range(0, 3):
+        region_of_interest[..., c] = (
+            alpha_overlay * overlay[..., c]
+            + (1 - alpha_overlay) * region_of_interest[..., c]
+        )
+
+    resized_background[
+        y_offset : y_offset + overlay.shape[0], x_offset : x_offset + overlay.shape[1]
+    ] = region_of_interest
+
+    resized_background = apply_noise(resized_background)
+
+    cv2.imwrite(os.path.join(output_path, file_name), resized_background)
+
+
+def expand_image_borders_rgba(
+    image, final_width, final_height, border_color=(0, 0, 0, 0)
+):
+    # Check if image has an alpha channel
+    if image.shape[2] < 4:
+        raise ValueError(
+            "Loaded image does not contain an alpha channel. Make sure the input image is RGBA."
+        )
+
+    # Current dimensions
+    height, width = image.shape[:2]
+
+    # Calculate padding needed
+    top = bottom = (final_height - height) // 2
+    left = right = (final_width - width) // 2
+
+    # Handle cases where the size difference is odd
+    if (final_height - height) % 2 != 0:
+        bottom += 1
+    if (final_width - width) % 2 != 0:
+        right += 1
+
+    # Apply the border with a transparent RGBA color
+    new_image = cv2.copyMakeBorder(
+        image, top, bottom, left, right, cv2.BORDER_CONSTANT, value=border_color
+    )
+
+    return new_image
+
+
+def extract_alpha_channel_as_bw(image, output_path):
+    # Check if the image has an alpha channel
+    if image.shape[2] < 4:
+        raise ValueError(
+            "Loaded image does not contain an alpha channel. Make sure the input image is in PNG format with an alpha channel."
+        )
+
+    # Threshold, then extract the alpha channel
+    image = remove_alpha(image.copy())
+    alpha_channel = image[:, :, 3]
+    # Save the alpha channel as a black-and-white image
+    cv2.imwrite(output_path, alpha_channel)
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Merge two images with one image having transparency."
+    )
+    parser.add_argument(
+        "-b", "--background", required=True, help="Path to the background image"
+    )
+    parser.add_argument(
+        "-o", "--overlay", required=True, help="Path to the overlay image"
+    )
+    parser.add_argument(
+        "-im",
+        "--image-path",
+        type=str,
+        default="im",
+        help="Path where the merged image will be saved",
+    )
+    parser.add_argument(
+        "--width",
+        type=int,
+        default=1920,
+        help="Width to which the background image will be resized",
+    )
+    parser.add_argument(
+        "--height",
+        type=int,
+        default=1080,
+        help="Height to which the background image will be resized",
+    )
+    parser.add_argument(
+        "-gt",
+        "--groundtruth-path",
+        type=str,
+        default="gt",
+        help="Ground truth folder",
+    )
+    args = parser.parse_args()
+
+    if not os.path.exists(args.image_path):
+        os.makedirs(args.image_path)
+    if not os.path.exists(args.groundtruth_path):
+        os.makedirs(args.groundtruth_path)
+
+    merge_images(
+        args.background,
+        args.overlay,
+        args.image_path,
+        args.groundtruth_path,
+        args.width,
+        args.height,
+    )
+
+
+if __name__ == "__main__":
+    main()
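The per-channel loop in `merge_images` above is standard alpha compositing: out = alpha * overlay + (1 - alpha) * background. As a side note, the same blend can be written as a single vectorized NumPy expression (a sketch, not part of the commit):

```
import numpy as np

def alpha_blend(background_bgr: np.ndarray, overlay_bgra: np.ndarray) -> np.ndarray:
    # Composite a BGRA overlay onto a BGR background of identical size
    alpha = overlay_bgra[..., 3:4].astype(np.float64) / 255.0  # shape (H, W, 1)
    blended = alpha * overlay_bgra[..., :3] + (1.0 - alpha) * background_bgr
    return blended.astype(np.uint8)
```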