schirrmacher committed on
Commit 54e7ec8 · verified · 1 Parent(s): 4810ae5

Upload ./README.md with huggingface_hub

Files changed (1):
  1. README.md +8 -3
README.md CHANGED
@@ -9,13 +9,13 @@ pretty_name: Human Segmentation Dataset
 
 [>>> Download Here <<<](https://drive.google.com/drive/folders/1K1lK6nSoaQ7PLta-bcfol3XSGZA1b9nt?usp=drive_link)
 
-This dataset was created **for developing the best fully open-source background remover** for images with humans. It was crafted with [LayerDiffuse](https://github.com/layerdiffusion/LayerDiffuse), a Stable Diffusion extension for generating transparent images.
+This dataset was created **for developing the best fully open-source background remover** for images with humans. It was crafted with [LayerDiffuse](https://github.com/layerdiffusion/LayerDiffuse), a Stable Diffusion extension for generating transparent images. After creating segmented humans, [IC-Light](https://github.com/lllyasviel/IC-Light) was used to embed them into realistic scenes.
 
 The dataset covers a diverse set of segmented humans: various skin tones, clothes, hairstyles, etc. Since Stable Diffusion is not perfect, the dataset contains images with flaws. Still, the dataset is good enough for training background-remover models.
 
 It contains transparent images of humans (`/humans`) which are randomly combined with backgrounds (`/backgrounds`) by an augmentation script.
 
-I created more than 5,000 images with people and more than 5,000 diverse backgrounds.
+I created more than 7,000 images with people and diverse backgrounds.
 
 # Create Training Dataset
 
@@ -44,10 +44,15 @@ I had some trouble with the Hugging Face file upload. This is why you can find t
 
 Synthetic datasets have limitations for achieving great segmentation results, because artificial lighting, occlusion, scale, or backgrounds create a gap between synthetic and real images. A "model trained solely on synthetic data generated with naïve domain randomization struggles to generalize on the real domain"; see [PEOPLESANSPEOPLE: A Synthetic Data Generator for Human-Centric Computer Vision (2022)](https://arxiv.org/pdf/2112.09290). However, hybrid training approaches seem promising and can even improve segmentation results.
 
-Currently I am researching how to close this gap with the resources I have. Some approaches take human pose into account to improve segmentation results; see [Cross-Domain Complementary Learning Using Pose for Multi-Person Part Segmentation (2019)](https://arxiv.org/pdf/1907.05193).
+Currently I am researching how to close this gap. The latest approach is to create segmented humans with [LayerDiffuse](https://github.com/layerdiffusion/LayerDiffuse) and then apply [IC-Light](https://github.com/lllyasviel/IC-Light) to create realistic light effects and shadows.
 
 # Changelog
 
+### 08.06.2024
+
+- Applied [IC-Light](https://github.com/lllyasviel/IC-Light) to segmented data
+- Added higher rotation angle to the augmentation transformation
+
 ### 28.05.2024
 
 - Reduced blur, because it leads to blurred edges in results
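The augmentation step the README describes — randomly combining transparent `/humans` cutouts with `/backgrounds`, including the rotation mentioned in the changelog — could look roughly like the following minimal Pillow sketch. The function name, parameters, and mask handling are illustrative assumptions, not taken from the actual script:

```python
import random
from PIL import Image

def composite_human_on_background(human_rgba, background_rgb, max_angle=30):
    """Paste a transparent human cutout onto a background at a random
    position and rotation; return the RGB training image and its mask."""
    bg = background_rgb.convert("RGBA")
    # Random rotation; expand=True keeps the rotated cutout fully visible,
    # and the new corner pixels are filled with transparency.
    angle = random.uniform(-max_angle, max_angle)
    human = human_rgba.rotate(angle, expand=True)
    # Random placement that keeps the cutout inside the background.
    x = random.randint(0, max(bg.width - human.width, 0))
    y = random.randint(0, max(bg.height - human.height, 0))
    bg.alpha_composite(human, (x, y))
    # The cutout's alpha channel doubles as the segmentation ground truth.
    mask = Image.new("L", bg.size, 0)
    mask.paste(human.getchannel("A"), (x, y))
    return bg.convert("RGB"), mask

# Example with synthetic stand-in images:
human = Image.new("RGBA", (64, 64), (255, 0, 0, 255))
background = Image.new("RGB", (256, 256), (20, 20, 20))
image, mask = composite_human_on_background(human, background)
```

Pairing each composite with the alpha-derived mask is what makes the result directly usable as (input, target) pairs for a background-remover model.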