fredguth committed
Commit
b06b504
•
1 Parent(s): 5bb1e9a

added dataset description

Files changed (3)
  1. .DS_Store +0 -0
  2. .gitignore +1 -0
  3. README.md +56 -0
.DS_Store ADDED
Binary file (6.15 kB)
 
.gitignore ADDED
@@ -0,0 +1 @@
+ data/*
README.md ADDED
@@ -0,0 +1,56 @@
+ ---
+ annotations_creators:
+ - Beijing Wanxing Convergence Technology Co
+ license:
+ - mit
+ pretty_name: aisegmentcn-matting-human
+ size_categories:
+ - 10K<n<100K
+ tags:
+ - binary
+ - aisegment.cn
+ task_categories:
+ - image-segmentation
+ task_ids:
+ - semantic-segmentation
+ ---
+
+ # Dataset Card for AISegment.cn - Matting Human datasets
+
+ ## Table of Contents
+
+ - [Dataset Card for AISegment.cn - Matting Human datasets](#dataset-card-for-aisegmentcn---matting-human-datasets)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+   - [Dataset Structure](#dataset-structure)
+   - [Licensing Information](#licensing-information)
+
+ ## Dataset Description
+
+ Quoting the [dataset's GitHub repository](https://github.com/aisegmentcn/matting_human_datasets) (translated by Apple Translator):
+
+ > This dataset is currently the largest portrait matting dataset, containing 34,427 images and corresponding matting results.
+ > The dataset was annotated with high quality by Beijing Play Star Convergence Technology Co., Ltd., and the portrait soft-segmentation model trained on this dataset has been commercialized.
+
+ > The original images in the dataset come from `Flickr`, `Baidu`, and `Taobao`. After face detection and area cropping, half-length portraits of 600×800 were generated.
+ > The `clip_img` directory contains the half-length portrait images in JPG format; the `matting` directory contains the corresponding matting files in PNG format (convenient for checking matting quality). You should extract the alpha channel from the PNG images before training.
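The alpha-extraction step mentioned in the quote can be sketched as follows. This is a minimal illustration, assuming Pillow and NumPy are installed; the example path is a placeholder following the dataset layout, not a file guaranteed to exist.

```python
from PIL import Image
import numpy as np

def alpha_mask(matting_png):
    """Return the alpha channel of a matting PNG as a float mask in [0, 1]."""
    img = Image.open(matting_png).convert("RGBA")  # matting files are 4-channel PNGs
    alpha = np.asarray(img)[:, :, 3]               # the 4th channel holds the matte
    return alpha.astype(np.float32) / 255.0        # normalize to [0, 1] for training

# e.g. mask = alpha_mask("data/matting/1803151818/matting_00000000/1803151818-00000003.png")
```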
+
+ - **Repository:** [aisegmentcn/matting_human_datasets](https://github.com/aisegmentcn/matting_human_datasets)
+
+ ## Dataset Structure
+
+ └── data/
+     ├── clip_img/
+     │   └── {group-id}/
+     │       └── clip_{subgroup-id}/
+     │           └── {group-id}-{img-id}.jpg
+     └── matting/
+         └── {group-id}/
+             └── matting_{subgroup-id}/
+                 └── {group-id}-{img-id}.png
+
+ The input `data/clip_img/1803151818/clip_00000000/1803151818-00000003.jpg` matches the label `data/matting/1803151818/matting_00000000/1803151818-00000003.png`.
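Given that naming convention, a label path can be derived from an input path by simple string substitutions. A sketch (the helper name is ours, not part of the dataset):

```python
def matting_path(clip_path: str) -> str:
    """Map a clip_img JPG path to its matting PNG path, per the layout above."""
    return (clip_path
            .replace("/clip_img/", "/matting/")  # top-level directory
            .replace("/clip_", "/matting_")      # subgroup directory prefix
            .replace(".jpg", ".png"))            # labels are PNG, inputs are JPG

print(matting_path("data/clip_img/1803151818/clip_00000000/1803151818-00000003.jpg"))
# → data/matting/1803151818/matting_00000000/1803151818-00000003.png
```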
+
+ ## Licensing Information
+
+ See the authors' [GitHub repository](https://github.com/aisegmentcn/matting_human_datasets) for licensing information.