---
license:
- cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- other
tags:
- image
- computer-vision
- generative-modeling
pretty_name: Cartoon Set
---

# Dataset Card for Cartoon Set
## Table of Contents
- [Dataset Card for Cartoon Set](#dataset-card-for-cartoon-set)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
      - [Usage](#usage)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://google.github.io/cartoonset/
- **Repository:** https://github.com/google/cartoonset/
- **Paper:** XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary

![Cartoon Set sample image](https://huggingface.co/datasets/cgarciae/cartoonset/resolve/main/sample.png)

[Cartoon Set](https://google.github.io/cartoonset/) is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.

#### Usage
`cartoonset` provides the images as PNG byte strings, which gives you more flexibility in how you load the data. Here we show two ways:

**Using PIL:**
```python
import datasets
from io import BytesIO
from PIL import Image

ds = datasets.load_dataset("cgarciae/cartoonset", "10k")  # or "100k"

def process_fn(sample):
    img = Image.open(BytesIO(sample["img_bytes"]))
    ...
    return {"img": img}

ds = ds.map(process_fn, remove_columns=["img_bytes"])
```

**Using TensorFlow:**
```python
import datasets
import tensorflow as tf

# Load the "train" split directly so iterating yields samples;
# without `split`, `load_dataset` returns a DatasetDict, and the
# generator below would yield split names instead of records.
hfds = datasets.load_dataset("cgarciae/cartoonset", "10k", split="train")  # or "100k"

ds = tf.data.Dataset.from_generator(
    lambda: hfds,
    output_signature={
        "img_bytes": tf.TensorSpec(shape=(), dtype=tf.string),
    },
)

def process_fn(sample):
    img = tf.image.decode_png(sample["img_bytes"], channels=3)
    ...
    return {"img": img}

ds = ds.map(process_fn)
```

**Additional features:**
You can also access the features that generated each sample, e.g.:

```python
ds = datasets.load_dataset("cgarciae/cartoonset", "10k+features")  # or "100k+features"
```

Apart from `img_bytes`, these configurations add 18 × 2 = 36 additional `int32` features. These come in `{feature}`, `{feature}_num_categories` pairs, where `num_categories` indicates the number of categories for that feature. See [Data Fields](#data-fields) for the complete list of features.
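
To illustrate how the `{feature}`/`{feature}_num_categories` pairs can be consumed, here is a minimal sketch that one-hot encodes a sample's attributes. `one_hot_features` and the `sample` dict are hypothetical, with values shaped like the `+features` configurations described above:

```python
def one_hot_features(sample):
    """One-hot encode each `{feature}` using its `{feature}_num_categories` pair."""
    encoded = {}
    for key, value in sample.items():
        if key == "img_bytes" or key.endswith("_num_categories"):
            continue
        vec = [0] * sample[f"{key}_num_categories"]
        vec[value] = 1
        encoded[key] = vec
    return encoded

# Illustrative sample; real samples carry all 18 feature pairs plus img_bytes.
sample = {
    "eye_angle": 0,
    "eye_angle_num_categories": 3,
    "chin_length": 2,
    "chin_length_num_categories": 3,
}
print(one_hot_features(sample))  # {'eye_angle': [1, 0, 0], 'chin_length': [0, 0, 1]}
```

The same idea extends naturally to building label tensors for conditional generative models.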

## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
    'img_bytes': b'0x...',
}
```
If `+features` is added to the dataset name, the following additional fields are provided:

```python
{
    'img_bytes': b'0x...',
    'eye_angle': 0,
    'eye_angle_num_categories': 3,
    'eye_lashes': 0,
    'eye_lashes_num_categories': 2,
    'eye_lid': 0,
    'eye_lid_num_categories': 2,
    'chin_length': 2,
    'chin_length_num_categories': 3,
    ...
}
```

### Data Fields
- `img_bytes`: A byte string containing the raw data of a 500x500 PNG image.
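
As a quick sanity check that a sample really holds a 500x500 PNG, you can read the image dimensions straight from the PNG's IHDR chunk without decoding the image. This is a minimal sketch using only the standard library; `png_size` is a hypothetical helper, not part of the dataset API:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_size(img_bytes):
    """Return (width, height) of a PNG byte string by parsing its IHDR chunk."""
    if img_bytes[:8] != PNG_SIGNATURE or img_bytes[12:16] != b"IHDR":
        raise ValueError("not a PNG byte string")
    # IHDR follows the 8-byte signature: 4-byte chunk length, b"IHDR",
    # then big-endian 4-byte width and height.
    return struct.unpack(">II", img_bytes[16:24])
```

For a Cartoon Set sample, `png_size(sample['img_bytes'])` should return `(500, 500)`.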

If `+features` is appended to the dataset name, the following additional `int32` fields are provided:

- `eye_angle`
- `eye_angle_num_categories`
- `eye_lashes`
- `eye_lashes_num_categories`
- `eye_lid`
- `eye_lid_num_categories`
- `chin_length`
- `chin_length_num_categories`
- `eyebrow_weight`
- `eyebrow_weight_num_categories`
- `eyebrow_shape`
- `eyebrow_shape_num_categories`
- `eyebrow_thickness`
- `eyebrow_thickness_num_categories`
- `face_shape`
- `face_shape_num_categories`
- `facial_hair`
- `facial_hair_num_categories`
- `hair`
- `hair_num_categories`
- `eye_color`
- `eye_color_num_categories`
- `face_color`
- `face_color_num_categories`
- `hair_color`
- `hair_color_num_categories`
- `glasses`
- `glasses_num_categories`
- `glasses_color`
- `glasses_color_num_categories`
- `eye_slant`
- `eye_slant_num_categories`
- `eyebrow_width`
- `eyebrow_width_num_categories`
- `eye_eyebrow_distance`
- `eye_eyebrow_distance_num_categories`

### Data Splits
The dataset ships as a single `train` split; no validation or test splits are provided.

## Dataset Creation
### Licensing Information
This data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@article{DBLP:journals/corr/abs-1711-05139,
  author     = {Amelie Royer and
                Konstantinos Bousmalis and
                Stephan Gouws and
                Fred Bertsch and
                Inbar Mosseri and
                Forrester Cole and
                Kevin Murphy},
  title      = {{XGAN:} Unsupervised Image-to-Image Translation for many-to-many Mappings},
  journal    = {CoRR},
  volume     = {abs/1711.05139},
  year       = {2017},
  url        = {http://arxiv.org/abs/1711.05139},
  eprinttype = {arXiv},
  eprint     = {1711.05139},
  timestamp  = {Mon, 13 Aug 2018 16:47:38 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1711-05139.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions