system HF staff committed on
Commit 505916b
0 Parent(s):

Update files from the datasets library (from 1.12.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.12.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +277 -0
  3. dataset_infos.json +1 -0
  4. dummy/0.0.0/dummy_data.zip +3 -0
  5. food101.py +191 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,277 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ pretty_name: Food-101
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-foodspotting
+ task_categories:
+ - other
+ task_ids:
+ - other-other-image-classification
+ paperswithcode_id: food-101
+ ---
+
+ # Dataset Card for Food-101
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [Food-101 Dataset](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/)
+ - **Repository:**
+ - **Paper:** [Paper](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ This dataset consists of 101 food categories, with 101'000 images. For each class, 250 manually reviewed test images are provided as well as 750 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels. All images were rescaled to have a maximum side length of 512 pixels.
+
+ ### Supported Tasks and Leaderboards
+
+ - image-classification
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from the training set is provided below:
+
+ ```
+ {
+   'image': '/root/.cache/huggingface/datasets/downloads/extracted/6e1e8c9052e9f3f7ecbcb4b90860668f81c1d36d86cc9606d49066f8da8bfb4f/food-101/images/churros/1004234.jpg',
+   'label': 23
+ }
+ ```
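+
+ A minimal sketch of how such a sample can be obtained with the `datasets` library (assuming the loading script is available on the Hub under the identifier `food101`; the download is roughly 5 GB):
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads and prepares the archive, then exposes "train" and "validation" splits.
+ ds = load_dataset("food101")
+
+ sample = ds["train"][0]
+ print(sample["image"])  # filepath to a JPEG image
+ print(sample["label"])  # integer class id, e.g. 23 for "churros"
+ ```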
+
+ ### Data Fields
+
+ The data instances have the following fields:
+
+ - `image`: a `string` filepath to an image.
+ - `label`: an `int` classification label.
+
+ <details>
+   <summary>Class Label Mappings</summary>
+
+ ```json
+ {
+   "apple_pie": 0,
+   "baby_back_ribs": 1,
+   "baklava": 2,
+   "beef_carpaccio": 3,
+   "beef_tartare": 4,
+   "beet_salad": 5,
+   "beignets": 6,
+   "bibimbap": 7,
+   "bread_pudding": 8,
+   "breakfast_burrito": 9,
+   "bruschetta": 10,
+   "caesar_salad": 11,
+   "cannoli": 12,
+   "caprese_salad": 13,
+   "carrot_cake": 14,
+   "ceviche": 15,
+   "cheesecake": 16,
+   "cheese_plate": 17,
+   "chicken_curry": 18,
+   "chicken_quesadilla": 19,
+   "chicken_wings": 20,
+   "chocolate_cake": 21,
+   "chocolate_mousse": 22,
+   "churros": 23,
+   "clam_chowder": 24,
+   "club_sandwich": 25,
+   "crab_cakes": 26,
+   "creme_brulee": 27,
+   "croque_madame": 28,
+   "cup_cakes": 29,
+   "deviled_eggs": 30,
+   "donuts": 31,
+   "dumplings": 32,
+   "edamame": 33,
+   "eggs_benedict": 34,
+   "escargots": 35,
+   "falafel": 36,
+   "filet_mignon": 37,
+   "fish_and_chips": 38,
+   "foie_gras": 39,
+   "french_fries": 40,
+   "french_onion_soup": 41,
+   "french_toast": 42,
+   "fried_calamari": 43,
+   "fried_rice": 44,
+   "frozen_yogurt": 45,
+   "garlic_bread": 46,
+   "gnocchi": 47,
+   "greek_salad": 48,
+   "grilled_cheese_sandwich": 49,
+   "grilled_salmon": 50,
+   "guacamole": 51,
+   "gyoza": 52,
+   "hamburger": 53,
+   "hot_and_sour_soup": 54,
+   "hot_dog": 55,
+   "huevos_rancheros": 56,
+   "hummus": 57,
+   "ice_cream": 58,
+   "lasagna": 59,
+   "lobster_bisque": 60,
+   "lobster_roll_sandwich": 61,
+   "macaroni_and_cheese": 62,
+   "macarons": 63,
+   "miso_soup": 64,
+   "mussels": 65,
+   "nachos": 66,
+   "omelette": 67,
+   "onion_rings": 68,
+   "oysters": 69,
+   "pad_thai": 70,
+   "paella": 71,
+   "pancakes": 72,
+   "panna_cotta": 73,
+   "peking_duck": 74,
+   "pho": 75,
+   "pizza": 76,
+   "pork_chop": 77,
+   "poutine": 78,
+   "prime_rib": 79,
+   "pulled_pork_sandwich": 80,
+   "ramen": 81,
+   "ravioli": 82,
+   "red_velvet_cake": 83,
+   "risotto": 84,
+   "samosa": 85,
+   "sashimi": 86,
+   "scallops": 87,
+   "seaweed_salad": 88,
+   "shrimp_and_grits": 89,
+   "spaghetti_bolognese": 90,
+   "spaghetti_carbonara": 91,
+   "spring_rolls": 92,
+   "steak": 93,
+   "strawberry_shortcake": 94,
+   "sushi": 95,
+   "tacos": 96,
+   "takoyaki": 97,
+   "tiramisu": 98,
+   "tuna_tartare": 99,
+   "waffles": 100
+ }
+ ```
+ </details>
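+
+ Because `label` is a `ClassLabel` feature and `image` is a plain filepath, the mapping above can be queried programmatically and the image opened with any imaging library. A small illustration (assuming the dataset has already been loaded as `ds` as in the sketch above, and that Pillow is installed):
+
+ ```python
+ from PIL import Image
+
+ label_feature = ds["train"].features["label"]
+ print(label_feature.int2str(23))       # "churros"
+ print(label_feature.str2int("sushi"))  # 95
+
+ sample = ds["train"][0]
+ img = Image.open(sample["image"])  # the field holds a path, not pixel data
+ print(img.size)                    # maximum side length is 512 pixels
+ ```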
+
+
+ ### Data Splits
+
+
+ |               | train | validation |
+ |---------------|------:|-----------:|
+ | # of examples | 75750 |      25250 |
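+
+ These counts can be checked against a loaded copy (continuing the sketch above where the dataset was loaded as `ds`):
+
+ ```python
+ print(ds["train"].num_rows)       # 75750
+ print(ds["validation"].num_rows)  # 25250
+ ```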
+
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @inproceedings{bossard14,
+   title = {Food-101 -- Mining Discriminative Components with Random Forests},
+   author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc},
+   booktitle = {European Conference on Computer Vision},
+   year = {2014}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This dataset consists of 101 food categories, with 101'000 images. For each class, 250 manually reviewed test images are provided as well as 750 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels. All images were rescaled to have a maximum side length of 512 pixels.", "citation": " @inproceedings{bossard14,\n title = {Food-101 -- Mining Discriminative Components with Random Forests},\n author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc},\n booktitle = {European Conference on Computer Vision},\n year = {2014}\n}\n", "homepage": "https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/", "license": "", "features": {"image": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 101, "names": ["apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "image", "output": "label"}, "task_templates": [{"task": "image-classification", "image_file_path_column": "image", "label_column": "label", "labels": ["apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheese_plate", "cheesecake", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles"]}], "builder_name": "food101", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 13210094, "num_examples": 75750, "dataset_name": "food101"}, "validation": {"name": "validation", "num_bytes": 4403191, "num_examples": 25250, "dataset_name": "food101"}}, "download_checksums": {"http://data.vision.ee.ethz.ch/cvl/food-101.tar.gz": {"num_bytes": 4996278331, "checksum": "d97d15e438b7f4498f96086a4f7e2fa42a32f2712e87d3295441b2b6314053a4"}}, "download_size": 4996278331, "post_processing_size": null, "dataset_size": 17613285, "size_in_bytes": 5013891616}}
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c75fa3428f1705c7b7390392422b3a952a18beddcde785af2663dd96bc84571b
+ size 715637
food101.py ADDED
@@ -0,0 +1,191 @@
+ # coding=utf-8
+ # Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Dataset class for Food-101 dataset."""
+
+ import json
+ from pathlib import Path
+
+ import datasets
+ from datasets.tasks import ImageClassification
+
+
+ _BASE_URL = "http://data.vision.ee.ethz.ch/cvl/food-101.tar.gz"
+
+ _HOMEPAGE = "https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/"
+
+ _DESCRIPTION = (
+     "This dataset consists of 101 food categories, with 101'000 images. For "
+     "each class, 250 manually reviewed test images are provided as well as 750"
+     " training images. On purpose, the training images were not cleaned, and "
+     "thus still contain some amount of noise. This comes mostly in the form of"
+     " intense colors and sometimes wrong labels. All images were rescaled to "
+     "have a maximum side length of 512 pixels."
+ )
+
+ _CITATION = """\
+ @inproceedings{bossard14,
+   title = {Food-101 -- Mining Discriminative Components with Random Forests},
+   author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc},
+   booktitle = {European Conference on Computer Vision},
+   year = {2014}
+ }
+ """
+
+ _NAMES = [
+     "apple_pie",
+     "baby_back_ribs",
+     "baklava",
+     "beef_carpaccio",
+     "beef_tartare",
+     "beet_salad",
+     "beignets",
+     "bibimbap",
+     "bread_pudding",
+     "breakfast_burrito",
+     "bruschetta",
+     "caesar_salad",
+     "cannoli",
+     "caprese_salad",
+     "carrot_cake",
+     "ceviche",
+     "cheesecake",
+     "cheese_plate",
+     "chicken_curry",
+     "chicken_quesadilla",
+     "chicken_wings",
+     "chocolate_cake",
+     "chocolate_mousse",
+     "churros",
+     "clam_chowder",
+     "club_sandwich",
+     "crab_cakes",
+     "creme_brulee",
+     "croque_madame",
+     "cup_cakes",
+     "deviled_eggs",
+     "donuts",
+     "dumplings",
+     "edamame",
+     "eggs_benedict",
+     "escargots",
+     "falafel",
+     "filet_mignon",
+     "fish_and_chips",
+     "foie_gras",
+     "french_fries",
+     "french_onion_soup",
+     "french_toast",
+     "fried_calamari",
+     "fried_rice",
+     "frozen_yogurt",
+     "garlic_bread",
+     "gnocchi",
+     "greek_salad",
+     "grilled_cheese_sandwich",
+     "grilled_salmon",
+     "guacamole",
+     "gyoza",
+     "hamburger",
+     "hot_and_sour_soup",
+     "hot_dog",
+     "huevos_rancheros",
+     "hummus",
+     "ice_cream",
+     "lasagna",
+     "lobster_bisque",
+     "lobster_roll_sandwich",
+     "macaroni_and_cheese",
+     "macarons",
+     "miso_soup",
+     "mussels",
+     "nachos",
+     "omelette",
+     "onion_rings",
+     "oysters",
+     "pad_thai",
+     "paella",
+     "pancakes",
+     "panna_cotta",
+     "peking_duck",
+     "pho",
+     "pizza",
+     "pork_chop",
+     "poutine",
+     "prime_rib",
+     "pulled_pork_sandwich",
+     "ramen",
+     "ravioli",
+     "red_velvet_cake",
+     "risotto",
+     "samosa",
+     "sashimi",
+     "scallops",
+     "seaweed_salad",
+     "shrimp_and_grits",
+     "spaghetti_bolognese",
+     "spaghetti_carbonara",
+     "spring_rolls",
+     "steak",
+     "strawberry_shortcake",
+     "sushi",
+     "tacos",
+     "takoyaki",
+     "tiramisu",
+     "tuna_tartare",
+     "waffles",
+ ]
+
+
+ class Food101(datasets.GeneratorBasedBuilder):
+     """Food-101 Images dataset."""
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "image": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(names=_NAMES),
+                 }
+             ),
+             supervised_keys=("image", "label"),
+             homepage=_HOMEPAGE,
+             task_templates=[ImageClassification(image_file_path_column="image", label_column="label", labels=_NAMES)],
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         dl_path = Path(dl_manager.download_and_extract(_BASE_URL))
+         meta_path = dl_path / "food-101" / "meta"
+         image_dir_path = dl_path / "food-101" / "images"
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"json_file_path": meta_path / "train.json", "image_dir_path": image_dir_path},
+             ),
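+             # The upstream Food-101 "test" images (meta/test.json) are exposed
+             # here as the "validation" split.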
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"json_file_path": meta_path / "test.json", "image_dir_path": image_dir_path},
+             ),
+         ]
+
+     def _generate_examples(self, json_file_path, image_dir_path):
+         """Generate images and labels for splits."""
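+         # Each meta JSON file maps a class name to a list of image ids such as
+         # "churros/1004234"; each id is resolved to "<images dir>/<id>.jpg" below and
+         # the class name is used as the label (encoded to an integer by ClassLabel).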
+         data = json.loads(json_file_path.read_text())
+         for label, images in data.items():
+             for image_name in images:
+                 image = image_dir_path / f"{image_name}.jpg"
+                 features = {"image": str(image), "label": label}
+                 yield image_name, features