Ralph Peeters committed on
Commit 33ef954
1 Parent(s): 831c96e

add dataset

.gitattributes CHANGED
@@ -35,3 +35,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.mp3 filter=lfs diff=lfs merge=lfs -text
  *.ogg filter=lfs diff=lfs merge=lfs -text
  *.wav filter=lfs diff=lfs merge=lfs -text
+ *.json.gz filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,163 @@
+ ---
+ annotations_creators:
+ - weak supervision
+ - expert-generated
+ languages:
+ - en-US
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ pretty_name: products-2017
+ size_categories:
+ - 1K<n<10K
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ - data-integration
+ task_ids:
+ - entity-matching
+ - identity-resolution
+ - product-matching
+ paperswithcode_id: wdc-products
+ ---
+
+ # Dataset Card for products-2017
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Annotations](#annotations)
+ - [Additional Information](#additional-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [LSPCv2 Homepage](http://webdatacommons.org/largescaleproductcorpus/v2/index.html)
+ - **Point of Contact:** [Ralph Peeters](mailto:ralph.peeters@uni-mannheim.de)
+
+ ### Dataset Summary
+
+ Many e-shops have started to mark up product data within their HTML pages using the schema.org vocabulary. The Web Data Commons project regularly extracts such data from the Common Crawl, a large public web crawl. The Web Data Commons Training and Test Sets for Large-Scale Product Matching contain product offers from different e-shops in the form of binary product pairs (with corresponding label "match" or "no match").
+
+ In order to support the evaluation of machine learning-based matching methods, the data is split into training, validation and test sets. We provide training and validation sets in four different sizes for four product categories. The labels of the test sets were manually checked, while those of the training sets were derived via weak supervision from shared product identifiers found on the Web.
+
+ The data stems from the WDC Product Data Corpus for Large-Scale Product Matching - Version 2.0, which consists of 26 million product offers originating from 79 thousand websites.
+
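+ The dataset can be loaded with the `datasets` library. A minimal loading sketch, assuming the repository id `wdc/products-2017` and one of the configuration names defined by the accompanying loading script (a category combined with a training-set size, e.g. `computers_medium`):
+
+ ```python
+ from datasets import load_dataset
+
+ # Configuration names combine a category (computers, cameras, watches, shoes)
+ # with a training-set size (small, medium, large, xlarge).
+ dataset = load_dataset("wdc/products-2017", "computers_medium")
+
+ print(dataset)                            # DatasetDict with train/validation/test splits
+ print(dataset["train"][0]["title_left"])  # title of the first offer in the first pair
+ ```
+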
+ ### Supported Tasks and Leaderboards
+
+ Entity Matching, Product Matching
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The data is structured as pairs of product offers with the corresponding match/non-match label. This is an example instance from the computers category:
+
+ ```
+ {"pair_id":"581109#16637861","label":0,"id_left":581109,"category_left":"Computers_and_Accessories","cluster_id_left":1324529,"brand_left":"\"Gigabyte\"@en","title_left":" \"Gigabyte Radeon RX 480 G1 Gaming 4096MB GDDR5 PCI-Express Graphics Card\"@en \"Gigabyte Gr| OcUK\"@en","description_left":"\"GV-RX480G1 GAMING-4GD, Core Clock: 1202MHz, Boost Clock: 1290MHz, Memory: 4096MB 7000MHz GDDR5, Stream Processors: 2304, Crossfire Ready, VR Ready, FreeSync Ready, 3 Years Warranty\"@en ","price_left":null,"specTableContent_left":null,"id_right":16637861,"category_right":"Computers_and_Accessories","cluster_id_right":107415,"brand_right":"\"Gigabyte\"@en","title_right":" \"Gigabyte Radeon RX 550 Gaming OC 2048MB GDDR5 PCI-Express Graphics Card\"@en \"Gigabyte Gr| OcUK\"@en","description_right":"\"GV-RX550GAMING OC-2GD, Boost: 1219MHz, Memory: 2048MB 7000MHz GDDR5, Stream Processors: 512, DirectX 12 Support, 3 Years Warranty\"@en ","price_right":null,"specTableContent_right":null}
+ ```
+
+ ### Data Fields
+
+ - pair_id: unique identifier of a pair (string)
+ - label: binary label, 1 for a match and 0 for a non-match (int)
+
+ The following attributes are contained twice, once for the first and once for the second product offer:
+
+ - id: unique id of the product offer (int)
+ - category: product category (string)
+ - cluster_id: id of the product cluster from the original corpus this offer belongs to (int)
+ - brand: brand of the product (string)
+ - title: product title (string)
+ - description: longer product description (string)
+ - price: price of the product offer (string)
+ - specTableContent: additional data found in specification tables on the webpage that contains the product offer (string)
+
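+ Because each row contains both offers side by side, one simple baseline is to serialize the textual attributes of the two offers into a single string and feed it to a text classifier. A minimal sketch (the choice of attributes and the `[SEP]` separator are illustrative, not part of the dataset):
+
+ ```python
+ def serialize_pair(row):
+     """Concatenate the textual attributes of both offers into one input string."""
+     def side(suffix):
+         parts = [row[f"{col}_{suffix}"] for col in ("brand", "title", "description")]
+         return " ".join(p for p in parts if p)  # skip missing (None) values
+
+     return side("left") + " [SEP] " + side("right")
+
+ # e.g. texts = [serialize_pair(r) for r in dataset["train"]]
+ #      labels = dataset["train"]["label"]
+ ```
+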
+ ### Data Splits
+
+ - Computers
+   - Test set - 1100 pairs
+   - Small Train set - 2267 pairs
+   - Small Validation set - 567 pairs
+   - Medium Train set - 6475 pairs
+   - Medium Validation set - 1619 pairs
+   - Large Train set - 26687 pairs
+   - Large Validation set - 6672 pairs
+   - XLarge Train set - 54768 pairs
+   - XLarge Validation set - 13693 pairs
+
+ - Cameras
+   - Test set - 1100 pairs
+   - Small Train set - 1508 pairs
+   - Small Validation set - 378 pairs
+   - Medium Train set - 4204 pairs
+   - Medium Validation set - 1051 pairs
+   - Large Train set - 16028 pairs
+   - Large Validation set - 4008 pairs
+   - XLarge Train set - 33821 pairs
+   - XLarge Validation set - 8456 pairs
+
+ - Watches
+   - Test set - 1100 pairs
+   - Small Train set - 1804 pairs
+   - Small Validation set - 451 pairs
+   - Medium Train set - 5130 pairs
+   - Medium Validation set - 1283 pairs
+   - Large Train set - 21621 pairs
+   - Large Validation set - 5406 pairs
+   - XLarge Train set - 49255 pairs
+   - XLarge Validation set - 12314 pairs
+
+ - Shoes
+   - Test set - 1100 pairs
+   - Small Train set - 1650 pairs
+   - Small Validation set - 413 pairs
+   - Medium Train set - 4644 pairs
+   - Medium Validation set - 1161 pairs
+   - Large Train set - 18391 pairs
+   - Large Validation set - 4598 pairs
+   - XLarge Train set - 33943 pairs
+   - XLarge Validation set - 8486 pairs
+
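+ The splits are stored as gzipped JSON Lines files (one product pair per line), so they can also be inspected without the `datasets` library. A small sketch, assuming a local copy of one file (the path is illustrative):
+
+ ```python
+ import pandas as pd
+
+ # One JSON object per line; pandas infers the gzip compression from the extension.
+ df = pd.read_json("computers/train_medium.json.gz", lines=True)
+
+ print(df.shape)                    # 6475 pairs x 18 columns (see Data Splits / Data Fields)
+ print(df["label"].value_counts())  # matches (1) vs. non-matches (0)
+ ```
+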
+ ## Dataset Creation
+
+ ### Annotations
+
+ #### Annotation process
+
+ - Training and Validation sets: distant supervision via shared schema.org product IDs
+ - Test sets: Single expert annotator
+
+ #### Who are the annotators?
+
+ [Ralph Peeters](https://www.uni-mannheim.de/dws/people/researchers/phd-students/ralph-peeters/)
+
+ ## Additional Information
+
+ ### Citation Information
+
+ ```
+ @inproceedings{primpeli2019wdc,
+   title={The WDC training dataset and gold standard for large-scale product matching},
+   author={Primpeli, Anna and Peeters, Ralph and Bizer, Christian},
+   booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
+   pages={381--386},
+   year={2019}
+ }
+ ```
cameras/test.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bce167650dbf6c993e848baa9e2759ec91f4e0147c7032cec938eb0f6c777f2c
+ size 662236
cameras/train_large.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a39dbf9f3c5e62a4ab330e5cabe2e3ffa8268765a9b56ba6e566bb5021e1077
+ size 9702430
cameras/train_medium.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0d597eb300adb2de5b860e9773bd81e23878e80ab612cf3ca9257b11ae76ce9
+ size 2508508
cameras/train_small.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90eafd129c495d4120b3bdc4e0004d7e0b075669b299f434f89addf25468926f
+ size 896323
cameras/train_xlarge.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c23f4bea284e030bcc6bf67b143a0252b9a88baa1868ae4b91647d1f51684072
+ size 21291906
cameras/valid_large.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84236bb649899400ee816fed8c1dda06c4bea78f02f8a1c08032fdef51613ba4
+ size 2364699
cameras/valid_medium.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0bc9ffdbad348e1966bd624e3725bab54f1309452f075a7f65d973f35bed186f
+ size 614504
cameras/valid_small.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8ad16d21e69067b593a48a8fe55bc77f5e914b98520d531474d10c7877a9dec
+ size 240242
cameras/valid_xlarge.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9c0527d4275b87e8ca2d13d93059407dfb78117b5941d0c9b349394dde33dfd
+ size 5338288
computers/test.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:daeee50fc827d32838365da9963239917f621e367f8944ec9c90196ee529c46b
+ size 440473
computers/train_large.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27f32be2bc7ad18c9da715c81d899391f7eaf3d5b431ab261b00acc0a608a31a
+ size 10749627
computers/train_medium.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b22ca10d19b3e0061bd53e22a4bed67595c981bfff38d27f231be36ed8649d8
+ size 2562779
computers/train_small.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:013bd9c97b0b817d299e0a8ca3583692c8c0d12de0876590b16303bed12db0f2
+ size 913653
computers/train_xlarge.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d19055be29b211d7c7efd08413e1180c91fa4f7180c4d01572af9479ac31609a
+ size 21979464
computers/valid_large.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:524ad560d4865226149ad6c783bf9654ee1d8660c2f5aac827940b198d57d625
+ size 2619716
computers/valid_medium.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b11017fb8560165d7cc2c4884808ac95174847e8cd27b52cf59e711dfb1bfa1
+ size 660571
computers/valid_small.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01448826a98a9b4ccf63186caf021ab650fe870dcbfdc13f3de0e1df67c81367
+ size 211906
computers/valid_xlarge.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce4789d79d228fd3cbe6c4a8de4027119c7c7c9332ceb73d014cd6b79c61ed96
+ size 5481832
products-2017.py ADDED
@@ -0,0 +1,293 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The WDC Product Data Corpus and Gold Standard for Large-Scale Product Matching - Version 2.0."""
+ 
+ 
+ import json
+ 
+ import datasets
+ 
+ 
+ _CITATION = """\
+ @inproceedings{primpeli2019wdc,
+   title={The WDC training dataset and gold standard for large-scale product matching},
+   author={Primpeli, Anna and Peeters, Ralph and Bizer, Christian},
+   booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
+   pages={381--386},
+   year={2019}
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ Many e-shops have started to mark up product data within their HTML pages using the schema.org vocabulary. The Web Data Commons project regularly extracts such data from the Common Crawl, a large public web crawl. The Web Data Commons Training and Test Sets for Large-Scale Product Matching contain product offers from different e-shops in the form of binary product pairs (with corresponding label "match" or "no match").
+ 
+ In order to support the evaluation of machine learning-based matching methods, the data is split into training, validation and test sets. We provide training and validation sets in four different sizes for four product categories. The labels of the test sets were manually checked, while those of the training sets were derived via weak supervision from shared product identifiers found on the Web.
+ 
+ The data stems from the WDC Product Data Corpus for Large-Scale Product Matching - Version 2.0, which consists of 26 million product offers originating from 79 thousand websites.
+ """
+ 
+ _HOMEPAGE = "http://webdatacommons.org/largescaleproductcorpus/v2/index.html"
+ 
+ _LICENSE = ""
+ 
+ # The Hugging Face Datasets library does not host the data itself but only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
+ _URLS = {
+     "computers": "https://huggingface.co/datasets/wdc/products-2017/computers/",
+     "cameras": "https://huggingface.co/datasets/wdc/products-2017/cameras/",
+     "watches": "https://huggingface.co/datasets/wdc/products-2017/watches/",
+     "shoes": "https://huggingface.co/datasets/wdc/products-2017/shoes/"
+ }
+ 
+ _BASE_DATA_PAT_FORMAT_STR = "{category}/"
+ 
+ class Products2017Config(datasets.BuilderConfig):
+     """BuilderConfig for the WDC Product Data Corpus and Gold Standard for Large-Scale Product Matching - Version 2.0."""
+ 
+     def __init__(self, name, category: str, **kwargs):
+         """BuilderConfig for WDC Products 2017.
+ 
+         Args:
+             name (str): Configuration name of the form "<category>_<size>", e.g. "computers_medium".
+             category (str): The product category.
+         """
+         size = name.split('_')[1]
+         # Initialize the base class.
+         super(Products2017Config, self).__init__(
+             name=name, **kwargs
+         )
+ 
+         # Additional attributes
+         self.name = name
+         self.category = category
+         self.size = size
+         self.base_data_path = _BASE_DATA_PAT_FORMAT_STR.format(
+             category=category
+         )
+ 
+ class Products2017(datasets.GeneratorBasedBuilder):
+     """The WDC Product Data Corpus and Gold Standard for Large-Scale Product Matching - Version 2.0."""
+ 
+     VERSION = datasets.Version("2.1.0")
+ 
+     # This is an example of a dataset with multiple configurations.
+     # If you don't want/need to define several sub-sets in your dataset,
+     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
+ 
+     # If you need complex sub-parts with configurable options, you can define your own
+     # builder configuration class (inheriting from datasets.BuilderConfig) to store them.
+     # BUILDER_CONFIG_CLASS = MyBuilderConfig
+ 
+     # You will be able to load one or the other configuration in the following list with
+     # data = datasets.load_dataset('products-2017', 'computers_medium')
+     # data = datasets.load_dataset('products-2017', 'watches_small')
+     BUILDER_CONFIGS = [
+         Products2017Config(
+             name='computers_xlarge',
+             category='computers',
+             version=VERSION,
+             description="The computers xlarge dataset part of Products-2017"),
+         Products2017Config(
+             name='computers_large',
+             category='computers',
+             version=VERSION,
+             description="The computers large dataset part of Products-2017"),
+         Products2017Config(
+             name='computers_medium',
+             category='computers',
+             version=VERSION,
+             description="The computers medium dataset part of Products-2017"),
+         Products2017Config(
+             name='computers_small',
+             category='computers',
+             version=VERSION,
+             description="The computers small dataset part of Products-2017"),
+         Products2017Config(
+             name='cameras_xlarge',
+             category='cameras',
+             version=VERSION,
+             description="The cameras xlarge dataset part of Products-2017"),
+         Products2017Config(
+             name='cameras_large',
+             category='cameras',
+             version=VERSION,
+             description="The cameras large dataset part of Products-2017"),
+         Products2017Config(
+             name='cameras_medium',
+             category='cameras',
+             version=VERSION,
+             description="The cameras medium dataset part of Products-2017"),
+         Products2017Config(
+             name='cameras_small',
+             category='cameras',
+             version=VERSION,
+             description="The cameras small dataset part of Products-2017"),
+         Products2017Config(
+             name='watches_xlarge',
+             category='watches',
+             version=VERSION,
+             description="The watches xlarge dataset part of Products-2017"),
+         Products2017Config(
+             name='watches_large',
+             category='watches',
+             version=VERSION,
+             description="The watches large dataset part of Products-2017"),
+         Products2017Config(
+             name='watches_medium',
+             category='watches',
+             version=VERSION,
+             description="The watches medium dataset part of Products-2017"),
+         Products2017Config(
+             name='watches_small',
+             category='watches',
+             version=VERSION,
+             description="The watches small dataset part of Products-2017"),
+         Products2017Config(
+             name='shoes_xlarge',
+             category='shoes',
+             version=VERSION,
+             description="The shoes xlarge dataset part of Products-2017"),
+         Products2017Config(
+             name='shoes_large',
+             category='shoes',
+             version=VERSION,
+             description="The shoes large dataset part of Products-2017"),
+         Products2017Config(
+             name='shoes_medium',
+             category='shoes',
+             version=VERSION,
+             description="The shoes medium dataset part of Products-2017"),
+         Products2017Config(
+             name='shoes_small',
+             category='shoes',
+             version=VERSION,
+             description="The shoes small dataset part of Products-2017"),
+     ]
+ 
+     DEFAULT_CONFIG_NAME = "computers_medium"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+ 
+     def _info(self):
+ 
+         features = datasets.Features(
+             {
+                 "pair_id": datasets.Value("string"),
+                 "label": datasets.Value("int32"),
+                 "id_left": datasets.Value("int32"),
+                 "category_left": datasets.Value("string"),
+                 "cluster_id_left": datasets.Value("int32"),
+                 "brand_left": datasets.Value("string"),
+                 "title_left": datasets.Value("string"),
+                 "description_left": datasets.Value("string"),
+                 "price_left": datasets.Value("string"),
+                 "specTableContent_left": datasets.Value("string"),
+                 "id_right": datasets.Value("int32"),
+                 "category_right": datasets.Value("string"),
+                 "cluster_id_right": datasets.Value("int32"),
+                 "brand_right": datasets.Value("string"),
+                 "title_right": datasets.Value("string"),
+                 "description_right": datasets.Value("string"),
+                 "price_right": datasets.Value("string"),
+                 "specTableContent_right": datasets.Value("string"),
+             }
+         )
+ 
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,  # The features are identical for all configurations.
+             # If there's a common (input, target) tuple from the features, uncomment the supervised_keys line below and
+             # specify them. They'll be used if as_supervised=True in builder.as_dataset.
+             # supervised_keys=("sentence", "label"),
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
+ 
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
+         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
+         main_path = self.config.base_data_path
+         size = self.config.size
+         relevant_files = [f'{main_path}train_{size}.json.gz', f'{main_path}valid_{size}.json.gz', f'{main_path}test.json.gz']
+
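+         # download_and_extract keeps the list order, so data_dir[0] is the training file,
+         # data_dir[1] the validation file and data_dir[2] the test file used below.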
+         data_dir = dl_manager.download_and_extract(relevant_files)
+ 
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": data_dir[0],
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": data_dir[2],
+                     "split": "test"
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": data_dir[1],
+                     "split": "validation",
+                 },
+             ),
+         ]
+ 
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, filepath, split):
+         # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
+         with open(filepath, encoding="utf-8") as f:
+             for key, row in enumerate(f):
+                 data = json.loads(row)
+                 yield key, {
+                     "pair_id": data["pair_id"],
+                     "label": data["label"],
+                     "id_left": data["id_left"],
+                     "category_left": data["category_left"],
+                     "cluster_id_left": data["cluster_id_left"],
+                     "brand_left": data["brand_left"],
+                     "title_left": data["title_left"],
+                     "description_left": data["description_left"],
+                     "price_left": data["price_left"],
+                     "specTableContent_left": data["specTableContent_left"],
+                     "id_right": data["id_right"],
+                     "category_right": data["category_right"],
+                     "cluster_id_right": data["cluster_id_right"],
+                     "brand_right": data["brand_right"],
+                     "title_right": data["title_right"],
+                     "description_right": data["description_right"],
+                     "price_right": data["price_right"],
+                     "specTableContent_right": data["specTableContent_right"]
+                 }
shoes/test.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb5fb369bea226e8abc6d0fefcbbb74f4d4fb262774c6a9ecc3686d42c640683
+ size 470891
shoes/train_large.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdf8c3b0f9516e7d7a24c41f6ed8fe8926b04f42f36969d148d68912acac5472
+ size 8745243
shoes/train_medium.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17d36cca4b6bbe6e25494245b14ddbe9323b9d4858e615ae1fa5670e02fb1c39
+ size 2123481
shoes/train_small.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd26d5e1bae5efbac8134eb784a0b2c7d2b6691195d5b7b7eeb304683e6c557d
+ size 757540
shoes/train_xlarge.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1718653a3b4a8dcd47b5c1f55d7fae4e3fcf3c04af13e4446cf49a4b48f7f504
+ size 16435876
shoes/valid_large.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03ce9ab66ccc8569917c1ba56f627cca43780746d26ed5fd2afba9f24f155ced
+ size 2160668
shoes/valid_medium.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b1f33af9314ea3826265af0cc4e26290da6ab8f194fd8f81d8764ae84fd9bc3
+ size 545330
shoes/valid_small.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d76d3c97888e892abbce436cc14334c862a84adc8929fd073089229c6f214b0
+ size 194196
shoes/valid_xlarge.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:461e37861da72d786ce11da3b78033fe0785969bf41df0b49ef27d6e7e5f0f27
+ size 4181524
watches/test.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d632c71bf50f7d127068e496acd8755247157b016a098d6b79a8c787c5b13ca4
+ size 514961
watches/train_large.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e24dcd1a732f0dee0fa3fd81024f11d1c8a0ece1738bc26efacb52b697e261e1
+ size 10263745
watches/train_medium.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f538119833efb1c8b565a50b1defe4aace04a20c278c10cacab6361ab091bcc
+ size 2526427
watches/train_small.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f36b2a66bb9c574adb66a047783cf20721aff21a9635610eeefc5afff3cbc8f
+ size 867101
watches/train_xlarge.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:265a1a8729a69ffcb20564ffe50bae14eb86dcce8f96ad36d1fb74ab084275a5
+ size 22312475
watches/valid_large.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7339ef6ea2f34e289db98609b86f2ee327d7105b3f19c3919ec0ddb7358ed1ce
+ size 2563533
watches/valid_medium.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a83688e1074dda7233cb4820cf82db04dd89015607c5e0003513de11e7de4b30
+ size 623683
watches/valid_small.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01eb5dad8a9db595fcf24cad482fbb67114eb6361c5aee42820c1ad42009fc43
+ size 213527
watches/valid_xlarge.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c8dc012317e2b1df8ad01e4b3adecbc07ab4387e0b2259b9d55c86c7cb089df
+ size 5463237