system (HF staff) committed on
Commit 805357e
0 Parent(s)

Update files from the datasets library (from 1.6.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.6.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,233 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-nc-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-retrieval
+ task_ids:
+ - document-retrieval
+ - text-retrieval-other-document-to-document-retrieval
+ ---
+
+ # Dataset Card for the RegIR datasets
+
+ ## Table of Contents
+ - [Dataset Card for the RegIR datasets](#dataset-card-for-the-regir-datasets)
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+ - [Who are the source language producers?](#who-are-the-source-language-producers)
+ - [Annotations](#annotations)
+ - [Annotation process](#annotation-process)
+ - [Who are the annotators?](#who-are-the-annotators)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://archive.org/details/eacl2021_regir_datasets
+ - **Repository:** https://archive.org/details/eacl2021_regir_datasets
+ - **Paper:** https://arxiv.org/abs/2101.10726
+ - **Leaderboard:** N/A
+ - **Point of Contact:** [Ilias Chalkidis](mailto:ihalk@aueb.gr)
+
+ ### Dataset Summary
+
+ The European Union (EU) has a legislative scheme analogous to regulatory compliance for organizations. According to the Treaty on the Functioning of the European Union (TFEU), all published EU directives must take effect at the national level. Thus, all EU member states must adopt a law that transposes a newly issued directive within the period set by the directive (typically two years).
+
+ There are two datasets, EU2UK and UK2EU, containing EU directives and UK regulations. Documents can serve both as queries and as retrieval candidates, under the ground-truth assumption that a UK law is relevant to the EU directives it transposes, and vice versa.
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports:
+
+ **EU2UK** (`eu2uk`): Given an EU directive *Q*, retrieve the set of relevant documents from the pool of all available UK regulations. Relevant documents are those that transpose the EU directive (*Q*).
+
+ **UK2EU** (`uk2eu`): Given a UK regulation *Q*, retrieve the set of relevant documents from the pool of all available EU directives. Relevant documents are those that are transposed by the UK regulation (*Q*).
+
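+ As a usage sketch, both task configurations can be loaded with the Hugging Face `datasets` library (the identifier `eu_regulatory_ir` is assumed from the loader script and `dataset_infos.json`; the exact Hub path may differ):
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed identifier; adjust if the dataset lives under a namespaced Hub path.
+ eu2uk = load_dataset("eu_regulatory_ir", "eu2uk")
+ uk2eu = load_dataset("eu_regulatory_ir", "uk2eu")
+
+ print(eu2uk)  # splits: train / validation / test / uk_corpus
+ print(uk2eu)  # splits: train / validation / test / eu_corpus
+ ```
+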
+ ### Languages
+
+ All documents are written in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```json
+ {
+   "document_id": "31977L0794",
+   "publication_year": "1977",
+   "text": "Commission Directive 77/794/EEC ... of agricultural levies and customs duties",
+   "relevant_documents": ["UKPGA19800048", "UKPGA19770036"]
+ }
+ ```
+
+ ### Data Fields
+
+ The following data fields are provided for query documents (`train`, `dev`, `test`):
+
+ `document_id`: (**str**) The ID of the document.\
+ `publication_year`: (**str**) The publication year of the document.\
+ `text`: (**str**) The text of the document.\
+ `relevant_documents`: (**List[str]**) The list of relevant documents, as represented by their `document_id`.
+
+ The following data fields are provided for corpus documents (`corpus`):
+
+ `document_id`: (**str**) The ID of the document.\
+ `publication_year`: (**str**) The publication year of the document.\
+ `text`: (**str**) The text of the document.
+
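+ A minimal sketch of accessing these fields (again assuming the `eu_regulatory_ir` identifier noted above):
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed identifier; see the note in "Supported Tasks and Leaderboards".
+ eu2uk = load_dataset("eu_regulatory_ir", "eu2uk")
+
+ sample = eu2uk["train"][0]
+ print(sample["document_id"])         # EU directive ID, e.g. "31977L0794"
+ print(sample["publication_year"])    # e.g. "1977"
+ print(sample["relevant_documents"])  # IDs of the UK regulations that transpose it
+
+ # Corpus documents expose the same fields; the loader yields an empty
+ # relevant_documents list for the corpus split.
+ corpus_doc = eu2uk["uk_corpus"][0]
+ print(corpus_doc["relevant_documents"])  # []
+ ```
+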
+ ### Data Splits
+
+ #### EU2UK dataset
+
+ | Split       | No. of Queries | Avg. relevant documents |
+ | ----------- | -------------- | ----------------------- |
+ | Train       | 1,400          | 1.79                    |
+ | Development | 300            | 2.09                    |
+ | Test        | 300            | 1.74                    |
+
+ Document Pool (Corpus): 52,515 UK regulations
+
+ #### UK2EU dataset
+
+ | Split       | No. of Queries | Avg. relevant documents |
+ | ----------- | -------------- | ----------------------- |
+ | Train       | 1,500          | 1.90                    |
+ | Development | 300            | 1.46                    |
+ | Test        | 300            | 1.29                    |
+
+ Document Pool (Corpus): 3,930 EU directives
+
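+ The corpus pool is exposed alongside the query splits, so relevant documents can be looked up by their IDs. A sketch (same assumed `eu_regulatory_ir` identifier as above):
+
+ ```python
+ from datasets import load_dataset
+
+ eu2uk = load_dataset("eu_regulatory_ir", "eu2uk")
+
+ # Index the pool of 52,515 UK regulations by document_id.
+ corpus = {doc["document_id"]: doc["text"] for doc in eu2uk["uk_corpus"]}
+
+ # Pair a query directive with the texts of its relevant UK regulations.
+ query = eu2uk["validation"][0]
+ for doc_id in query["relevant_documents"]:
+     print(doc_id, corpus.get(doc_id, "<not in pool>")[:80])
+ ```
+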
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was curated by Chalkidis et al. (2021).\
+ The transposition pairs are made publicly available by the Publications Office of the EU (https://publications.europa.eu/en).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) and Legislation.GOV.UK (http://legislation.gov.uk/) in an unprocessed format.\
+ The transposition pairs are provided by the EU member states (in our case, the UK) and were downloaded from the SPARQL endpoint of the Publications Office of the EU (http://publications.europa.eu/webapi/rdf/sparql).\
+ For more information on the dataset curation, read Chalkidis et al. (2021).
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ * The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) and Legislation.GOV.UK (http://legislation.gov.uk/) in an unprocessed format.
+ * The transposition pairs are provided by the EU member states (in our case, the UK) and were downloaded from the SPARQL endpoint of the Publications Office of the EU (http://publications.europa.eu/webapi/rdf/sparql).
+
+ #### Who are the annotators?
+
+ Publications Office of the EU (https://publications.europa.eu/en)
+
+ ### Personal and Sensitive Information
+
+ The dataset does not include personal or sensitive information.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Chalkidis et al. (2021)
+
+ ### Licensing Information
+
+ **EU Data**
+
+ © European Union, 1998-2021
+
+ The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
+
+ The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
+
+ Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
+ Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
+
+ **UK Data**
+
+ You are encouraged to use and re-use the Information that is available under this licence freely and flexibly, with only a few conditions.
+
+ You are free to:
+
+ - copy, publish, distribute and transmit the Information;
+ - adapt the Information;
+ - exploit the Information commercially and non-commercially, for example by combining it with other Information, or by including it in your own product or application.
+
+ You must (where you do any of the above):
+
+ acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/.
+
+ ### Citation Information
+
+ *Ilias Chalkidis, Manos Fergadiotis, Nikos Manginas, Eva Katakalou and Prodromos Malakasiotis.*
+ *Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations.*
+ *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021). Online. 2021.*
+ ```
+ @inproceedings{chalkidis-etal-2021-regir,
+     title = "Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations",
+     author = "Chalkidis, Ilias and Fergadiotis, Manos and Manginas, Nikos and Katakalou, Eva and Malakasiotis, Prodromos",
+     booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)",
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/2101.10726",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"eu2uk": {"description": "EURegIR: Regulatory Compliance IR (EU/UK)\n", "citation": "@inproceedings{chalkidis-etal-2021-regir,\n title = \"Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations\",\n author = \"Chalkidis, Ilias and Fergadiotis, Emmanouil and Manginas, Nikos and Katakalou, Eva, and Malakasiotis, Prodromos\",\n booktitle = \"Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)\",\n year = \"2021\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://arxiv.org/abs/2101.10726\",\n}\n", "homepage": "https://archive.org/details/eacl2021_regir_dataset", "license": "CC BY-SA (Creative Commons / Attribution-ShareAlike)", "features": {"document_id": {"dtype": "string", "id": null, "_type": "Value"}, "publication_year": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "relevant_documents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "eu_regulatory_ir", "config_name": "eu2uk", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 20665038, "num_examples": 1400, "dataset_name": "eu_regulatory_ir"}, "test": {"name": "test", "num_bytes": 8844145, "num_examples": 300, "dataset_name": "eu_regulatory_ir"}, "validation": {"name": "validation", "num_bytes": 5852814, "num_examples": 300, "dataset_name": "eu_regulatory_ir"}, "uk_corpus": {"name": "uk_corpus", "num_bytes": 502468359, "num_examples": 52515, "dataset_name": "eu_regulatory_ir"}}, "download_checksums": {"https://archive.org/download/eacl2021_regir_datasets/eu2uk.zip": {"num_bytes": 119685577, "checksum": "8afc26fbd3559aa800ce0adb3c35c0e4c2c99b93529bc20d5f32ce44f55ba286"}}, "download_size": 119685577, "post_processing_size": null, "dataset_size": 537830356, "size_in_bytes": 657515933}, "uk2eu": {"description": "EURegIR: Regulatory Compliance IR (EU/UK)\n", "citation": "@inproceedings{chalkidis-etal-2021-regir,\n title = \"Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations\",\n author = \"Chalkidis, Ilias and Fergadiotis, Emmanouil and Manginas, Nikos and Katakalou, Eva, and Malakasiotis, Prodromos\",\n booktitle = \"Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)\",\n year = \"2021\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://arxiv.org/abs/2101.10726\",\n}\n", "homepage": "https://archive.org/details/eacl2021_regir_dataset", "license": "CC BY-SA (Creative Commons / Attribution-ShareAlike)", "features": {"document_id": {"dtype": "string", "id": null, "_type": "Value"}, "publication_year": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "relevant_documents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "eu_regulatory_ir", "config_name": "uk2eu", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": 
{"name": "train", "num_bytes": 55144655, "num_examples": 1500, "dataset_name": "eu_regulatory_ir"}, "test": {"name": "test", "num_bytes": 14810460, "num_examples": 300, "dataset_name": "eu_regulatory_ir"}, "validation": {"name": "validation", "num_bytes": 15175644, "num_examples": 300, "dataset_name": "eu_regulatory_ir"}, "eu_corpus": {"name": "eu_corpus", "num_bytes": 57212422, "num_examples": 3930, "dataset_name": "eu_regulatory_ir"}}, "download_checksums": {"https://archive.org/download/eacl2021_regir_datasets/uk2eu.zip": {"num_bytes": 31835104, "checksum": "767d4edc0c5210b80ccf24c31a6d28a62b64edffc69810f7027d34b0e1401194"}}, "download_size": 31835104, "post_processing_size": null, "dataset_size": 142343181, "size_in_bytes": 174178285}}
dummy/eu2uk/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca1d529d07c16c84b7cf16c179b737476ed9c0c99efd0534670ae73c33236925
+ size 38340
dummy/uk2eu/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b388307c3e2723f077e6a560f89e932d0b2a4d59d746fea85e031e6f2c5cb76
+ size 59683
eu_regulatory_ir.py ADDED
@@ -0,0 +1,149 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """EURegIR: Regulatory Compliance IR (EU/UK)"""
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{chalkidis-etal-2021-regir,
+     title = "Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations",
+     author = "Chalkidis, Ilias and Fergadiotis, Emmanouil and Manginas, Nikos and Katakalou, Eva, and Malakasiotis, Prodromos",
+     booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)",
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/2101.10726",
+ }
+ """
+
+ _DESCRIPTION = """\
+ EURegIR: Regulatory Compliance IR (EU/UK)
+ """
+
+ _HOMEPAGE = "https://archive.org/details/eacl2021_regir_dataset"
+
+ _LICENSE = "CC BY-SA (Creative Commons / Attribution-ShareAlike)"
+
+ _URLs = {
+     "eu2uk": "https://archive.org/download/eacl2021_regir_datasets/eu2uk.zip",
+     "uk2eu": "https://archive.org/download/eacl2021_regir_datasets/uk2eu.zip",
+ }
+
+
+ class EuRegulatoryIr(datasets.GeneratorBasedBuilder):
+     """EURegIR: Regulatory Compliance IR (EU/UK)"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="eu2uk", version=VERSION, description="EURegIR: Regulatory Compliance IR (EU2UK)"),
+         datasets.BuilderConfig(name="uk2eu", version=VERSION, description="EURegIR: Regulatory Compliance IR (UK2EU)"),
+     ]
+
+     def _info(self):
+         if self.config.name == "eu2uk":
+             features = datasets.Features(
+                 {
+                     "document_id": datasets.Value("string"),
+                     "publication_year": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "relevant_documents": datasets.features.Sequence(datasets.Value("string")),
+                 }
+             )
+         else:
+             features = datasets.Features(
+                 {
+                     "document_id": datasets.Value("string"),
+                     "publication_year": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "relevant_documents": datasets.features.Sequence(datasets.Value("string")),
+                 }
+             )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # The features are identical for both configurations
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "train.jsonl"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": os.path.join(data_dir, "test.jsonl"), "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "dev.jsonl"),
+                     "split": "dev",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=f"{self.config.name.split('2')[1]}_corpus",
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "corpus.jsonl"),
+                     "split": f"{self.config.name.split('2')[1]}_corpus",
+                 },
+             ),
+         ]
+
+     def _generate_examples(
+         self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     ):
+         """Yields examples as (key, example) tuples."""
+         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
+         # The `key` is here for legacy reason (tfds) and is not important in itself.
+
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 data = json.loads(row)
+                 yield id_, {
+                     "document_id": data["document_id"],
+                     "text": data["text"],
+                     "publication_year": data["publication_year"],
+                     "relevant_documents": data["relevant_documents"]
+                     if split != f"{self.config.name.split('2')[1]}_corpus"
+                     else [],
+                 }