antxa committed
Commit 28d5dca
1 Parent(s): 2e44286

Add ElkarHizketak v1.0 dataset (#3780)


* Add ElkarHizketak v1.0 dataset

* Update datasets/elkarhizketak/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/elkarhizketak/README.md

Add missing sections to the ToC

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update dataset information and delete dummy files

* Update dummy files and code for generating examples to return just one example

* Apply suggestions from code review

* Update elkarhizketak.py

* fill empty sections

Co-authored-by: Arantxa Otegi <arantza.otegi@ehu.eus>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>

Commit from https://github.com/huggingface/datasets/commit/cf47649eaed608fb7030f692020a0921e16f23c8

README.md ADDED
@@ -0,0 +1,206 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - crowdsourced
+ languages:
+ - eu
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ - other-dialogue
+ task_ids:
+ - extractive-qa
+ pretty_name: ElkarHizketak
+ ---
+
+ # Dataset Card for ElkarHizketak
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+     - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+     - [Who are the source language producers?](#who-are-the-source-language-producers)
+   - [Annotations](#annotations)
+     - [Annotation process](#annotation-process)
+     - [Who are the annotators?](#who-are-the-annotators)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [ElkarHizketak homepage](http://ixa.si.ehu.es/node/12934)
+ - **Paper:** [Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque](https://aclanthology.org/2020.lrec-1.55/)
+ - **Point of Contact:** [Arantxa Otegi](mailto:arantza.otegi@ehu.eus)
+
+ ### Dataset Summary
+
+ ElkarHizketak is a low-resource conversational Question Answering (QA) dataset in Basque created by Basque-speaking volunteers. The dataset contains close to 400 dialogues and more than 1,600 questions and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. Each dialogue involves two crowd workers: (1) a student asks questions after reading a short introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section.
+
+ ### Supported Tasks and Leaderboards
+
+ - `extractive-qa`: The dataset can be used to train a model for conversational question answering, where each answer is a span extracted from the section text. A usage sketch is shown below.
+
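+ For example, with the `datasets` library (this assumes the dataset loads under the `elkarhizketak` name used by this repository):
+
+ ```python
+ from datasets import load_dataset
+
+ # Download and cache the three splits of the plain_text configuration.
+ dataset = load_dataset("elkarhizketak", "plain_text")
+
+ # Each example is one question turn together with its dialogue context.
+ example = dataset["train"][0]
+ print(example["question"])
+ print(example["answers"]["text"])
+ ```
+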
+ ### Languages
+
+ The text in the dataset is in Basque.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the train split:
+
+ ```
+ {'dialogue_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d',
+  'wikipedia_page_title': 'Howard Becker',
+  'background': 'Howard Saul Becker (Chicago,Illinois, 1928ko apirilaren 18an) Estatu Batuetako soziologoa bat da. Bere ekarpen handienak desbiderakuntzaren soziologian, artearen soziologian eta musikaren soziologian egin ditu. "Outsiders" (1963) bere lanik garrantzitsuetako da eta bertan garatu zuen bere etiketatze-teoria. Nahiz eta elkarrekintza sinbolikoaren edo gizarte-konstruktibismoaren korronteen barruan sartu izan, berak ez du bere burua inongo paradigman kokatzen. Chicagoko Unibertsitatean graduatua, Becker Chicagoko Soziologia Eskolako bigarren belaunaldiaren barruan kokatu ohi da, Erving Goffman eta Anselm Strauss-ekin batera.',
+  'section_title': 'Hastapenak eta hezkuntza.',
+  'context': 'Howard Saul Becker Chicagon jaio zen 1928ko apirilaren 18an. Oso gazte zelarik piano jotzen asi zen eta 15 urte zituenean dagoeneko tabernetan aritzen zen pianoa jotzen. Beranduago Northwestern Unibertsitateko banda batean jo zuen. Beckerren arabera, erdi-profesional gisa aritu ahal izan zen Bigarren Mundu Gerra tokatu eta musikari gehienak soldadugai zeudelako. Musikari bezala egin zuen lan horretan egin zuen lehen aldiz drogaren kulturaren ezagutza, aurrerago ikerketa-gai hartuko zuena. 1946an bere graduazpiko soziologia titulua lortu zuen Chicagoko Unibertsitatean. Ikasten ari zen bitartean, pianoa jotzen jarraitu zuen modu erdi-profesionalean. Hala ere, soziologiako masterra eta doktoretza eskuratu zituen Chicagoko Unibertsitatean. Unibertsitate horretan Chicagoko Soziologia Eskolaren jatorrizko tradizioaren barruan hezia izan zen. Chicagoko Soziologia Eskolak garrantzi berezia ematen zion datu kualitatiboen analisiari eta Chicagoko hiria hartzen zuen ikerketa eremu bezala. Beckerren hasierako lan askok eskola honen tradizioaren eragina dute, bereziko Everett C. Hughes-en eragina, bere tutore eta gidari izan zena. Askotan elkarrekintzaile sinboliko bezala izendatua izan da, nahiz eta Beckerek berak ez duen gogoko izendapen hori. Haren arabera, bere leinu akademikoa Georg Simmel, Robert E. Park eta Everett Hughes dira. Doktoretza lortu ostean, 23 urterekin, Beckerrek marihuanaren erabilpena ikertu zuen "Institut for Juvenil Reseac"h-en. Ondoren Illinoisko Unibertsitatean eta Standfor Unibertsitateko ikerketa institutu batean aritu zen bere irakasle karrera hasi aurretik. CANNOTANSWER',
+  'turn_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d_q#0',
+  'question': 'Zer da desbiderakuntzaren soziologia?',
+  'yesno': 2,
+  'answers': {'text': ['CANNOTANSWER'],
+              'answer_start': [1601],
+              'input_text': ['CANNOTANSWER']},
+  'orig_answer': {'text': 'CANNOTANSWER', 'answer_start': 1601}}
+ ```
+
+ ### Data Fields
+
+ The different fields are the following (a short inspection sketch follows the list):
+
+ - `dialogue_id`: identifier of the dialogue as a string,
+ - `wikipedia_page_title`: title of the Wikipedia page as a string,
+ - `background`: introduction of the Wikipedia page as a string,
+ - `section_title`: title of the section as a string,
+ - `context`: text of the section, used as the context of the question, as a string,
+ - `turn_id`: identifier of the question turn as a string,
+ - `question`: question as a string,
+ - `yesno`: class label representing whether the question is a yes/no question. Possible values are "y" (0), "n" (1) and "x" (2),
+ - `answers`: a dictionary with three fields:
+   - `text`: list of answer texts as strings,
+   - `answer_start`: list of positions of the answers in the context as an int32,
+   - `input_text`: list of strings,
+ - `orig_answer`: a dictionary with two fields:
+   - `text`: original answer text as a string,
+   - `answer_start`: original position of the answer as an int32.
+
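+ Since `yesno` is stored as an integer class label and answers are character offsets into `context`, the fields can be checked like this (a sketch, under the same loading assumption as above):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("elkarhizketak", "plain_text")
+ example = dataset["train"][0]
+
+ # Recover the class name ("y", "n" or "x") from the integer label.
+ print(dataset["train"].features["yesno"].int2str(example["yesno"]))
+
+ # Answers are extractive: each answer_start indexes into the context string.
+ for text, start in zip(example["answers"]["text"], example["answers"]["answer_start"]):
+     assert example["context"][start:start + len(text)] == text
+ ```
+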
+ ### Data Splits
+
+ The data is split into training, development and test sets. The split sizes are as follows:
+
+ - train: 1,306 questions / 301 dialogues
+ - development: 161 questions / 38 dialogues
+ - test: 167 questions / 38 dialogues
+
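+ A quick sanity check of these sizes (a sketch, under the same loading assumption as above):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("elkarhizketak", "plain_text")
+ # The loading script yields one example per question turn, so the row
+ # counts should match the question counts listed above.
+ for split_name, split in dataset.items():
+     print(split_name, len(split))
+ ```
+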
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ This is the first non-English conversational QA dataset and the first conversational dataset for Basque. Its small size presents a realistic low-resource scenario for conversational QA systems.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ First, we selected sections of Wikipedia articles about people, as less specialized knowledge is required to converse about people than about other categories. In order to retrieve the articles, we selected the following categories of the Basque Wikipedia: Biografia (the equivalent of the Biography category in the English Wikipedia), Biografiak (People) and Gizabanako biziak (Living people). We applied this category filter and downloaded the articles using a querying tool provided by the Wikimedia Foundation. Once we had retrieved the articles, we selected sections of them that contained between 175 and 300 words. These filters and thresholds were set after pilot studies in which we checked the adequacy of the people covered by the selected articles and the length of the passages, so as to have enough, but not too much, information to hold a conversation; a sketch of the section-length filter is shown below.
+
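+ A minimal sketch of that section-length filter (illustrative only: the function name and the whitespace-based word count are assumptions, not the original tooling):
+
+ ```python
+ def keep_section(section_text: str, min_words: int = 175, max_words: int = 300) -> bool:
+     """Keep only sections whose approximate word count is in the target range."""
+     n_words = len(section_text.split())
+     return min_words <= n_words <= max_words
+ ```
+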
+ Then, the dialogues were collected during online sessions that we arranged with Basque-speaking volunteers. Each dialogue involves two crowd workers: (1) a student asks questions after reading a short introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section.
+
+ #### Who are the source language producers?
+
+ The language producers are Basque-speaking volunteers who held the conversations using a text-based chat interface developed for this purpose.
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was created by Arantxa Otegi, Jon Ander Campos, Aitor Soroa and Eneko Agirre from the [HiTZ Basque Center for Language Technologies](https://www.hitz.eus/) and the [Ixa NLP Group](https://www.ixa.eus/) at the University of the Basque Country (UPV/EHU).
+
+ ### Licensing Information
+
+ Copyright (C) by Ixa Taldea, University of the Basque Country UPV/EHU.
+
+ This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0).
+ To view a copy of this license, visit [https://creativecommons.org/licenses/by-sa/4.0/legalcode](https://creativecommons.org/licenses/by-sa/4.0/legalcode).
+
+ ### Citation Information
+
+ If you use this dataset in your work, please cite this publication:
+
+ ```bibtex
+ @inproceedings{otegi-etal-2020-conversational,
+     title = "{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for {B}asque}",
+     author = "Otegi, Arantxa and
+       Agirre, Aitor and
+       Campos, Jon Ander and
+       Soroa, Aitor and
+       Agirre, Eneko",
+     booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
+     year = "2020",
+     address = "Marseille, France",
+     publisher = "European Language Resources Association",
+     url = "https://aclanthology.org/2020.lrec-1.55",
+     pages = "436--442",
+     ISBN = "979-10-95546-34-4",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@antxa](https://github.com/antxa) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"plain_text": {"description": "\nElkarHizketak is a low resource conversational Question Answering\n(QA) dataset in Basque created by Basque speaker volunteers. The\ndataset contains close to 400 dialogues and more than 1600 question\nand answers, and its small size presents a realistic low-resource\nscenario for conversational QA systems. The dataset is built on top of\nWikipedia sections about popular people and organizations. The\ndialogues involve two crowd workers: (1) a student ask questions after\nreading a small introduction about the person, but without seeing the\nsection text; and (2) a teacher answers the questions selecting a span\nof text of the section. ", "citation": "@inproceedings{otegi-etal-2020-conversational,\n title = \"{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for {B}asque}\",\n author = \"Otegi, Arantxa and\n Agirre, Aitor and\n Campos, Jon Ander and\n Soroa, Aitor and\n Agirre, Eneko\",\n booktitle = \"Proceedings of the 12th Language Resources and Evaluation Conference\",\n year = \"2020\",\n publisher = \"European Language Resources Association\",\n url = \"https://aclanthology.org/2020.lrec-1.55\",\n pages = \"436--442\",\n ISBN = \"979-10-95546-34-4\",\n}\n", "homepage": "http://ixa.si.ehu.es/node/12934", "license": "Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)", "features": {"dialogue_id": {"dtype": "string", "id": null, "_type": "Value"}, "wikipedia_page_title": {"dtype": "string", "id": null, "_type": "Value"}, "background": {"dtype": "string", "id": null, "_type": "Value"}, "section_title": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "turn_ids": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "questions": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "yesnos": {"feature": {"num_classes": 3, "names": ["y", "n", "x"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "answers": {"feature": {"texts": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answer_starts": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "input_texts": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "orig_answers": {"texts": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answer_starts": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "elkarhizketak", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1024378, "num_examples": 301, "dataset_name": "elkarhizketak"}, "validation": {"name": "validation", "num_bytes": 125667, "num_examples": 38, "dataset_name": "elkarhizketak"}, "test": {"name": "test", "num_bytes": 127640, "num_examples": 38, "dataset_name": "elkarhizketak"}}, "download_checksums": {"http://ixa2.si.ehu.es/convai/elkarhizketak-v1.0/elkarhizketak-train-v1.0.json": {"num_bytes": 1543845, 
"checksum": "36674936820c9a5d8a5de144776dd57e2e4f5f63eec6ac45f93e47e5fd9daecd"}, "http://ixa2.si.ehu.es/convai/elkarhizketak-v1.0/elkarhizketak-dev-v1.0.json": {"num_bytes": 189736, "checksum": "fbf2e14b63de9a8406a9b44dccd0e2c4dcdf07724af737ccd05e06311a632f57"}, "http://ixa2.si.ehu.es/convai/elkarhizketak-v1.0/elkarhizketak-test-v1.0.json": {"num_bytes": 193893, "checksum": "311154feb69ede265ed695f97ab81811d78d837572114396b6e8779fdeb3e3f0"}}, "download_size": 1927474, "post_processing_size": null, "dataset_size": 1277685, "size_in_bytes": 3205159}}
dummy/plain_text/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:097409039353bbf8eb9afbcf4b41e2761b7d24a319313bcc8c7ffa3f14854e7d
+ size 6188
elkarhizketak.py ADDED
@@ -0,0 +1,171 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """ElkarHizketak: Conversational Question Answering dataset in Basque"""
+
+ import json
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @inproceedings{otegi-etal-2020-conversational,
+     title = "{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for {B}asque}",
+     author = "Otegi, Arantxa and
+       Agirre, Aitor and
+       Campos, Jon Ander and
+       Soroa, Aitor and
+       Agirre, Eneko",
+     booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
+     year = "2020",
+     publisher = "European Language Resources Association",
+     url = "https://aclanthology.org/2020.lrec-1.55",
+     pages = "436--442",
+     ISBN = "979-10-95546-34-4",
+ }
+ """
+
+ _DESCRIPTION = """
+ ElkarHizketak is a low resource conversational Question Answering
+ (QA) dataset in Basque created by Basque speaker volunteers. The
+ dataset contains close to 400 dialogues and more than 1600 questions
+ and answers, and its small size presents a realistic low-resource
+ scenario for conversational QA systems. The dataset is built on top of
+ Wikipedia sections about popular people and organizations. The
+ dialogues involve two crowd workers: (1) a student asks questions after
+ reading a small introduction about the person, but without seeing the
+ section text; and (2) a teacher answers the questions selecting a span
+ of text of the section. """
+
+ _HOMEPAGE = "http://ixa.si.ehu.es/node/12934"
+
+ _LICENSE = "Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)"
+
+ _URLs = {
+     "train": "http://ixa2.si.ehu.es/convai/elkarhizketak-v1.0/elkarhizketak-train-v1.0.json",
+     "validation": "http://ixa2.si.ehu.es/convai/elkarhizketak-v1.0/elkarhizketak-dev-v1.0.json",
+     "test": "http://ixa2.si.ehu.es/convai/elkarhizketak-v1.0/elkarhizketak-test-v1.0.json",
+ }
+
+
+ class Elkarhizketak(datasets.GeneratorBasedBuilder):
+     """ElkarHizketak: Conversational Question Answering dataset in Basque. Version 1.0."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="plain_text",
+             description="Plain text",
+             version=VERSION,
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "dialogue_id": datasets.Value("string"),
+                     "wikipedia_page_title": datasets.Value("string"),
+                     "background": datasets.Value("string"),
+                     "section_title": datasets.Value("string"),
+                     "context": datasets.Value("string"),
+                     "turn_id": datasets.Value("string"),
+                     "question": datasets.Value("string"),
+                     "yesno": datasets.ClassLabel(names=["y", "n", "x"]),
+                     "answers": datasets.Sequence(
+                         {
+                             "text": datasets.Value("string"),
+                             "answer_start": datasets.Value("int32"),
+                             "input_text": datasets.Value("string"),
+                         }
+                     ),
+                     "orig_answer": {
+                         "text": datasets.Value("string"),
+                         "answer_start": datasets.Value("int32"),
+                     },
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": data_dir["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": data_dir["validation"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": data_dir["test"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields one example per question turn of each dialogue."""
+         logger.info("generating examples from = %s", filepath)
+
+         key = 0
+         with open(filepath, encoding="utf-8") as f:
+             elkarhizketak = json.load(f)
+             # The source JSON groups dialogues by Wikipedia section: each section
+             # holds one or more dialogues ("paragraphs"), and each dialogue holds
+             # a list of question-answer turns ("qas").
+             for section in elkarhizketak["data"]:
+                 wiki_page_title = section.get("title", "").strip()
+                 background = section.get("background", "").strip()
+                 section_title = section.get("section_title", "").strip()
+                 for dialogue in section["paragraphs"]:
+                     context = dialogue["context"].strip()
+                     dialogue_id = dialogue["id"]
+                     for qa in dialogue["qas"]:
+                         # Collect the span annotations for this question turn.
+                         answer_starts = [answer["answer_start"] for answer in qa["answers"]]
+                         answers = [answer["text"].strip() for answer in qa["answers"]]
+                         input_texts = [answer["input_text"].strip() for answer in qa["answers"]]
+                         yield key, {
+                             "wikipedia_page_title": wiki_page_title,
+                             "background": background,
+                             "section_title": section_title,
+                             "context": context,
+                             "dialogue_id": dialogue_id,
+                             "question": qa["question"],
+                             "turn_id": qa["id"],
+                             "yesno": qa["yesno"],
+                             "answers": {
+                                 "answer_start": answer_starts,
+                                 "text": answers,
+                                 "input_text": input_texts,
+                             },
+                             "orig_answer": {
+                                 "answer_start": qa["orig_answer"]["answer_start"],
+                                 "text": qa["orig_answer"]["text"],
+                             },
+                         }
+                         key += 1