Asier Gutiérrez Fandiño committed on
Commit 6df3a2d
1 Parent(s): b04c1a2

Initial commit

Files changed (6)
  1. .gitattributes +1 -0
  2. README.md +247 -0
  3. SQAC.py +143 -0
  4. dev.json +3 -0
  5. test.json +3 -0
  6. train.json +3 -0
.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,247 @@
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- es
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Spanish Question Answering Corpus (SQAC)
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# SQAC (Spanish Question-Answering Corpus): An extractive QA dataset for the Spanish language

## BibTeX citation

```bibtex
@article{DBLP:journals/corr/abs-2107-07253,
  author    = {Asier Guti{\'{e}}rrez{-}Fandi{\~{n}}o and
               Jordi Armengol{-}Estap{\'{e}} and
               Marc P{\`{a}}mies and
               Joan Llop{-}Palao and
               Joaqu{\'{\i}}n Silveira{-}Ocampo and
               Casimiro Pio Carrino and
               Aitor Gonzalez{-}Agirre and
               Carme Armentano{-}Oller and
               Carlos Rodr{\'{\i}}guez Penagos and
               Marta Villegas},
  title     = {Spanish Language Models},
  journal   = {CoRR},
  volume    = {abs/2107.07253},
  year      = {2021},
  url       = {https://arxiv.org/abs/2107.07253},
  archivePrefix = {arXiv},
  eprint    = {2107.07253},
  timestamp = {Wed, 21 Jul 2021 15:55:35 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2107-07253.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

See the pre-print version of our paper for further details: https://arxiv.org/abs/2107.07253

<!-- ## Digital Object Identifier (DOI) and access to dataset files -->

## Introduction

This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment.

The sources of the contexts are:
* Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under the [CC-BY-SA licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News from [Wikinews in Spanish](https://es.wikinews.org/), used under the [CC-BY licence](https://creativecommons.org/licenses/by/2.5/).
* Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), a mix of different newswire and literature sources, used under the [CC-BY licence](https://creativecommons.org/licenses/by/4.0/legalcode).

This dataset can be used to build extractive-QA systems.
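Once published, the dataset can be loaded directly through the Hugging Face `datasets` library. A minimal sketch (the repository id `BSC-TeMU/SQAC` is the one used by the loading script in this repository):

```python
from datasets import load_dataset

# Load the train, validation and test splits via the SQAC loading script.
sqac = load_dataset("BSC-TeMU/SQAC")

# Inspect the splits and their sizes.
print({split: ds.num_rows for split, ds in sqac.items()})
```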

### Supported Tasks and Leaderboards

Extractive-QA

### Languages

ES - Spanish

### Directory structure

* README.md
* dev.json
* test.json
* train.json
* SQAC.py

## Dataset Structure

### Data Instances

SQuAD-style JSON files, one per split (see the example below).

### Data Fields

Follows the SQuAD v1 format of Rajpurkar et al. (2016); see below for the full reference.
We added a field "source" with the source of the context.

### Example

<pre>
{
  "data": [
    {
      "paragraphs": [
        {
          "context": "Al cogote, y fumando como una cafetera. Ah!, no era él, éramos todos nosotros. Luego llegó Billie Holiday. Bajo el epígrafe Arte, la noche temática, pasaron la vida de la única cantante del universo que no es su voz, sino su alma lo que se escucha cuando interpreta. Gata golpeada por el mundo, pateada, violada, enganchada a todos los paraísos artificiales del planeta, jamás encontró el Edén. El Edén lo encontramos nosotros cuando, al concluir la sesión de la tele, pusimos en la doméstica cadena de sonido el mítico Last Recording, su última grabación (marzo de 1959), con la orquesta de Ray Ellis y el piano de Hank Jones. Se estaba muriendo Lady Day, y no obstante, mientras moría, su alma cantaba, Baby, won't you please come home. O sea, niño, criatura, amor, vuelve, a casa por favor.",
          "qas": [
            {
              "question": "¿Quién se incorporó a la reunión más adelante?",
              "id": "c5429572-64b8-4c5d-9553-826f867b07be",
              "answers": [
                {
                  "answer_start": 91,
                  "text": "Billie Holiday"
                }
              ]
            },
            ...
          ]
        }
      ],
      "title": "P_129_20010702_&_P_154_20010102_&_P_108_20000301_c_&_P_108_20000601_d",
      "source": "ancora"
    },
    ...
  ]
}
</pre>
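When loaded with the `SQAC.py` script below, this nested SQuAD-style JSON is flattened into one record per question. A sketch (note that the "source" field is present in the raw JSON files but is not among the features exposed by the loading script):

```python
from datasets import load_dataset

sqac = load_dataset("BSC-TeMU/SQAC")
example = sqac["train"][0]

# Fields declared in SQAC.py: id, title, context, question, answers.
print(example["question"])
print(example["answers"])  # {"text": [...], "answer_start": [...]}
```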

### Data Splits

- train
- development
- test

## Content analysis

### Number of articles, paragraphs and questions

* Number of articles: 3,834
* Number of contexts: 6,247
* Number of questions: 18,817
* Questions/context: 3.01
* Number of sentences: 48,026
* Sentences/context: 7.70

### Number of tokens

* Total tokens in contexts: 1,561,616
* Tokens/context: 250.30
* Total tokens in questions: 203,235
* Tokens in questions/questions: 10.80
* Tokens in questions/tokens in contexts: 0.13
* Total tokens in answers: 90,307
* Tokens in answers/answers: 4.80
* Tokens in answers/tokens in contexts: 0.06

### Lexical variation

46.38 % of the words in the questions can also be found in their contexts.
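A figure of this kind can be approximated with a simple token-overlap computation. A sketch (the tokenization behind the reported number is not specified, so results may differ slightly):

```python
from datasets import load_dataset

def lexical_overlap(question: str, context: str) -> float:
    """Fraction of question tokens that also occur in the context."""
    q_tokens = question.lower().split()
    c_tokens = set(context.lower().split())
    return sum(t in c_tokens for t in q_tokens) / max(len(q_tokens), 1)

sqac = load_dataset("BSC-TeMU/SQAC")
overlaps = [lexical_overlap(ex["question"], ex["context"]) for ex in sqac["train"]]
print(f"{100 * sum(overlaps) / len(overlaps):.2f} % mean question-context overlap")
```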

### Question type

| Question | Count | % |
|----------|------:|--------:|
| qué | 6,381 | 33.91 % |
| quién/es | 2,952 | 15.69 % |
| cuál/es | 2,034 | 10.81 % |
| cómo | 1,949 | 10.36 % |
| dónde | 1,856 | 9.86 % |
| cuándo | 1,639 | 8.71 % |
| cuánto | 1,311 | 6.97 % |
| cuántos | 495 | 2.63 % |
| adónde | 100 | 0.53 % |
| cuánta | 49 | 0.26 % |
| no question mark | 43 | 0.23 % |
| cuántas | 19 | 0.10 % |
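The distribution above can be approximated by matching the interrogative word in each question. A sketch (the exact matching rules behind the table are not documented, so counts will only be close):

```python
import re
from collections import Counter

from datasets import load_dataset

# Prefix matching groups quién/quiénes and cuál/cuáles as in the table;
# "adónde" is listed first so it is not counted under "dónde".
INTERROGATIVES = ["adónde", "dónde", "cuándo", "cuántos", "cuántas",
                  "cuánto", "cuánta", "quién", "cuál", "cómo", "qué"]

def question_type(question: str) -> str:
    q = question.lower()
    for word in INTERROGATIVES:
        if re.search(rf"\b{word}", q):
            return word
    return "no interrogative found"

sqac = load_dataset("BSC-TeMU/SQAC")
print(Counter(question_type(ex["question"]) for ex in sqac["train"]).most_common())
```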


## Dataset Creation

### Methodology

6,247 contexts were randomly chosen from the three corpora described below. We commissioned the creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [Rajpurkar, Pranav et al. “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250). In total, 18,817 pairs of a question and an extracted fragment containing the answer were created.

### Curation Rationale

For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible. We also created another QA dataset with Wikipedia to ensure thematic and stylistic variety.

### Source Data

- Spanish Wikipedia: https://es.wikipedia.org
- Spanish Wikinews: https://es.wikinews.org/
- AnCora corpus: http://clic.ub.edu/corpus/en

#### Initial Data Collection and Normalization

The source data are scraped articles from the Spanish Wikipedia and Wikinews sites, and text from the AnCora corpus.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [Rajpurkar, Pranav et al. “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250).

#### Who are the annotators?

Native Spanish speakers.

### Dataset Curators

Carlos Rodríguez and Carme Armentano, from BSC-CNS.

### Personal and Sensitive Information

No personal or sensitive information is included.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Contact

Carlos Rodríguez-Penagos (carlos.rodriguez1@bsc.es) and Carme Armentano-Oller (carme.armentano@bsc.es)

## Funding

This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

## License

<a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/"><img alt="Attribution-ShareAlike 4.0 International License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
SQAC.py ADDED
@@ -0,0 +1,143 @@
# Loading script for the SQAC dataset.
import json

import datasets

logger = datasets.logging.get_logger(__name__)

# Raw string so the BibTeX accent escapes (e.g. {\'{e}}) survive verbatim.
_CITATION = r"""
@article{DBLP:journals/corr/abs-2107-07253,
  author    = {Asier Guti{\'{e}}rrez{-}Fandi{\~{n}}o and
               Jordi Armengol{-}Estap{\'{e}} and
               Marc P{\`{a}}mies and
               Joan Llop{-}Palao and
               Joaqu{\'{\i}}n Silveira{-}Ocampo and
               Casimiro Pio Carrino and
               Aitor Gonzalez{-}Agirre and
               Carme Armentano{-}Oller and
               Carlos Rodr{\'{\i}}guez Penagos and
               Marta Villegas},
  title     = {Spanish Language Models},
  journal   = {CoRR},
  volume    = {abs/2107.07253},
  year      = {2021},
  url       = {https://arxiv.org/abs/2107.07253},
  archivePrefix = {arXiv},
  eprint    = {2107.07253},
  timestamp = {Wed, 21 Jul 2021 15:55:35 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2107-07253.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
"""

_DESCRIPTION = """
This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment.

The sources of the contexts are:

* Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-BY-SA licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).

* News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-BY licence](https://creativecommons.org/licenses/by/2.5/).

* Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix of different newswire and literature sources, used under [CC-BY licence](https://creativecommons.org/licenses/by/4.0/legalcode).

This dataset can be used to build extractive-QA systems.
"""

_HOMEPAGE = ""

_URL = "https://huggingface.co/datasets/BSC-TeMU/SQAC/resolve/main/"
_TRAINING_FILE = "train.json"
_DEV_FILE = "dev.json"
_TEST_FILE = "test.json"


class SQACConfig(datasets.BuilderConfig):
    """Builder config for the SQAC dataset."""

    def __init__(self, **kwargs):
        """BuilderConfig for SQAC.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(SQACConfig, self).__init__(**kwargs)


class SQAC(datasets.GeneratorBasedBuilder):
    """SQAC Dataset."""

    BUILDER_CONFIGS = [
        SQACConfig(
            name="SQAC",
            # version=datasets.Version("1.0.1"),
            description="SQAC dataset",
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "title": datasets.Value("string"),
                    "context": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "answers": datasets.features.Sequence(
                        {
                            "text": datasets.Value("string"),
                            "answer_start": datasets.Value("int32"),
                        }
                    ),
                }
            ),
            # No default supervised_keys (as we have to pass both question
            # and context as input).
            supervised_keys=None,
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": f"{_URL}{_TRAINING_FILE}",
            "dev": f"{_URL}{_DEV_FILE}",
            "test": f"{_URL}{_TEST_FILE}",
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
        ]

    def _generate_examples(self, filepath):
        """This function returns the examples in the raw (text) form."""
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            # json.load does not take an `encoding` argument in Python 3;
            # the file is already opened as UTF-8 above.
            sqac = json.load(f)
            for article in sqac["data"]:
                title = article.get("title", "").strip()
                for paragraph in article["paragraphs"]:
                    context = paragraph["context"].strip()
                    for qa in paragraph["qas"]:
                        question = qa["question"].strip()
                        id_ = qa["id"]

                        answer_starts = [answer["answer_start"] for answer in qa["answers"]]
                        answers = [answer["text"].strip() for answer in qa["answers"]]

                        # Features currently used are "context", "question", and "answers".
                        # Others are extracted here for the ease of future expansions.
                        yield id_, {
                            "title": title,
                            "context": context,
                            "question": question,
                            "id": id_,
                            "answers": {
                                "answer_start": answer_starts,
                                "text": answers,
                            },
                        }
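To sanity-check the loader before publishing, `datasets` can also be pointed at the local script. A sketch (the JSON files are still fetched from the Hub URLs declared in `_URL` above):

```python
from datasets import load_dataset

# Build the dataset from the local loading script.
sqac = load_dataset("./SQAC.py")
print(sqac["validation"][0]["question"])
```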
dev.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec748f222626e5081a34193ca469bcf8112092837678a6948a2d6ae7d6629d1a
size 1402428
test.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af84d20d4afeba9a6936c3c7d8b5f577a15af5461c8f0e29f94286bb2205b18d
size 1345694
train.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1d5c76176646e2ae7bdcd8b5ec6f18349102a9363aa25ad7d0e48262d7480d43
size 11042089
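The three JSON files above are committed as Git LFS pointers; the resolved data can be fetched programmatically with `huggingface_hub`. A sketch, assuming the files stay at the repository path used by the loading script:

```python
from huggingface_hub import hf_hub_download

# Download the raw SQuAD-style JSON files (LFS-resolved) from the dataset repo.
for name in ("train.json", "dev.json", "test.json"):
    print(hf_hub_download(repo_id="BSC-TeMU/SQAC", filename=name, repo_type="dataset"))
```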