ccasimiro committed
Commit 4effc5f · 1 Parent(s): 9a72997

Upload dataset

Files changed (5):
1. README.md +236 -0
2. dev.json +0 -0
3. test.json +0 -0
4. train.json +0 -0
5. vilaquad.py +197 -0
README.md ADDED
@@ -0,0 +1,236 @@
---

languages:

- ca

---

# VilaQuAD: an extractive QA dataset for Catalan, from Vilaweb newswire text

## BibTeX citation

If you use any of these resources (datasets or models) in your work, please cite our latest paper:

```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi and
      Carrino, Casimiro Pio and
      Rodriguez-Penagos, Carlos and
      de Gibert Bonet, Ona and
      Armentano-Oller, Carme and
      Gonzalez-Agirre, Aitor and
      Melero, Maite and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```
## Digital Object Identifier (DOI) and access to dataset files

https://doi.org/10.5281/zenodo.4562337

## Introduction

This dataset contains 2095 Catalan-language news contexts, each with 1 to 5 questions referring to the fragment (or context).
VilaQuAD articles are extracted from the Catalan daily Vilaweb (www.vilaweb.cat) and are used under a CC BY-NC-ND licence (https://creativecommons.org/licenses/by-nc-nd/3.0/deed.ca).
This dataset can be used to build extractive-QA systems and language models.

### Supported Tasks and Leaderboards

Extractive QA, language modelling

### Languages

CA - Catalan

### Directory structure

* README.md
* dev.json
* test.json
* train.json
* vilaquad.py

## Dataset Structure

### Data Instances

Three JSON files, one per split.

### Data Fields

Follows (Rajpurkar, Pranav et al., 2016) for SQuAD v1 datasets (see below for full reference).

### Example:

<pre>
{
  "data": [
    {
      "title": "Com celebrar el Cap d'Any 2020? Deu propostes per a acomiadar-se del 2019",
      "paragraphs": [
        {
          "context": "Hi ha moltes propostes per a acomiadar-se d'aquest 2019. Els uns es queden a casa, els altres volen anar lluny o sortir al teatre. També s'organitzen festes o festivals a l'engròs, fins i tot hi ha propostes diürnes. Tot és possible per Cap d'Any. Encara no sabeu com celebrar l'entrada el 2020? Us oferim una llista amb deu propostes variades arreu dels Països Catalans: Festivern El Festivern enguany celebra quinze anys.",
          "qas": [
            {
              "answers": [
                {
                  "text": "festes o festivals",
                  "answer_start": 150
                }
              ],
              "id": "P_23_C_23_Q2",
              "question": "Què s'organitza a l'engròs per acomiadar el 2019?"
            },
            ...
          ]
        }
      ]
    },
    ...
  ]
}
</pre>
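The `answer_start` field is a 0-based character offset into `context`, as in SQuAD v1. A quick check in Python, using a shortened copy of the example context above:

```python
# "answer_start" is a 0-based character offset into "context".
# Shortened copy of the example context shown above:
context = (
    "Hi ha moltes propostes per a acomiadar-se d'aquest 2019. "
    "Els uns es queden a casa, els altres volen anar lluny o sortir al teatre. "
    "També s'organitzen festes o festivals a l'engròs, fins i tot hi ha propostes diürnes."
)
answer = {"text": "festes o festivals", "answer_start": 150}

start = answer["answer_start"]
span = context[start:start + len(answer["text"])]
assert span == answer["text"]
print(span)  # festes o festivals
```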

### Data Splits

* train.json: 1295 contexts, 3882 questions
* dev.json: 400 contexts, 1200 questions
* test.json: 400 contexts, 1200 questions
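All three files use the SQuAD v1 JSON layout, so figures like the ones above can be recomputed in a few lines. `split_stats` below is an illustrative helper, not part of this repository, shown on a tiny inline sample of the same shape:

```python
def split_stats(squad_like):
    """Count contexts and questions in a SQuAD-v1-style dict."""
    n_contexts = n_questions = 0
    for article in squad_like["data"]:
        for paragraph in article["paragraphs"]:
            n_contexts += 1
            n_questions += len(paragraph["qas"])
    return n_contexts, n_questions

# Tiny inline sample with the same shape as train/dev/test.json:
sample = {"data": [{"title": "t", "paragraphs": [
    {"context": "c1", "qas": [{"id": "q1"}, {"id": "q2"}]},
    {"context": "c2", "qas": [{"id": "q3"}]},
]}]}

print(split_stats(sample))  # (2, 3)
```

Running the same function over the actual files (after `json.load`) should reproduce the split sizes listed above.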

## Content analysis

### Number of articles, paragraphs and questions

* Number of contexts: 2095
* Number of questions: 6282
* Questions/context: 2.99
* Number of sentences in contexts: 11901
* Sentences/context: 5.6

### Number of tokens

* Tokens in contexts: 422477
* Tokens/context: 201.66
* Tokens in questions: 65849
* Tokens/question: 10.48
* Tokens in answers: 27716
* Tokens/answer: 4.41
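The per-unit averages follow directly from the raw totals; a quick arithmetic check:

```python
# Averages reported above, recomputed from the raw totals.
assert round(422477 / 2095, 2) == 201.66  # tokens/context
assert round(65849 / 6282, 2) == 10.48    # tokens/question
assert round(27716 / 6282, 2) == 4.41     # tokens/answer
print("averages consistent")
```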

### Question type

| Question | Count | % |
|----------|-------|---------|
| què | 1698 | 27.03 % |
| qui | 1161 | 18.48 % |
| com | 574 | 9.14 % |
| quan | 468 | 7.45 % |
| on | 559 | 8.9 % |
| quant | 601 | 9.57 % |
| quin | 1301 | 20.87 % |
| no question mark | 0 | 0.0 % |
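A table like this can be produced with a simple keyword heuristic. The `question_type` helper below is a hypothetical sketch (the exact procedure behind the table is not documented here); longer interrogatives are tried first so that, e.g., "quin" is not matched as "qui":

```python
import re

# Hypothetical heuristic: classify a Catalan question by its interrogative
# word. Longer forms first, so "quin" is not swallowed by "qui" nor
# "quant" by "quan".
QUESTION_WORDS = ["quant", "quan", "quin", "què", "qui", "com", "on"]

def question_type(question: str) -> str:
    q = question.lower()
    for w in QUESTION_WORDS:
        # \w* also accepts inflected forms: quina, quants, quines, ...
        if re.search(rf"\b{w}\w*", q):
            return w
    return "no question word"

print(question_type("Què s'organitza a l'engròs per acomiadar el 2019?"))  # què
```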

### Question-answer relationships

From 100 randomly selected samples:

* Lexical variation: 32.0%
* World knowledge: 16.0%
* Syntactic variation: 22.0%
* Multiple sentence: 16.0%

## Dataset Creation

### Methodology

From the online edition of the Catalan newspaper Vilaweb (https://www.vilaweb.cat), 2095 articles were randomly selected. The headlines of these articles were also used to create a textual entailment dataset. For the extractive QA dataset, the creation of between 1 and 5 questions for each news context was commissioned, following an adaptation of the guidelines from SQuAD 1.0 (Rajpurkar, Pranav et al. "SQuAD: 100,000+ Questions for Machine Comprehension of Text." EMNLP (2016), http://arxiv.org/abs/1606.05250). In total, 6282 question-answer pairs were created, each answer being a fragment extracted from the context.

### Curation Rationale

For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible. We also created another QA dataset from Wikipedia to ensure thematic and stylistic variety.

### Source Data

- https://www.vilaweb.cat/

#### Initial Data Collection and Normalization

The source data are articles scraped from the archives of the Catalan newspaper website Vilaweb (https://www.vilaweb.cat).

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 (Rajpurkar, Pranav et al. "SQuAD: 100,000+ Questions for Machine Comprehension of Text." EMNLP (2016), http://arxiv.org/abs/1606.05250).

#### Who are the annotators?

Annotation was commissioned to a specialized company that hired a team of native speakers of the language.

### Dataset Curators

Carlos Rodríguez and Carme Armentano, from BSC-CNS

### Personal and Sensitive Information

No personal or sensitive information included.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Contact

Carlos Rodríguez-Penagos (carlos.rodriguez1@bsc.es) and Carme Armentano-Oller (carme.armentano@bsc.es)

## License

<a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/"><img alt="Attribution-ShareAlike 4.0 International License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
dev.json ADDED
The diff for this file is too large to render. See raw diff
 
test.json ADDED
The diff for this file is too large to render. See raw diff
 
train.json ADDED
The diff for this file is too large to render. See raw diff
 
vilaquad.py ADDED
@@ -0,0 +1,197 @@
# Loading script for the VilaQuAD dataset.

import json

import datasets

logger = datasets.logging.get_logger(__name__)

_CITATION = """
Rodriguez-Penagos, Carlos Gerardo, & Armentano-Oller, Carme. (2021).
VilaQuAD: an extractive QA dataset for Catalan, from Vilaweb newswire text
[Data set]. Zenodo. https://doi.org/10.5281/zenodo.4562337
"""

_DESCRIPTION = """
This dataset contains 2095 Catalan-language news contexts, each with 1 to 5 questions referring to the fragment (or context).
VilaQuAD articles are extracted from the daily Vilaweb (www.vilaweb.cat) and are used under a CC BY-NC-ND licence (https://creativecommons.org/licenses/by-nc-nd/3.0/deed.ca).
This dataset can be used to build extractive-QA systems and language models.
Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
MT4ALL and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).
"""

_HOMEPAGE = "https://doi.org/10.5281/zenodo.4562337"

_URL = "https://huggingface.co/datasets/BSC-TeMU/vilaquad/resolve/main/"

_TRAINING_FILE = "train.json"
_DEV_FILE = "dev.json"
_TEST_FILE = "test.json"


class VilaQuADConfig(datasets.BuilderConfig):
    """Builder config for the VilaQuAD dataset."""

    def __init__(self, **kwargs):
        """BuilderConfig for VilaQuAD.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(VilaQuADConfig, self).__init__(**kwargs)


class VilaQuAD(datasets.GeneratorBasedBuilder):
    """VilaQuAD Dataset."""

    BUILDER_CONFIGS = [
        VilaQuADConfig(
            name="VilaQuAD",
            version=datasets.Version("1.0.1"),
            description="VilaQuAD dataset",
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "title": datasets.Value("string"),
                    "context": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "answers": [
                        {
                            "text": datasets.Value("string"),
                            "answer_start": datasets.Value("int32"),
                        }
                    ],
                }
            ),
            # No default supervised_keys (as we have to pass both question
            # and context as input).
            supervised_keys=None,
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": f"{_URL}{_TRAINING_FILE}",
            "dev": f"{_URL}{_DEV_FILE}",
            "test": f"{_URL}{_TEST_FILE}",
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
        ]

    def _generate_examples(self, filepath):
        """This function returns the examples in the raw (text) form."""
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            vilaquad = json.load(f)
            for article in vilaquad["data"]:
                title = article.get("title", "").strip()
                for paragraph in article["paragraphs"]:
                    context = paragraph["context"].strip()
                    for qa in paragraph["qas"]:
                        question = qa["question"].strip()
                        id_ = qa["id"]
                        # Features currently used are "context", "question", and "answers".
                        # Others are extracted here for the ease of future expansions.
                        text = qa["answers"][0]["text"]
                        answer_start = qa["answers"][0]["answer_start"]
                        yield id_, {
                            "title": title,
                            "context": context,
                            "question": question,
                            "id": id_,
                            "answers": [{"text": text, "answer_start": answer_start}],
                        }
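
The flattening performed by `_generate_examples` can be exercised locally without downloading anything. The sketch below mirrors its logic on a minimal hand-written file; the sample data is invented for illustration:

```python
import json
import tempfile

# Minimal SQuAD-v1-style file, mirroring the structure the script expects.
sample = {
    "data": [
        {
            "title": "Exemple",
            "paragraphs": [
                {
                    "context": "El Festivern enguany celebra quinze anys.",
                    "qas": [
                        {
                            "id": "P_0_C_0_Q1",
                            "question": "Quants anys celebra el Festivern?",
                            "answers": [{"text": "quinze", "answer_start": 29}],
                        }
                    ],
                }
            ],
        }
    ]
}

def generate_examples(filepath):
    """Flatten a SQuAD-style file into (id, example) pairs, as the script does."""
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for article in data["data"]:
        title = article.get("title", "").strip()
        for paragraph in article["paragraphs"]:
            context = paragraph["context"].strip()
            for qa in paragraph["qas"]:
                answer = qa["answers"][0]
                yield qa["id"], {
                    "title": title,
                    "context": context,
                    "question": qa["question"].strip(),
                    "id": qa["id"],
                    "answers": [{"text": answer["text"], "answer_start": answer["answer_start"]}],
                }

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f, ensure_ascii=False)
    path = f.name

examples = dict(generate_examples(path))
print(examples["P_0_C_0_Q1"]["answers"][0]["text"])  # quinze
```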