eraldoluis committed on
Commit
d24046d
1 Parent(s): 3a9984b

README and load script

Files changed (2)
  1. README.md +174 -0
  2. faquad.py +155 -0
README.md ADDED
@@ -0,0 +1,174 @@
+ ---
+ pretty_name: FaQuAD
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ language:
+ - pt
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n<1K
+ source_datasets:
+ - extended|wikipedia
+ task_categories:
+ - question-answering
+ task_ids:
+ - extractive-qa
+ # paperswithcode_id: faquad
+ train-eval-index:
+ - config: plain_text
+   task: question-answering
+   task_id: extractive_question_answering
+   splits:
+     train_split: train
+     eval_split: validation
+   col_mapping:
+     question: question
+     context: context
+     answers:
+       text: text
+       answer_start: answer_start
+   metrics:
+   - type: squad
+     name: SQuAD
+ ---
+ 
+ # Dataset Card for FaQuAD
+ 
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** https://github.com/liafacom/faquad
+ - **Repository:** https://github.com/liafacom/faquad
+ - **Paper:** https://ieeexplore.ieee.org/document/8923668/
+ <!-- - **Leaderboard:** -->
+ - **Point of Contact:** Eraldo R. Fernandes <eraldoluis@gmail.com>
+ 
+ ### Dataset Summary
+ 
+ Academic secretaries and faculty members of higher education institutions face a common problem:
+ the abundance of questions sent by academics
+ whose answers are found in available institutional documents.
+ The official documents produced by Brazilian public universities are vast and dispersed,
+ which discourages students from searching such sources for answers.
+ To lessen this problem, we present FaQuAD:
+ a novel machine reading comprehension dataset
+ in the domain of Brazilian higher education institutions.
+ FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016].
+ It comprises 900 questions about 249 reading passages (paragraphs),
+ which were taken from 18 official documents of a computer science college
+ at a Brazilian federal university
+ and from 21 Wikipedia articles related to the Brazilian higher education system.
+ As far as we know, this is the first Portuguese reading comprehension dataset in this format.
+ 
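+ The dataset can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is published on the Hugging Face Hub under the `eraldoluis/faquad` id (adjust the id if it differs):
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Downloads train.json and dev.json and builds the train/validation splits.
+ dataset = load_dataset("eraldoluis/faquad")
+ print(dataset["train"][0]["question"])
+ ```
+ 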
+ ### Supported Tasks and Leaderboards
+ 
+ FaQuAD supports extractive question answering: given a question and a context paragraph, models must extract the answer span from the context. The `train-eval-index` metadata above pairs the dataset with the SQuAD metric (exact match and F1). No public leaderboard is known.
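+ 
+ A minimal sketch of computing the SQuAD metric with the `evaluate` library (the ids and answers below are invented for illustration):
+ 
+ ```python
+ import evaluate
+ 
+ squad_metric = evaluate.load("squad")  # exact match and F1
+ predictions = [{"id": "q1", "prediction_text": "quatro anos"}]
+ references = [{"id": "q1", "answers": {"text": ["quatro anos"], "answer_start": [42]}}]
+ print(squad_metric.compute(predictions=predictions, references=references))
+ ```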
+ 
+ ### Languages
+ 
+ The questions and passages are in Brazilian Portuguese (`pt`).
+ 
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ Each instance follows the SQuAD format: a question paired with a context paragraph and one or more annotated answer spans.
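+ 
+ A hypothetical instance illustrating the schema defined by the loading script (all field values below are invented):
+ 
+ ```python
+ {
+     "id": "q1",
+     "title": "Ensino superior no Brasil",
+     "context": "O curso tem duração de quatro anos ...",
+     "question": "Qual é a duração do curso?",
+     "answers": {
+         "text": ["quatro anos"],
+         "answer_start": [23],
+     },
+ }
+ ```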
+ 
+ ### Data Fields
+ 
+ As declared in the loading script:
+ 
+ - `id` (string): unique identifier of the question.
+ - `title` (string): title of the source document or article.
+ - `context` (string): the reading passage (paragraph).
+ - `question` (string): the question posed about the context.
+ - `answers`: a sequence of answer annotations, each with a `text` (string) and an `answer_start` (int32, character offset of the answer in `context`).
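+ 
+ A short sketch of how these fields relate, assuming the dataset was loaded as shown above:
+ 
+ ```python
+ sample = dataset["train"][0]
+ # `answers` stores parallel lists: one entry per annotated span.
+ for text, start in zip(sample["answers"]["text"], sample["answers"]["answer_start"]):
+     assert sample["context"][start:start + len(text)] == text
+ ```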
+ 
+ ### Data Splits
+ 
+ The loading script builds a `train` split from `train.json` and a `validation` split from `dev.json`. Together they contain 900 questions over 249 paragraphs; per-split counts are not stated in the sources above.
+ 
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ [More Information Needed]
+ 
+ ### Source Data
+ 
+ #### Initial Data Collection and Normalization
+ 
+ [More Information Needed]
+ 
+ #### Who are the source language producers?
+ 
+ [More Information Needed]
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ [More Information Needed]
+ 
+ #### Who are the annotators?
+ 
+ [More Information Needed]
+ 
+ ### Personal and Sensitive Information
+ 
+ [More Information Needed]
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [More Information Needed]
+ 
+ ### Discussion of Biases
+ 
+ [More Information Needed]
+ 
+ ### Other Known Limitations
+ 
+ [More Information Needed]
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ The dataset was created by Hélio Fonseca Sayama, Anderson Viçoso Araujo, and Eraldo Rezende Fernandes (see the citation below).
+ 
+ ### Licensing Information
+ 
+ The dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/), as declared in the metadata above.
+ 
+ ### Citation Information
+ 
+ The following BibTeX entry is reproduced (lightly reformatted) from the loading script:
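+ 
+ ```bibtex
+ @inproceedings{8923668,
+   author={Sayama, Hélio Fonseca and Araujo, Anderson Viçoso and Fernandes, Eraldo Rezende},
+   booktitle={2019 8th Brazilian Conference on Intelligent Systems (BRACIS)},
+   title={FaQuAD: Reading Comprehension Dataset in the Domain of Brazilian Higher Education},
+   year={2019},
+   pages={443--448},
+   doi={10.1109/BRACIS.2019.00084}
+ }
+ ```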
+ 
+ ### Contributions
+ 
+ Thanks to [@eraldoluis](https://github.com/eraldoluis) for adding this dataset.
faquad.py ADDED
@@ -0,0 +1,155 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ #
+ # Adapted from the SQuAD script.
+ #
+ 
+ # Lint as: python3
+ """FaQuAD: Reading Comprehension Dataset in the Domain of Brazilian Higher Education."""
+ 
+ 
+ import json
+ 
+ import datasets
+ from datasets.tasks import QuestionAnsweringExtractive
+ 
+ 
+ logger = datasets.logging.get_logger(__name__)
+ 
+ 
+ _CITATION = """\
+ @INPROCEEDINGS{
+     8923668,
+     author={Sayama, Hélio Fonseca and Araujo, Anderson Viçoso and Fernandes, Eraldo Rezende},
+     booktitle={2019 8th Brazilian Conference on Intelligent Systems (BRACIS)},
+     title={FaQuAD: Reading Comprehension Dataset in the Domain of Brazilian Higher Education},
+     year={2019},
+     volume={},
+     number={},
+     pages={443-448},
+     doi={10.1109/BRACIS.2019.00084}
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ Academic secretaries and faculty members of higher education institutions face a common problem:
+ the abundance of questions sent by academics
+ whose answers are found in available institutional documents.
+ The official documents produced by Brazilian public universities are vast and dispersed,
+ which discourages students from searching such sources for answers.
+ To lessen this problem, we present FaQuAD:
+ a novel machine reading comprehension dataset
+ in the domain of Brazilian higher education institutions.
+ FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016].
+ It comprises 900 questions about 249 reading passages (paragraphs),
+ which were taken from 18 official documents of a computer science college
+ at a Brazilian federal university
+ and from 21 Wikipedia articles related to the Brazilian higher education system.
+ As far as we know, this is the first Portuguese reading comprehension dataset in this format.
+ """
+ 
+ _URL = "https://raw.githubusercontent.com/liafacom/faquad/master/data/"
+ _URLS = {
+     "train": _URL + "train.json",
+     "dev": _URL + "dev.json",
+ }
+ 
+ 
+ class FaquadConfig(datasets.BuilderConfig):
+     """BuilderConfig for FaQuAD."""
+ 
+     def __init__(self, **kwargs):
+         """BuilderConfig for FaQuAD.
+ 
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(**kwargs)
+ 
+ 
+ class Faquad(datasets.GeneratorBasedBuilder):
+     """FaQuAD: Reading Comprehension Dataset in the Domain of Brazilian Higher Education. Version 1.0."""
+ 
+     BUILDER_CONFIGS = [
+         FaquadConfig(
+             name="plain_text",
+             version=datasets.Version("1.0.0", ""),
+             description="Plain text",
+         ),
+     ]
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "context": datasets.Value("string"),
+                     "question": datasets.Value("string"),
+                     "answers": datasets.features.Sequence(
+                         {
+                             "text": datasets.Value("string"),
+                             "answer_start": datasets.Value("int32"),
+                         }
+                     ),
+                 }
+             ),
+             # No default supervised_keys (as we have to pass both question
+             # and context as input).
+             supervised_keys=None,
+             homepage="https://github.com/liafacom/faquad",
+             citation=_CITATION,
+             task_templates=[
+                 QuestionAnsweringExtractive(
+                     question_column="question", context_column="context", answers_column="answers"
+                 )
+             ],
+         )
+ 
+     def _split_generators(self, dl_manager):
+         downloaded_files = dl_manager.download_and_extract(_URLS)
+ 
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+         ]
+ 
+     def _generate_examples(self, filepath):
+         """This function returns the examples in the raw (text) form."""
+         logger.info("generating examples from = %s", filepath)
+         key = 0
+         with open(filepath, encoding="utf-8") as f:
+             faquad = json.load(f)
+             for article in faquad["data"]:
+                 title = article.get("title", "")
+                 for paragraph in article["paragraphs"]:
+                     context = paragraph["context"]  # do not strip leading blank spaces GH-2585
+                     for qa in paragraph["qas"]:
+                         answer_starts = [answer["answer_start"] for answer in qa["answers"]]
+                         answers = [answer["text"] for answer in qa["answers"]]
+                         # Features currently used are "context", "question", and "answers".
+                         # Others are extracted here for the ease of future expansions.
+                         yield key, {
+                             "title": title,
+                             "context": context,
+                             "question": qa["question"],
+                             "id": qa["id"],
+                             "answers": {
+                                 "answer_start": answer_starts,
+                                 "text": answers,
+                             },
+                         }
+                         key += 1