Modalities: Text · Formats: parquet · Sub-tasks: extractive-qa · Languages: English
Commit 150a664, committed by system (HF staff)
Parent: ba79068

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2):
  1. README.md (+21 -21)
  2. squad.py (+4 -2)
README.md CHANGED

````diff
--- a/README.md
+++ b/README.md
@@ -46,7 +46,7 @@ task_ids:
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
 
-## [Dataset Description](#dataset-description)
+## Dataset Description
 
 - **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -56,23 +56,23 @@ task_ids:
 - **Size of the generated dataset:** 85.75 MB
 - **Total amount of disk used:** 119.27 MB
 
-### [Dataset Summary](#dataset-summary)
+### Dataset Summary
 
 Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
 
-### [Supported Tasks](#supported-tasks)
+### Supported Tasks
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Languages](#languages)
+### Languages
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Dataset Structure](#dataset-structure)
+## Dataset Structure
 
 We show detailed information for up to 5 configurations of the dataset.
 
-### [Data Instances](#data-instances)
+### Data Instances
 
 #### plain_text
 
@@ -94,7 +94,7 @@ An example of 'train' looks as follows.
 }
 ```
 
-### [Data Fields](#data-fields)
+### Data Fields
 
 The data fields are the same among all splits.
 
@@ -107,55 +107,55 @@ The data fields are the same among all splits.
 - `text`: a `string` feature.
 - `answer_start`: a `int32` feature.
 
-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size
 
 | name |train|validation|
 |----------|----:|---------:|
 |plain_text|87599| 10570|
 
-## [Dataset Creation](#dataset-creation)
+## Dataset Creation
 
-### [Curation Rationale](#curation-rationale)
+### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Source Data](#source-data)
+### Source Data
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Annotations](#annotations)
+### Annotations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data
 
-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Additional Information](#additional-information)
+## Additional Information
 
-### [Dataset Curators](#dataset-curators)
+### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Licensing Information](#licensing-information)
+### Licensing Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Citation Information](#citation-information)
+### Citation Information
 
 ```
 @article{2016arXiv160605250R,
````
squad.py CHANGED

```diff
--- a/squad.py
+++ b/squad.py
@@ -19,11 +19,13 @@
 from __future__ import absolute_import, division, print_function
 
 import json
-import logging
 
 import datasets
 
 
+logger = datasets.logging.get_logger(__name__)
+
+
 _CITATION = """\
 @article{2016arXiv160605250R,
        author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
@@ -109,7 +111,7 @@ class Squad(datasets.GeneratorBasedBuilder):
 
     def _generate_examples(self, filepath):
         """This function returns the examples in the raw (text) form."""
-        logging.info("generating examples from = %s", filepath)
+        logger.info("generating examples from = %s", filepath)
         with open(filepath, encoding="utf-8") as f:
             squad = json.load(f)
             for article in squad["data"]:
```
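The squad.py change swaps the stdlib root `logging` module for a logger obtained from `datasets.logging.get_logger(__name__)`, so log output is attributed to the library's namespace rather than the root logger. The underlying mechanism is the standard named-logger pattern; here is a minimal stdlib-only sketch (the name `"squad"` and the `generate_examples` stub are illustrative stand-ins, not the library's actual API):

```python
import logging

# Namespaced logger, mirroring what datasets.logging.get_logger(__name__)
# returns inside the library: a stdlib logging.Logger whose name identifies
# the module, so applications can tune verbosity per-namespace.
logger = logging.getLogger("squad")  # "squad" stands in for __name__

def generate_examples(filepath):
    # Messages go through the named logger instead of the root logger,
    # so library output stays quiet unless the caller opts in.
    logger.info("generating examples from = %s", filepath)
    return filepath

# A caller enables INFO output for this namespace only:
logging.getLogger("squad").setLevel(logging.INFO)
generate_examples("train-v1.1.json")
```

Because `logging.getLogger` returns the same object for the same name, the application and the library share one logger per namespace without importing each other's modules.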