system HF staff committed on
Commit
4d2abcd
1 Parent(s): 5336ad3

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2)
  1. README.md +21 -21
  2. squadshifts.py +4 -2
README.md CHANGED
@@ -27,7 +27,7 @@
   - [Citation Information](#citation-information)
   - [Contributions](#contributions)
 
- ## [Dataset Description](#dataset-description)
+ ## Dataset Description
 
  - **Homepage:** [https://modestyachts.github.io/squadshifts-website/index.html](https://modestyachts.github.io/squadshifts-website/index.html)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,24 +37,24 @@
  - **Size of the generated dataset:** 35.82 MB
  - **Total amount of disk used:** 98.78 MB
 
- ### [Dataset Summary](#dataset-summary)
+ ### Dataset Summary
 
  SquadShifts consists of four new test sets for the Stanford Question Answering Dataset (SQuAD) from four different domains: Wikipedia articles, New York \
  Times articles, Reddit comments, and Amazon product reviews. Each dataset was generated using the same data generating pipeline, Amazon Mechanical Turk interface, and data cleaning code as the original SQuAD v1.1 dataset. The "new-wikipedia" dataset measures overfitting on the original SQuAD v1.1 dataset. The "new-york-times", "reddit", and "amazon" datasets measure robustness to natural distribution shifts. We encourage SQuAD model developers to also evaluate their methods on these new datasets!
 
- ### [Supported Tasks](#supported-tasks)
+ ### Supported Tasks
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Languages](#languages)
+ ### Languages
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ## [Dataset Structure](#dataset-structure)
+ ## Dataset Structure
 
  We show detailed information for up to 5 configurations of the dataset.
 
- ### [Data Instances](#data-instances)
+ ### Data Instances
 
  #### amazon
 
@@ -136,7 +136,7 @@ An example of 'test' looks as follows.
  }
  ```
 
- ### [Data Fields](#data-fields)
+ ### Data Fields
 
  The data fields are the same among all splits.
 
@@ -176,7 +176,7 @@ The data fields are the same among all splits.
  - `text`: a `string` feature.
  - `answer_start`: a `int32` feature.
 
- ### [Data Splits Sample Size](#data-splits-sample-size)
+ ### Data Splits Sample Size
 
  | name |test |
  |--------|----:|
@@ -185,49 +185,49 @@ The data fields are the same among all splits.
  |nyt |10065|
  |reddit | 9803|
 
- ## [Dataset Creation](#dataset-creation)
+ ## Dataset Creation
 
- ### [Curation Rationale](#curation-rationale)
+ ### Curation Rationale
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Source Data](#source-data)
+ ### Source Data
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Annotations](#annotations)
+ ### Annotations
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Personal and Sensitive Information](#personal-and-sensitive-information)
+ ### Personal and Sensitive Information
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ## [Considerations for Using the Data](#considerations-for-using-the-data)
+ ## Considerations for Using the Data
 
- ### [Social Impact of Dataset](#social-impact-of-dataset)
+ ### Social Impact of Dataset
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Discussion of Biases](#discussion-of-biases)
+ ### Discussion of Biases
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Other Known Limitations](#other-known-limitations)
+ ### Other Known Limitations
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ## [Additional Information](#additional-information)
+ ## Additional Information
 
- ### [Dataset Curators](#dataset-curators)
+ ### Dataset Curators
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Licensing Information](#licensing-information)
+ ### Licensing Information
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Citation Information](#citation-information)
+ ### Citation Information
 
  ```
  @inproceedings{miller2020effect,
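For orientation, here is a minimal sketch of loading one of these test sets with the `datasets` library. The `amazon`, `nyt`, and `reddit` config names and the single `test` split come from the card above; the field access assumes the standard SQuAD v1.1 schema the card describes:

```python
from datasets import load_dataset

# Load the Amazon product-review test set; "nyt" and "reddit"
# (see the split table in the card) load the same way.
# SquadShifts ships only a "test" split.
amazon = load_dataset("squadshifts", "amazon", split="test")

# Each example follows the SQuAD v1.1 layout described under
# Data Fields: answers carry `text` and `answer_start` features.
example = amazon[0]
print(example["question"])
print(example["answers"]["text"], example["answers"]["answer_start"])
```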
squadshifts.py CHANGED
@@ -19,11 +19,13 @@
  from __future__ import absolute_import, division, print_function
 
  import json
- import logging
 
  import datasets
 
 
+ logger = datasets.logging.get_logger(__name__)
+
+
  _CITATION = """\
  @inproceedings{miller2020effect,
  author = {J. Miller and K. Krauth and B. Recht and L. Schmidt},
@@ -143,7 +145,7 @@ class SquadShifts(datasets.GeneratorBasedBuilder):
      def _generate_examples(self, filepath):
          """This function returns the examples in the raw (text) form."""
-         logging.info("generating examples from = %s", filepath)
+         logger.info("generating examples from = %s", filepath)
          with open(filepath, encoding="utf-8") as f:
              squad = json.load(f)
          for article in squad["data"]:
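The substantive change here swaps the stdlib `logging` module for a logger obtained from `datasets.logging`, so the script's messages follow the verbosity configured through the library. A minimal sketch of how a user would surface the `generating examples from = ...` message after this change:

```python
import datasets

# Raise the datasets log level to INFO so loggers created via
# datasets.logging.get_logger(__name__) emit their messages,
# including the one logged in _generate_examples.
datasets.logging.set_verbosity_info()

# Loading a config now logs the JSON file its examples are
# generated from.
ds = datasets.load_dataset("squadshifts", "reddit", split="test")
```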