system HF staff committed on
Commit
2b98f1c
1 Parent(s): 31ae874

Update files from the datasets library (from 1.7.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (1)
  1. README.md +17 -4
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+paperswithcode_id: squad-shifts
 ---
 
 # Dataset Card for "squadshifts"
@@ -6,12 +7,12 @@
 ## Table of Contents
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks](#supported-tasks)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
   - [Languages](#languages)
 - [Dataset Structure](#dataset-structure)
   - [Data Instances](#data-instances)
   - [Data Fields](#data-fields)
-  - [Data Splits Sample Size](#data-splits-sample-size)
+  - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
   - [Curation Rationale](#curation-rationale)
   - [Source Data](#source-data)
@@ -42,7 +43,7 @@
 SquadShifts consists of four new test sets for the Stanford Question Answering Dataset (SQuAD) from four different domains: Wikipedia articles, New York \
 Times articles, Reddit comments, and Amazon product reviews. Each dataset was generated using the same data generating pipeline, Amazon Mechanical Turk interface, and data cleaning code as the original SQuAD v1.1 dataset. The "new-wikipedia" dataset measures overfitting on the original SQuAD v1.1 dataset. The "new-york-times", "reddit", and "amazon" datasets measure robustness to natural distribution shifts. We encourage SQuAD model developers to also evaluate their methods on these new datasets!
 
-### Supported Tasks
+### Supported Tasks and Leaderboards
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
@@ -176,7 +177,7 @@ The data fields are the same among all splits.
 - `text`: a `string` feature.
 - `answer_start`: a `int32` feature.
 
-### Data Splits Sample Size
+### Data Splits
 
 | name |test |
 |--------|----:|
@@ -193,10 +194,22 @@ The data fields are the same among all splits.
 
 ### Source Data
 
+#### Initial Data Collection and Normalization
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the source language producers?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Annotations
 
+#### Annotation process
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the annotators?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Personal and Sensitive Information
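
For readers of this card, here is a minimal sketch (not part of the commit above) of how the four SquadShifts test sets described in the summary can be loaded with the `datasets` library. The config names `new_wiki`, `nyt`, `reddit`, and `amazon` are assumptions inferred from the summary's domain list, not confirmed by this diff; verify them against the dataset's builder configs before relying on them.

```python
# Sketch only: config names below are assumptions inferred from the
# dataset card, not confirmed by this commit.
from datasets import load_dataset

for config in ["new_wiki", "nyt", "reddit", "amazon"]:
    test_set = load_dataset("squadshifts", config, split="test")
    example = test_set[0]
    # Per the card's "Data Fields" section, `answers` holds parallel
    # `text` (string) and `answer_start` (int32) list features.
    print(config, len(test_set), example["answers"]["text"])
```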