system (HF staff) committed on
Commit
4e0463d
1 Parent(s): 59ee9ab

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (1)
  1. README.md +21 -21
README.md CHANGED
@@ -27,7 +27,7 @@
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
 
-## [Dataset Description](#dataset-description)
+## Dataset Description
 
 - **Homepage:** [https://github.com/deepmind/pg19](https://github.com/deepmind/pg19)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,7 +37,7 @@
 - **Size of the generated dataset:** 10978.29 MB
 - **Total amount of disk used:** 22174.89 MB
 
-### [Dataset Summary](#dataset-summary)
+### Dataset Summary
 
 This repository contains the PG-19 language modeling benchmark.
 It includes a set of books extracted from the Project Gutenberg books library, that were published before 1919.
@@ -50,19 +50,19 @@ Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. mappin
 To compare models we propose to continue measuring the word-level perplexity, by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens --- specified below in the dataset statistics table.
 One could use this dataset for benchmarking long-range language models, or use it to pre-train for other natural language processing tasks which require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for applications to a production-system dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
 
-### [Supported Tasks](#supported-tasks)
+### Supported Tasks
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Languages](#languages)
+### Languages
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Dataset Structure](#dataset-structure)
+## Dataset Structure
 
 We show detailed information for up to 5 configurations of the dataset.
 
-### [Data Instances](#data-instances)
+### Data Instances
 
 #### default
 
@@ -82,7 +82,7 @@ This example was too long and was cropped:
 }
 ```
 
-### [Data Fields](#data-fields)
+### Data Fields
 
 The data fields are the same among all splits.
 
@@ -92,55 +92,55 @@ The data fields are the same among all splits.
 - `url`: a `string` feature.
 - `text`: a `string` feature.
 
-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size
 
 | name |train|validation|test|
 |-------|----:|---------:|---:|
 |default|28602| 50| 100|
 
-## [Dataset Creation](#dataset-creation)
+## Dataset Creation
 
-### [Curation Rationale](#curation-rationale)
+### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Source Data](#source-data)
+### Source Data
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Annotations](#annotations)
+### Annotations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data
 
-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Additional Information](#additional-information)
+## Additional Information
 
-### [Dataset Curators](#dataset-curators)
+### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Licensing Information](#licensing-information)
+### Licensing Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Citation Information](#citation-information)
+### Citation Information
 
 ```
 @article{raecompressive2019,
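As an aside, the word-level perplexity metric proposed in the card's summary (total log-likelihood under any subword or character scheme, normalized by the number of word tokens) can be sketched in a few lines of Python. This is a minimal illustration with made-up numbers, not code from the dataset or its repository:

```python
import math

def word_level_perplexity(total_log_likelihood: float, num_word_tokens: int) -> float:
    # The card proposes normalizing the dataset's total log-likelihood
    # (computed under any chosen subword or character-level scheme) by the
    # number of *word* tokens, so that models with different vocabularies
    # remain directly comparable.
    return math.exp(-total_log_likelihood / num_word_tokens)

# Hypothetical numbers: a model assigns a total log-likelihood of -3500.0
# nats to a text containing 1000 word tokens.
print(word_level_perplexity(-3500.0, 1000))  # exp(3.5), roughly 33.1
```

Because the denominator is fixed by the text's word count rather than by the model's own token count, a model with a larger subword vocabulary gains no artificial advantage from emitting fewer tokens.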