lhoestq (HF staff) committed on
Commit 2a94963
1 Parent(s): 43e1cfa

Update README.md (#2)


- Update README.md (33db4b056fac2ec164e9307ae0655be533cb2aea)

Files changed (1)
  1. README.md +148 -1
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-pretty_name: Lm1b
+pretty_name: One Billion Word Language Model Benchmark
 paperswithcode_id: billion-word-benchmark
 dataset_info:
   features:
@@ -15,8 +15,155 @@ dataset_info:
     num_examples: 306688
   download_size: 1792209805
   dataset_size: 4281148561
+task_categories:
+- text-generation
+- fill-mask
+task_ids:
+- language-modeling
+- masked-language-modeling
 ---
 
+# Dataset Card for One Billion Word Language Model Benchmark
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+  - [Contributions](#contributions)
+
+## Dataset Description
+
+- **Homepage:** [statmt](http://www.statmt.org/lm-benchmark/)
+- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Paper:** [arxiv](https://arxiv.org/pdf/1312.3005v3.pdf)
+- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Size of downloaded dataset files:** 1.79 GB
+- **Size of the generated dataset:** 4.28 GB
+- **Total amount of disk used:** 6.07 GB
+
+### Dataset Summary
+
+A benchmark corpus for measuring progress in statistical language modeling; the training data alone contains almost one billion words.
+
+### Supported Tasks and Leaderboards
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Languages
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+## Dataset Structure
+
+### Data Instances
+
+#### plain_text
+
+- **Size of downloaded dataset files:** 1.79 GB
+- **Size of the generated dataset:** 4.28 GB
+- **Total amount of disk used:** 6.07 GB
+
+An example of 'train' looks as follows.
+```
+This example was too long and was cropped:
+
+{
+    "text": "While athletes in different professions dealt with doping scandals and other controversies , Woods continued to do what he did best : dominate the field of professional golf and rake in endorsements ."
+}
+```
+
+### Data Fields
+
+The data fields are the same among all splits.
+
+#### plain_text
+- `text`: a `string` feature.
+
+### Data Splits
+
+| name       | train    | test   |
+|------------|----------|--------|
+| plain_text | 30301028 | 306688 |
+
+## Dataset Creation
+
+### Curation Rationale
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Source Data
+
+#### Initial Data Collection and Normalization
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the source language producers?
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Annotations
+
+The dataset doesn't contain annotations.
+
+### Personal and Sensitive Information
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Discussion of Biases
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Other Known Limitations
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+## Additional Information
+
+### Dataset Curators
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Licensing Information
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Citation Information
+
+```bibtex
+@misc{chelba2014billion,
+      title={One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling},
+      author={Ciprian Chelba and Tomas Mikolov and Mike Schuster and Qi Ge and Thorsten Brants and Phillipp Koehn and Tony Robinson},
+      year={2014},
+      eprint={1312.3005},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
+
 ### Contributions
 
 Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
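
As a quick way to verify the Data Fields and Data Splits sections added in this commit, here is a minimal sketch of loading the dataset with the 🤗 `datasets` library. It is not part of the commit itself; it assumes the dataset id `lm1b` resolves to this repository and that the loading script supports streaming (dropping `streaming=True` triggers the full 1.79 GB download instead). Since `plain_text` appears to be the only config, it should not need to be passed explicitly.

```python
# Minimal sketch (assumption: dataset id "lm1b", streaming supported).
from datasets import load_dataset

# Stream the training split to avoid the full 1.79 GB download.
train = load_dataset("lm1b", split="train", streaming=True)

# Each example is a dict with a single `text` string field,
# matching the Data Fields section above.
for i, example in enumerate(train):
    print(example["text"])
    if i == 2:  # peek at the first three sentences only
        break
```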