Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: expert-generated
Annotations Creators: expert-generated
Source Datasets: original
Convert dataset sizes from base 2 to base 10 in the dataset card
#3 by albertvillanova - opened

Files changed (1): README.md (+6 -6)
README.md
@@ -75,9 +75,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [Compressive Transformers for Long-Range Sequence Modelling](https://arxiv.org/abs/1911.05507)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 11196.60 MB
-- **Size of the generated dataset:** 10978.29 MB
-- **Total amount of disk used:** 22174.89 MB
+- **Size of downloaded dataset files:** 11.74 GB
+- **Size of the generated dataset:** 11.51 GB
+- **Total amount of disk used:** 23.25 GB
 
 ### Dataset Summary
 
@@ -106,9 +106,9 @@ One could use this dataset for benchmarking long-range language models, or use i
 
 #### default
 
-- **Size of downloaded dataset files:** 11196.60 MB
-- **Size of the generated dataset:** 10978.29 MB
-- **Total amount of disk used:** 22174.89 MB
+- **Size of downloaded dataset files:** 11.74 GB
+- **Size of the generated dataset:** 11.51 GB
+- **Total amount of disk used:** 23.25 GB
 
 An example of 'train' looks as follows.
 ```
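The arithmetic behind the PR: the old card reported sizes computed in MiB (2^20 bytes) but labeled them "MB"; the new card reports decimal GB (10^9 bytes). A minimal sketch of that conversion (the `mib_to_gb` helper is hypothetical, not part of the PR or the `datasets` library):

```python
def mib_to_gb(size_mib: float) -> float:
    """Convert a size given in MiB (base 2) to GB (base 10), rounded to 2 decimals."""
    size_bytes = size_mib * 2**20      # 1 MiB = 1,048,576 bytes
    return round(size_bytes / 10**9, 2)  # 1 GB = 1,000,000,000 bytes


# Reproducing the three converted figures from the diff:
print(mib_to_gb(11196.60))  # downloaded dataset files -> 11.74
print(mib_to_gb(10978.29))  # generated dataset        -> 11.51
print(mib_to_gb(22174.89))  # total disk used          -> 23.25
```

Applying it to the three old values yields exactly the 11.74 GB, 11.51 GB, and 23.25 GB figures in the `+` lines above.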