Languages: English
Multilinguality: monolingual
Size Categories: 10M<n<100M
Language Creators: found
Annotations Creators: no-annotation
Source Datasets: original
Convert dataset sizes from base 2 to base 10 in the dataset card

#2 opened by albertvillanova

Files changed (1):
  1. README.md +6 -6
README.md CHANGED

@@ -66,9 +66,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 1124.87 MB
-- **Size of the generated dataset:** 4629.00 MB
-- **Total amount of disk used:** 5753.87 MB
+- **Size of downloaded dataset files:** 1.18 GB
+- **Size of the generated dataset:** 4.85 GB
+- **Total amount of disk used:** 6.03 GB
 
 ### Dataset Summary
 
@@ -88,9 +88,9 @@ Books are a rich source of both fine-grained information, how a character, an ob
 
 #### plain_text
 
-- **Size of downloaded dataset files:** 1124.87 MB
-- **Size of the generated dataset:** 4629.00 MB
-- **Total amount of disk used:** 5753.87 MB
+- **Size of downloaded dataset files:** 1.18 GB
+- **Size of the generated dataset:** 4.85 GB
+- **Total amount of disk used:** 6.03 GB
 
 An example of 'train' looks as follows.
 ```
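The conversion applied in this PR can be sketched as follows: a figure reported in base-2 megabytes (MiB, 1024² bytes) is multiplied by 1024² to get bytes, then divided by 10⁹ to express it in decimal gigabytes (GB). This is a minimal sketch (the helper name `mib_to_gb` is ours, not from the PR), checked against the three figures updated above:

```python
def mib_to_gb(mib: float) -> float:
    """Convert a base-2 megabyte (MiB) figure to decimal gigabytes (GB)."""
    return mib * 1024**2 / 10**9

# The three sizes updated in this diff:
for mib in (1124.87, 4629.00, 5753.87):
    print(f"{mib} MiB -> {mib_to_gb(mib):.2f} GB")
# 1124.87 MiB -> 1.18 GB
# 4629.0 MiB -> 4.85 GB
# 5753.87 MiB -> 6.03 GB
```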