albertvillanova (HF staff) committed
Commit: d8b0b42 (1 parent: 122c335)

Convert dataset sizes from base 2 to base 10 in the dataset card (#3)


- Convert dataset sizes from base 2 to base 10 in the dataset card (ae6cff2302b13b5ade537085f309de0893d620bf)

Files changed (1):
  1. README.md (+9 -9)

README.md CHANGED
@@ -119,9 +119,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 110.48 MB
-- **Size of the generated dataset:** 146.26 MB
-- **Total amount of disk used:** 256.74 MB
+- **Size of downloaded dataset files:** 115.85 MB
+- **Size of the generated dataset:** 153.36 MB
+- **Total amount of disk used:** 269.21 MB
 
 ### Dataset Summary
 
@@ -141,9 +141,9 @@ The Natural Language Inference in Turkish (NLI-TR) is a set of two large scale d
 
 #### multinli_tr
 
-- **Size of downloaded dataset files:** 72.02 MB
-- **Size of the generated dataset:** 75.79 MB
-- **Total amount of disk used:** 147.81 MB
+- **Size of downloaded dataset files:** 75.52 MB
+- **Size of the generated dataset:** 79.47 MB
+- **Total amount of disk used:** 154.99 MB
 
 An example of 'validation_matched' looks as follows.
 ```
@@ -159,9 +159,9 @@ This example was too long and was cropped:
 
 #### snli_tr
 
-- **Size of downloaded dataset files:** 38.46 MB
-- **Size of the generated dataset:** 70.47 MB
-- **Total amount of disk used:** 108.93 MB
+- **Size of downloaded dataset files:** 40.33 MB
+- **Size of the generated dataset:** 73.89 MB
+- **Total amount of disk used:** 114.22 MB
 
 An example of 'train' looks as follows.
 ```
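
The updated figures are consistent with relabelling the old base-2 sizes (MiB, previously printed as MB) as true base-10 megabytes, i.e. multiplying by 1,048,576 / 1,000,000 ≈ 1.0486 (for example, 110.48 × 1.048576 ≈ 115.85). A minimal Python sketch of that conversion, assuming this is how the new values in the diff were derived:

```python
def mib_to_mb(size_mib: float) -> float:
    """Convert a base-2 size in MiB to base-10 megabytes (MB)."""
    return size_mib * 1_048_576 / 1_000_000

# Old card values (base 2, labelled MB) -> new card values (base 10 MB)
old_sizes = [110.48, 146.26, 256.74, 72.02, 75.79, 147.81, 38.46, 70.47, 108.93]
for old in old_sizes:
    print(f"{old:7.2f} MiB -> {mib_to_mb(old):7.2f} MB")
```

Rounding each result to two decimals reproduces the nine new sizes shown above (115.85, 153.36, 269.21, 75.52, 79.47, 154.99, 40.33, 73.89, 114.22).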