Datasets:
Formats: parquet
Sub-tasks: language-modeling
Languages: English
Size: 10M - 100M
Tags: text-search
License:
Commit 136dd44
Parent(s): 30905e2

Convert dataset sizes from base 2 to base 10 in the dataset card

Convert dataset sizes from base 2 (MiB) to base 10 (MB) in the dataset card, as is the case in the dataset viewer.
See: https://github.com/huggingface/datasets/issues/5708
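The change is pure unit arithmetic: a mebibyte is 2^20 bytes while a megabyte is 10^6 bytes, so the same byte count reads slightly larger under decimal prefixes. A minimal Python sketch of that conversion (illustrative only, not the script used for this commit; the input value below is arbitrary, since the old card values are not shown in this diff):

```python
# Illustrative base-2 -> base-10 size conversion; not the tooling behind this commit.

def mib_to_mb(mib: float) -> float:
    """Convert MiB (2**20 bytes) to MB (10**6 bytes)."""
    return mib * 2**20 / 10**6

def gib_to_gb(gib: float) -> float:
    """Convert GiB (2**30 bytes) to GB (10**9 bytes)."""
    return gib * 2**30 / 10**9

# Arbitrary example input, chosen only to show the scale of the new figures:
print(f"{gib_to_gb(12.05):.2f} GB")  # -> 12.94 GB
```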
README.md CHANGED

@@ -139,8 +139,8 @@ English:
 #### wiki40b_en_100_0

 - **Size of downloaded dataset files:** 0.00 MB
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of the generated dataset:** 12.94 GB
+- **Total amount of disk used:** 12.94 GB

 An example of 'train' looks as follows:
 ```
@@ -159,8 +159,8 @@ An example of 'train' looks as follows:
 #### wikipedia_en_100_0

 - **Size of downloaded dataset files:** 0.00 MB
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of the generated dataset:** 26.41 GB
+- **Total amount of disk used:** 26.41 GB

 An example of 'train' looks as follows:
 ```
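The updated figures follow the decimal (base-10) convention that the dataset viewer reports. As a sketch of how such figures could be regenerated (not the tooling behind this commit), the byte counts exposed by the `datasets` library can be formatted with base-10 prefixes; the repo id below is a placeholder, since this page does not show the dataset's full name:

```python
# Sketch: format the byte counts that `datasets` exposes with base-10 prefixes,
# matching the convention used in the updated card and the dataset viewer.
# The repo id is a placeholder assumption; substitute the actual dataset.
from datasets import load_dataset_builder


def fmt_base10(num_bytes: int) -> str:
    """Format a byte count with base-10 prefixes (MB = 10**6, GB = 10**9)."""
    for unit, factor in (("GB", 10**9), ("MB", 10**6), ("KB", 10**3)):
        if num_bytes >= factor:
            return f"{num_bytes / factor:.2f} {unit}"
    return f"{num_bytes} bytes"


for config in ("wiki40b_en_100_0", "wikipedia_en_100_0"):
    info = load_dataset_builder("<dataset-repo-id>", config).info  # placeholder repo id
    download = info.download_size or 0
    generated = info.dataset_size or 0
    print(config)
    print("  Size of downloaded dataset files:", fmt_base10(download))
    print("  Size of the generated dataset:", fmt_base10(generated))
    print("  Total amount of disk used:", fmt_base10(download + generated))
```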