- Multilinguality: translation
- Size Categories: 1M<n<10M
- Language Creators: expert-generated
- Annotations Creators: crowdsourced

Commit 3a54beb (parent: e9f767a), committed by albertvillanova

Convert dataset sizes from base 2 to base 10 in the dataset card (#1)

- Convert dataset sizes from base 2 to base 10 in the dataset card (bf2384be2994351a3300c7f09473078148a567e7)
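For reference, the conversion behind this change: the old card computed sizes in base 2 (1 MiB = 1024² bytes) but labeled them "MB", while the new values restate the same byte counts in base-10 units (1 MB = 10⁶ bytes, 1 GB = 10⁹ bytes). Below is a minimal Python sketch of the arithmetic; the helper names are illustrative, not part of the `datasets` codebase:

```python
def mib_to_bytes(mib: float) -> float:
    # The old card's "MB" figures were really mebibytes (base 2).
    return mib * 1024**2

def format_base10(num_bytes: float) -> str:
    # Restate a byte count in base-10 units, as the updated card does.
    if num_bytes >= 1e9:
        return f"{num_bytes / 1e9:.2f} GB"
    return f"{num_bytes / 1e6:.2f} MB"

# Spot-check against values changed in this commit:
assert format_base10(mib_to_bytes(1749.12)) == "1.83 GB"   # downloaded files
assert format_base10(mib_to_bytes(268.61)) == "281.66 MB"  # generated dataset
assert format_base10(mib_to_bytes(124.94)) == "131.01 MB"  # per-config download
```

This is why every per-config download size moves from 124.94 MB to 131.01 MB: the underlying byte counts are unchanged, only the unit convention differs.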

Files changed (1):
  1. README.md +18 -18
README.md:

````diff
@@ -343,9 +343,9 @@ dataset_info:
 - **Repository:** https://github.com/neulab/word-embeddings-for-nmt
 - **Paper:** [When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?](https://aclanthology.org/N18-2084/)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 1749.12 MB
-- **Size of the generated dataset:** 268.61 MB
-- **Total amount of disk used:** 2017.73 MB
+- **Size of downloaded dataset files:** 1.83 GB
+- **Size of the generated dataset:** 281.66 MB
+- **Total amount of disk used:** 2.12 GB
 
 ### Dataset Summary
 
@@ -366,9 +366,9 @@ where one is high resource and the other is low resource.
 
 #### az_to_en
 
-- **Size of downloaded dataset files:** 124.94 MB
-- **Size of the generated dataset:** 1.46 MB
-- **Total amount of disk used:** 126.40 MB
+- **Size of downloaded dataset files:** 131.01 MB
+- **Size of the generated dataset:** 1.53 MB
+- **Total amount of disk used:** 132.54 MB
 
 An example of 'train' looks as follows.
 ```
@@ -382,9 +382,9 @@ An example of 'train' looks as follows.
 
 #### aztr_to_en
 
-- **Size of downloaded dataset files:** 124.94 MB
-- **Size of the generated dataset:** 38.28 MB
-- **Total amount of disk used:** 163.22 MB
+- **Size of downloaded dataset files:** 131.01 MB
+- **Size of the generated dataset:** 40.14 MB
+- **Total amount of disk used:** 171.15 MB
 
 An example of 'train' looks as follows.
 ```
@@ -398,9 +398,9 @@ An example of 'train' looks as follows.
 
 #### be_to_en
 
-- **Size of downloaded dataset files:** 124.94 MB
-- **Size of the generated dataset:** 1.36 MB
-- **Total amount of disk used:** 126.29 MB
+- **Size of downloaded dataset files:** 131.01 MB
+- **Size of the generated dataset:** 1.43 MB
+- **Total amount of disk used:** 132.42 MB
 
 An example of 'train' looks as follows.
 ```
@@ -414,9 +414,9 @@ An example of 'train' looks as follows.
 
 #### beru_to_en
 
-- **Size of downloaded dataset files:** 124.94 MB
-- **Size of the generated dataset:** 57.41 MB
-- **Total amount of disk used:** 182.35 MB
+- **Size of downloaded dataset files:** 131.01 MB
+- **Size of the generated dataset:** 60.20 MB
+- **Total amount of disk used:** 191.21 MB
 
 An example of 'validation' looks as follows.
 ```
@@ -429,9 +429,9 @@ This example was too long and was cropped:
 
 #### es_to_pt
 
-- **Size of downloaded dataset files:** 124.94 MB
-- **Size of the generated dataset:** 8.71 MB
-- **Total amount of disk used:** 133.65 MB
+- **Size of downloaded dataset files:** 131.01 MB
+- **Size of the generated dataset:** 9.13 MB
+- **Total amount of disk used:** 140.14 MB
 
 An example of 'validation' looks as follows.
 ```
````