Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: found
Annotations Creators: machine-generated
Source Datasets: original
Tags: split-and-rephrase
albertvillanova (HF staff) committed

Commit: 2cdf4fe
Parent(s): 8d78341

Convert dataset sizes from base 2 to base 10 in the dataset card (#1)

- Convert dataset sizes from base 2 to base 10 in the dataset card (9f17d5e1a3c19d570fc7cb9e037f849e56ca43b3)

Files changed (1): README.md (+6 −6)
README.md CHANGED
@@ -74,9 +74,9 @@ dataset_info:
 - **Repository:** https://github.com/google-research-datasets/wiki-split
 - **Paper:** [Learning To Split and Rephrase From Wikipedia Edit History](https://arxiv.org/abs/1808.09468)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 95.63 MB
-- **Size of the generated dataset:** 370.41 MB
-- **Total amount of disk used:** 466.04 MB
+- **Size of downloaded dataset files:** 100.28 MB
+- **Size of the generated dataset:** 388.40 MB
+- **Total amount of disk used:** 488.68 MB
 
 ### Dataset Summary
 
@@ -98,9 +98,9 @@ the dataset contains some inherent noise, it can serve as valuable training data
 
 #### default
 
-- **Size of downloaded dataset files:** 95.63 MB
-- **Size of the generated dataset:** 370.41 MB
-- **Total amount of disk used:** 466.04 MB
+- **Size of downloaded dataset files:** 100.28 MB
+- **Size of the generated dataset:** 388.40 MB
+- **Total amount of disk used:** 488.68 MB
 
 An example of 'train' looks as follows.
 ```
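The new figures follow from reinterpreting the old base-2 values (MiB, 1024² bytes) as base-10 megabytes (10⁶ bytes), i.e. multiplying by 1024²/10⁶ ≈ 1.048576. A minimal sketch of that conversion (the helper name `mib_to_mb` is illustrative, not part of the commit):

```python
def mib_to_mb(mib: float) -> float:
    """Convert a size in MiB (base 2) to MB (base 10), rounded to 2 decimals."""
    return round(mib * 1024**2 / 10**6, 2)

# Reproduce the three updated values from the diff.
for old in (95.63, 370.41, 466.04):
    print(f"{old} MiB -> {mib_to_mb(old)} MB")
```

Applied to the three sizes in the card, 95.63 → 100.28, 370.41 → 388.40, and 466.04 → 488.68, matching the `+` lines of the diff.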