Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: crowdsourced
Source Datasets: original
albertvillanova committed 4cdcefa (parent: 025bac8)

Convert dataset sizes from base 2 to base 10 in the dataset card (#2)

- Convert dataset sizes from base 2 to base 10 in the dataset card (aa1cacd41c23a0e4ad54320a7130798bbd693bd8)

Files changed (1): README.md (+6 -6)
@@ -81,9 +81,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [WikiQA: A Challenge Dataset for Open-Domain Question Answering](https://aclanthology.org/D15-1237/)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 6.77 MB
-- **Size of the generated dataset:** 6.10 MB
-- **Total amount of disk used:** 12.87 MB
+- **Size of downloaded dataset files:** 7.10 MB
+- **Size of the generated dataset:** 6.40 MB
+- **Total amount of disk used:** 13.50 MB
 
 ### Dataset Summary
 
@@ -105,9 +105,9 @@ The WikiQA corpus is a publicly available set of question and sentence pairs, co
 
 #### default
 
-- **Size of downloaded dataset files:** 6.77 MB
-- **Size of the generated dataset:** 6.10 MB
-- **Total amount of disk used:** 12.87 MB
+- **Size of downloaded dataset files:** 7.10 MB
+- **Size of the generated dataset:** 6.40 MB
+- **Total amount of disk used:** 13.50 MB
 
 An example of 'train' looks as follows.
 ```
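
The updated figures follow from re-expressing the old base-2 sizes (mebibytes, 1 MiB = 2^20 bytes) in base-10 megabytes (1 MB = 10^6 bytes). A minimal Python sketch of the conversion; the exact byte counts are not in the diff, so rounding to two decimals is an assumption, though it reproduces the committed values:

```python
# Convert the old base-2 sizes (MiB) to base-10 megabytes (MB).
MIB = 2**20  # 1,048,576 bytes per mebibyte
MB = 10**6   # 1,000,000 bytes per megabyte

old_sizes_mib = [
    ("Size of downloaded dataset files", 6.77),
    ("Size of the generated dataset", 6.10),
    ("Total amount of disk used", 12.87),
]

for label, mib in old_sizes_mib:
    mb = mib * MIB / MB
    print(f"{label}: {mib} MiB -> {mb:.2f} MB")

# Output:
# Size of downloaded dataset files: 6.77 MiB -> 7.10 MB
# Size of the generated dataset: 6.1 MiB -> 6.40 MB
# Total amount of disk used: 12.87 MiB -> 13.50 MB
```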