Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: original
albertvillanova (HF staff) committed commit 130ae67 (1 parent: 720ee2f)

Convert dataset sizes from base 2 to base 10 in the dataset card

Convert dataset sizes from base 2 (MiB) to base 10 (MB) in the dataset card, as is the case in the dataset viewer.
See: https://github.com/huggingface/datasets/issues/5708
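The conversion behind this commit multiplies each base-2 figure by 1024²/1000² ≈ 1.048576, since a MiB is 1,048,576 bytes while a MB is 1,000,000 bytes. A minimal sketch (the helper name is illustrative, not part of the `datasets` library), using the three sizes from this card:

```python
def mib_to_mb(size_mib: float) -> float:
    """Convert a size in MiB (base 2) to MB (base 10).

    Illustrative helper: 1 MiB = 1024**2 bytes, 1 MB = 1000**2 bytes.
    """
    return round(size_mib * 1024**2 / 1000**2, 2)

# Values from this dataset card (old MiB figures -> new MB figures):
for mib in (26.72, 23.97, 50.69):
    print(f"{mib} MiB -> {mib_to_mb(mib)} MB")
# 26.72 MiB -> 28.02 MB
# 23.97 MiB -> 25.13 MB
# 50.69 MiB -> 53.15 MB
```

These results match the replacement values in the diff below.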

Files changed (1): README.md (+6 −6)

README.md CHANGED
@@ -85,9 +85,9 @@ dataset_info:
 - **Repository:** https://github.com/facebookresearch/EmpatheticDialogues
 - **Paper:** [Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset](https://arxiv.org/abs/1811.00207)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 26.72 MB
-- **Size of the generated dataset:** 23.97 MB
-- **Total amount of disk used:** 50.69 MB
+- **Size of downloaded dataset files:** 28.02 MB
+- **Size of the generated dataset:** 25.13 MB
+- **Total amount of disk used:** 53.15 MB
 
 ### Dataset Summary
 
@@ -107,9 +107,9 @@ PyTorch original implementation of Towards Empathetic Open-domain Conversation M
 
 #### default
 
-- **Size of downloaded dataset files:** 26.72 MB
-- **Size of the generated dataset:** 23.97 MB
-- **Total amount of disk used:** 50.69 MB
+- **Size of downloaded dataset files:** 28.02 MB
+- **Size of the generated dataset:** 25.13 MB
+- **Total amount of disk used:** 53.15 MB
 
 An example of 'train' looks as follows.
 ```