Convert dataset sizes from base 2 to base 10 in the dataset card
#3
by albertvillanova
README.md
CHANGED
````diff
@@ -756,9 +756,9 @@ dataset_info:
 - **Paper:** [TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension](https://arxiv.org/abs/1705.03551)
 - **Leaderboard:** [CodaLab Leaderboard](https://competitions.codalab.org/competitions/17208#results)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 9.26 GB
+- **Size of the generated dataset:** 45.46 GB
+- **Total amount of disk used:** 54.72 GB
 
 ### Dataset Summary
 
@@ -782,9 +782,9 @@ English.
 
 #### rc
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 2.67 GB
+- **Size of the generated dataset:** 16.02 GB
+- **Total amount of disk used:** 18.68 GB
 
 An example of 'train' looks as follows.
 ```
@@ -793,9 +793,9 @@ An example of 'train' looks as follows.
 
 #### rc.nocontext
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 2.67 GB
+- **Size of the generated dataset:** 126.27 MB
+- **Total amount of disk used:** 2.79 GB
 
 An example of 'train' looks as follows.
 ```
@@ -804,9 +804,9 @@ An example of 'train' looks as follows.
 
 #### unfiltered
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 3.30 GB
+- **Size of the generated dataset:** 29.24 GB
+- **Total amount of disk used:** 32.54 GB
 
 An example of 'validation' looks as follows.
 ```
@@ -815,9 +815,9 @@ An example of 'validation' looks as follows.
 
 #### unfiltered.nocontext
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 632.55 MB
+- **Size of the generated dataset:** 74.56 MB
+- **Total amount of disk used:** 707.11 MB
 
 An example of 'train' looks as follows.
 ```
````