Datasets:
Tasks: Question Answering
Modalities: Text
Formats: parquet
Sub-tasks: extractive-qa
Languages: Arabic
Size: 1K - 10K
License:
Convert dataset sizes from base 2 to base 10 in the dataset card #1
opened by albertvillanova (HF staff)
README.md CHANGED
@@ -81,9 +81,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 1.
-- **Size of the generated dataset:** 1.
-- **Total amount of disk used:** 3.
+- **Size of downloaded dataset files:** 1.94 MB
+- **Size of the generated dataset:** 1.70 MB
+- **Total amount of disk used:** 3.64 MB

 ### Dataset Summary

@@ -103,9 +103,9 @@ dataset_info:

 #### plain_text

-- **Size of downloaded dataset files:** 1.
-- **Size of the generated dataset:** 1.
-- **Total amount of disk used:** 3.
+- **Size of downloaded dataset files:** 1.94 MB
+- **Size of the generated dataset:** 1.70 MB
+- **Total amount of disk used:** 3.64 MB

 An example of 'train' looks as follows.
 ```
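The diff relabels the size fields from binary to decimal units: the `datasets` library stores `download_size` and `dataset_size` as byte counts, and dividing by 10**6 (MB, base 10) rather than 2**20 (MiB, base 2) yields the new figures. A minimal sketch of how such figures could be recomputed, assuming a hypothetical repository id (the excerpt above does not name the repo) and that the builder's info is populated from the card's `dataset_info` metadata:

```python
from datasets import load_dataset_builder

# Hypothetical repository id for illustration only.
builder = load_dataset_builder("org/arabic-extractive-qa")
info = builder.info  # DatasetInfo with byte counts (may be None if metadata is missing)

MB = 10**6    # base-10 megabyte, the unit the PR switches the card to
MiB = 2**20   # base-2 mebibyte, the unit implied by the old figures

downloaded = info.download_size
generated = info.dataset_size

print(f"Size of downloaded dataset files: {downloaded / MB:.2f} MB")
print(f"Size of the generated dataset: {generated / MB:.2f} MB")
print(f"Total amount of disk used: {(downloaded + generated) / MB:.2f} MB")

# Dividing the same byte counts by MiB instead of MB gives the smaller base-2
# numbers; e.g. 1.94 MB is roughly 1.85 MiB.
```

The only content change in the PR is this unit convention; the byte counts themselves are untouched, and 1.94 MB + 1.70 MB accounts for the 3.64 MB total.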