Datasets:
Tasks: Text Classification
Sub-tasks: semantic-similarity-classification
Languages: English
Size: 100K<n<1M
License:
Convert dataset sizes from base 2 to base 10 in the dataset card
#1 opened by albertvillanova (HF staff)
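For reference, the two conventions differ only in the unit: a base-10 megabyte (MB) is 10^6 bytes, while a base-2 mebibyte (MiB, often also printed as "MB") is 2^20 = 1,048,576 bytes, so the same byte count yields a larger-looking number in base 10. A minimal Python sketch of the arithmetic; the byte count used here is illustrative, back-computed from the 58.17 MB figure in the diff below, not an official number:

```python
# Illustrative only: byte count back-computed from the 58.17 MB (base-10)
# figure shown in the diff below; the real on-disk size may differ slightly.
download_bytes = 58_170_000

mb_base10 = download_bytes / 10**6   # megabytes, 1 MB = 1,000,000 bytes
mib_base2 = download_bytes / 2**20   # mebibytes, 1 MiB = 1,048,576 bytes

print(f"{mb_base10:.2f} MB (base 10)")   # -> 58.17 MB (base 10)
print(f"{mib_base2:.2f} MiB (base 2)")   # -> 55.48 MiB (base 2)
```

The PR simply rewrites the card so that the printed values use the base-10 convention.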
README.md CHANGED

@@ -69,9 +69,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 58.17 MB
+- **Size of the generated dataset:** 58.15 MB
+- **Total amount of disk used:** 116.33 MB
 
 ### Dataset Summary
 
@@ -91,9 +91,9 @@ The Quora dataset is composed of question pairs, and the task is to determine if
 
 #### default
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 58.17 MB
+- **Size of the generated dataset:** 58.15 MB
+- **Total amount of disk used:** 116.33 MB
 
 An example of 'train' looks as follows.
 ```
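The figures in the card can also be regenerated from the dataset metadata rather than entered by hand. A minimal sketch using the `datasets` library, assuming the dataset is published on the Hub under the id `quora` and that the `DatasetInfo` fields `download_size` and `dataset_size` (byte counts, which may be None for datasets whose metadata was never computed) are populated:

```python
from datasets import load_dataset_builder

# Load only the builder and its metadata; no data files are downloaded here.
builder = load_dataset_builder("quora")
info = builder.info


def fmt_mb(num_bytes):
    """Format a byte count as base-10 megabytes, the convention used in the card."""
    return f"{num_bytes / 10**6:.2f} MB"


print("Size of downloaded dataset files:", fmt_mb(info.download_size))
print("Size of the generated dataset:", fmt_mb(info.dataset_size))
print("Total amount of disk used:", fmt_mb(info.download_size + info.dataset_size))
```

With values matching this card, the output would be on the order of 58.17 MB, 58.15 MB, and roughly 116.3 MB; small differences from the printed 116.33 MB total come from rounding each figure before summing.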