Tasks: Text Classification
Sub-tasks: multi-class-classification
Modalities: Text
Formats: parquet
Languages: English
Size: 100K - 1M
License:
Commit 8d5fb2e · 1 Parent(s): 09e718e
Convert dataset sizes from base 2 to base 10 in the dataset card (#2)
README.md CHANGED
@@ -89,9 +89,9 @@ dataset_info:
 - **Repository:** https://github.com/google-research-datasets/coarse-discourse
 - **Paper:** [Characterizing Online Discussion Using Coarse Discourse Sequences](https://research.google/pubs/pub46055/)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 4.
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 4.63 MB
+- **Size of the generated dataset:** 45.45 MB
+- **Total amount of disk used:** 50.08 MB
 
 ### Dataset Summary
 
@@ -113,9 +113,9 @@ We collect and release a corpus of over 9,000 threads comprising over 100,000 co
 
 #### default
 
-- **Size of downloaded dataset files:** 4.
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 4.63 MB
+- **Size of the generated dataset:** 45.45 MB
+- **Total amount of disk used:** 50.08 MB
 
 An example of 'train' looks as follows.
 ```
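The change itself is pure arithmetic: the old card divided byte counts by 2^20 (the binary mebibyte) while still labeling the result "MB", whereas the new card divides by 10^6, the SI megabyte. A minimal Python sketch of the two conventions; the helper names and the example byte count are illustrative assumptions, not taken from the commit:

```python
# Two ways to render a byte count as "megabytes":
# base 2 (the old card) vs. base 10 (the card after this commit).

def fmt_base2(num_bytes: int) -> str:
    # Old convention: divide by 2**20 (a mebibyte), but label it "MB".
    return f"{num_bytes / 2**20:.2f} MB"

def fmt_base10(num_bytes: int) -> str:
    # New convention: divide by 10**6, the SI megabyte.
    return f"{num_bytes / 10**6:.2f} MB"

# Hypothetical download size of ~4.63 million bytes, consistent with the new card:
size = 4_630_000
print(fmt_base2(size))   # -> 4.42 MB  (old, base-2 rendering)
print(fmt_base10(size))  # -> 4.63 MB  (new, base-10 rendering)
```

Since 2^20 / 10^6 ≈ 1.049, every size in the diff grows by roughly 5% when re-expressed in base 10.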