Tasks: Text Classification
Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: original
rungalileo committed 1cb4e80 (parent: 3455eee): Update README.md
### Dataset Summary

This dataset is a version of the [**20 Newsgroups**](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#the-20-newsgroups-text-dataset) dataset, fixed with the help of the [**Galileo ML Data Intelligence Platform**](https://www.rungalileo.io/). In a matter of minutes, Galileo enabled us to uncover and fix a multitude of errors within the original dataset. To fix a large proportion of these errors, we propose adding a 21st label category for garbage, "unlabelable" samples. We present this improved dataset as a new standard for natural language experimentation and benchmarking with the Newsgroups data.

## Dataset Creation

### Curation Rationale

This dataset was created to showcase the power of Galileo as a Data Intelligence Platform. Through Galileo, we identified critical error patterns within the original Newsgroups training dataset: garbage samples that do not properly fit any newsgroup label category. Moreover, we observed that these errors permeate the test dataset as well.

As a result of our analysis, we propose the addition of a new class, "None", to properly categorize and fix the labeling of garbage samples. Galileo further enabled us to quickly apply these label changes within the training set (changing garbage labels to "None") and helped guide human re-annotation of the test set.
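The relabeling step described above can be sketched in a few lines. This is a minimal illustration, not Galileo's actual API: the `is_garbage` flag is an assumed placeholder for whatever signal marks a sample as garbage.

```python
# Hypothetical samples; "is_garbage" is an assumed flag for illustration only.
posts = [
    {"id": 0, "text": "Looking for advice on my car...", "label": "rec.autos", "is_garbage": False},
    {"id": 1, "text": "ASDF1234!!@@", "label": "rec.autos", "is_garbage": True},
]

def relabel_garbage(samples):
    # Reassign any garbage-flagged sample to the new "None" class,
    # leaving properly labeled samples untouched.
    for s in samples:
        if s.get("is_garbage"):
            s["label"] = "None"
    return samples

fixed = relabel_garbage(posts)
```

In the real workflow, identifying which samples are garbage is the hard part; Galileo surfaces those samples, and the relabeling itself is this simple.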

#### Total Dataset Errors Fixed: 1163 *(6.5% of the dataset)*

|Errors / Split       |Overall|Train|Test|
|---------------------|------:|----:|---:|
|Garbage samples fixed|    718|  396| 322|
|Empty samples fixed  |    445|  254| 254|
|Total samples fixed  |   1163|  650| 650|

To learn more about the process of fixing this dataset, please refer to our [**Blog**](https://www.rungalileo.io/blog).

## Dataset Structure

### Data Instances

For each data sample, there is the text of the newsgroup post, the corresponding newsgroup forum where the message was posted (one of 21 labels, including the newly added "None" class), and a data sample id.
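As a purely illustrative sketch of that record shape (the field names and values here are assumed, not copied from the dataset files):

```python
# Hypothetical record: the post text, one of 21 labels
# (the 20 newsgroups plus "None"), and a sample id.
sample = {
    "id": 7,
    "text": "From: ...\nSubject: Saturn V specs\n...",
    "label": "sci.space",
}
```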

An example from the dataset looks as follows:

The data is split into training and test sets. To reduce bias and test generalizability across time, samples are assigned to train or test depending on whether the message was posted before or after a specific date, respectively.
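That date-based split can be sketched as follows. The cutoff date below is an assumed placeholder for illustration; the dataset's actual cutoff is not stated here.

```python
from datetime import datetime

# Assumed placeholder cutoff; the real split date is not given above.
CUTOFF = datetime(1993, 4, 1)

posts = [
    {"text": "pre-cutoff post", "posted": datetime(1993, 3, 15)},
    {"text": "post-cutoff post", "posted": datetime(1993, 5, 2)},
]

# Posts before the cutoff go to train; posts on or after it go to test.
train = [p for p in posts if p["posted"] < CUTOFF]
test = [p for p in posts if p["posted"] >= CUTOFF]
```

Splitting on post date rather than at random means the test set genuinely comes from a later time period than the training set.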