Datasets:
Tasks: Text Classification
Sub-tasks: multi-label-classification
Languages: English
Size: 100K<n<1M
License:
Sasha Luccioni committed · c297970
Parent(s): 266cd56
Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment (#4336)

* Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, PiQA, Poem Sentiment, QAsper
* Update README.md
fixing header
* Update datasets/piqa/README.md
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* Update README.md
changing MSRA NER metric to `seqeval`
* Update README.md
removing ROUGE args
* Update README.md
removing duplicate information
* Update README.md
removing eval for now
* Update README.md
removing eval for now
Co-authored-by: sashavor <sasha.luccioni@huggingface.co>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
Commit from https://github.com/huggingface/datasets/commit/095d12ff7414df118f60e00cd6494299a881743a
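The eval metadata added by this commit includes a `col_mapping` entry (`comment_text: text`, `toxic: target`) that tells an evaluation consumer how to rename the dataset's columns to generic names. A minimal sketch of what that mapping does to a single example row (hypothetical consumer code, not Hugging Face's own evaluation harness):

```python
# Sketch of what the col_mapping entry in the train-eval-index metadata
# describes: renaming dataset columns to the generic names ("text",
# "target") an evaluator expects. Hypothetical helper, for illustration.
col_mapping = {"comment_text": "text", "toxic": "target"}

def apply_col_mapping(example: dict) -> dict:
    """Rename mapped columns; leave all other keys untouched."""
    return {col_mapping.get(k, k): v for k, v in example.items()}

row = {"comment_text": "thanks for the help!", "toxic": 0}
print(apply_col_mapping(row))
```

Columns not listed in the mapping pass through unchanged, so extra fields in a dataset row are preserved.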
README.md
CHANGED
@@ -19,6 +19,55 @@ task_ids:
 - multi-label-classification
 paperswithcode_id: null
 pretty_name: JigsawToxicityPred
+train-eval-index:
+- config: default
+  task: text-classification
+  task_id: binary_classification
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    comment_text: text
+    toxic: target
+  metrics:
+  - type: accuracy
+    name: Accuracy
+  - type: f1
+    name: F1 macro
+    args:
+      average: macro
+  - type: f1
+    name: F1 micro
+    args:
+      average: micro
+  - type: f1
+    name: F1 weighted
+    args:
+      average: weighted
+  - type: precision
+    name: Precision macro
+    args:
+      average: macro
+  - type: precision
+    name: Precision micro
+    args:
+      average: micro
+  - type: precision
+    name: Precision weighted
+    args:
+      average: weighted
+  - type: recall
+    name: Recall macro
+    args:
+      average: macro
+  - type: recall
+    name: Recall micro
+    args:
+      average: micro
+  - type: recall
+    name: Recall weighted
+    args:
+      average: weighted
 ---
 
 # Dataset Card for [Dataset Name]
@@ -57,7 +106,7 @@ pretty_name: JigsawToxicityPred
 
 ### Dataset Summary
 
-Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments. This dataset consists of a large number of Wikipedia comments which have been labeled by human raters for toxic behavior.
+Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments. This dataset consists of a large number of Wikipedia comments which have been labeled by human raters for toxic behavior.
 
 ### Supported Tasks and Leaderboards
 
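The metadata requests F1, precision, and recall under three averaging schemes (`macro`, `micro`, `weighted`). As a refresher on what those `average` args mean, here is a small pure-Python sketch for F1 on a toy binary task (illustrative only; the actual evaluation backend presumably computes these with a library such as scikit-learn):

```python
# Illustrative sketch of the three averaging schemes named in the
# train-eval-index metrics block, applied to F1. Toy data, not the
# Jigsaw dataset or Hugging Face's evaluation code.
from collections import Counter

def f1_per_class(y_true, y_pred, cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == cls and t != cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def f1(y_true, y_pred, average):
    classes = sorted(set(y_true))
    scores = [f1_per_class(y_true, y_pred, c) for c in classes]
    if average == "macro":      # unweighted mean over classes
        return sum(scores) / len(scores)
    if average == "weighted":   # mean weighted by each class's support
        support = Counter(y_true)
        n = len(y_true)
        return sum(s * support[c] / n for s, c in zip(scores, classes))
    if average == "micro":      # pooled global counts; for single-label
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p)
        return tp / len(y_true)  # tasks this equals plain accuracy
    raise ValueError(f"unknown average: {average}")

y_true = [0, 0, 0, 1, 1]  # toy "toxic" labels
y_pred = [0, 0, 0, 0, 1]
for avg in ("macro", "micro", "weighted"):
    print(avg, f1(y_true, y_pred, avg))
```

On this toy example the three averages differ, which is why the card lists each metric three times rather than picking a single scheme.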