Update README.md

metrics:
- accuracy
model-index:
- name: gpt2-toxic-comment-classifier
  results:
  - task:
      type: text-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7212
    - name: Accuracy
      type: accuracy
      value: 0.9256
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gpt2-toxic-comment-classifier

This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the [Toxic Comment Classifier](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0519
- F1: 0.7212

## Model description

This is a `gpt2-medium` model fine-tuned for multi-label text classification on the Jigsaw Toxic Comment Classification Challenge dataset. It identifies and categorizes the six Jigsaw toxicity labels in online comments: toxic, severe_toxic, obscene, threat, insult, and identity_hate.
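The card does not include the training script, so the following is only a minimal sketch of how a multi-label GPT-2 classification head is typically configured with Transformers; the label order, `problem_type`, and pad-token handling are assumptions inferred from the description above, not the author's verified setup.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The six Jigsaw labels, in an assumed order
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
model = AutoModelForSequenceClassification.from_pretrained(
    "gpt2-medium",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # per-label sigmoid + BCE loss
    id2label={i: label for i, label in enumerate(LABELS)},
    label2id={label: i for i, label in enumerate(LABELS)},
)

# GPT-2 ships without a padding token; reusing EOS lets padded batches work
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
```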

## Intended uses & limitations

This model is intended for content moderation and for analyzing online conversations to detect and flag potentially harmful language. It can serve as one tool for maintaining a healthier online environment.

**How to Use:**
You can use this model directly with a `text-classification` pipeline:

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub
classifier = pipeline("text-classification", model="raavip/gpt2-toxic-comment-classifier")

# The pipeline returns the single highest-scoring label by default
results = classifier("This is a horrible comment that is very insulting.")
print(results)
```
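Because this is a multi-label model, the single top label can mask other flags. The pipeline's standard `top_k=None` argument returns a score for every label instead; a short usage sketch:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="raavip/gpt2-toxic-comment-classifier")

# top_k=None yields one {label, score} entry per label rather than only the argmax
for entry in classifier("This is a horrible comment that is very insulting.", top_k=None):
    print(f"{entry['label']}: {entry['score']:.3f}")
```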

## Training and evaluation data
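For reference, the challenge's `train.csv` (downloadable from the Kaggle link above) pairs each `comment_text` with one binary column per label; a quick inspection sketch, assuming pandas and a local copy of the file:

```python
import pandas as pd

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# train.csv comes from the Kaggle challenge page and must be downloaded separately
df = pd.read_csv("train.csv")
print(df[["comment_text"] + LABELS].head())
print(df[LABELS].mean())  # per-label frequencies; the classes are heavily imbalanced
```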