Adding average benchmark scores
README.md

```diff
@@ -196,13 +196,13 @@ with torch.no_grad():
 The Tiny-Toxic-Detector achieves an impressive 90.26% on the Toxigen benchmark and 87.34% on the Jigsaw-Toxic-Comment-Classification-Challenge. Here we compare our results against other toxic classification models:
 
 
-| Model                             | Size (parameters) | Toxigen (%) | Jigsaw (%) |
-| --------------------------------- | ----------------- | ----------- | ---------- |
-| lmsys/toxicchat-t5-large-v1.0     | 738M              | 72.67       | 88.82      |
-| s-nlp/roberta toxicity classifier | 124M              | 88.41       | 94.92      |
-| mohsenfayyaz/toxicity-classifier  | 109M              | 81.50       | 83.31      |
-| martin-ha/toxic-comment-model     | 67M               | 68.02       | 91.56      |
-| **Tiny-toxic-detector**           | **2M**            | **90.97**   | 86.98      |
+| Model                             | Size (parameters) | Toxigen (%) | Jigsaw (%) | Average (%) |
+| --------------------------------- | ----------------- | ----------- | ---------- | ----------- |
+| lmsys/toxicchat-t5-large-v1.0     | 738M              | 72.67       | 88.82      | 80.745      |
+| s-nlp/roberta toxicity classifier | 124M              | *88.41*     | **94.92**  | **91.665**  |
+| mohsenfayyaz/toxicity-classifier  | 109M              | 81.50       | 83.31      | 82.405      |
+| martin-ha/toxic-comment-model     | *67M*             | 68.02       | *91.56*    | 79.790      |
+| **Tiny-toxic-detector**           | **2M**            | **90.97**   | 86.98      | *88.975*    |
 
 
 
```
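The new Average (%) column appears to be the plain unweighted mean of each model's Toxigen and Jigsaw scores. A minimal sketch to recompute it (the dict below simply restates the scores from the table above):

```python
# Recompute the "Average (%)" column as the unweighted mean of the
# Toxigen and Jigsaw scores from the benchmark table.
scores = {
    "lmsys/toxicchat-t5-large-v1.0": (72.67, 88.82),
    "s-nlp/roberta toxicity classifier": (88.41, 94.92),
    "mohsenfayyaz/toxicity-classifier": (81.50, 83.31),
    "martin-ha/toxic-comment-model": (68.02, 91.56),
    "Tiny-toxic-detector": (90.97, 86.98),
}

# model -> mean of the two benchmark scores
averages = {model: (toxigen + jigsaw) / 2
            for model, (toxigen, jigsaw) in scores.items()}

for model, avg in averages.items():
    print(f"{model}: {avg:.3f}")
```

Printed to three decimal places, these reproduce the Average column of the table, including the 91.665 that makes s-nlp/roberta the best on average despite Tiny-toxic-detector's higher Toxigen score.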