
RobCaamano/toxicity_weighted

This model is a fine-tuned version of DistilBERT Base Uncased. It achieves the following results after training:

  • Train Loss: 0.0240
  • Train Precision: 0.9522
  • Train Recall: 0.9190
  • Epoch: 11

Model description

Fine-tuned model based on DistilBERT Base Uncased that detects types of toxic text. The labels are: "toxic", "severe_toxic", "obscene", "threat", "insult", and "identity_hate". A minimal inference sketch is shown below.
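
The following is a minimal usage sketch, assuming the checkpoint is a standard Transformers multi-label sequence-classification head over the six labels above (sigmoid per label, thresholded at 0.5); adjust to your setup as needed.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "RobCaamano/toxicity_weighted"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example input text to classify."
inputs = tokenizer(text, return_tensors="tf", truncation=True)
logits = model(**inputs).logits

# Multi-label classification: apply a sigmoid per label and threshold at 0.5
# (assumption; the card does not state the threshold used in training).
probs = tf.math.sigmoid(logits)[0].numpy()
labels = [model.config.id2label[i] for i in range(len(probs))]
predicted = [label for label, p in zip(labels, probs) if p > 0.5]

print(dict(zip(labels, probs.round(3))))
print("Predicted:", predicted)
```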

Intended uses & limitations

Intended to classify detected toxic text into the toxicity categories listed above. The model was trained on a small dataset in which some categories are underrepresented, so performance on those labels may be limited.

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  • training_precision: float32
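
For reference, this optimizer configuration corresponds to the following Keras construction (a sketch only; the decay of 0.0 is the Keras default and is omitted):

```python
import tensorflow as tf

# Adam optimizer matching the hyperparameters listed above (TensorFlow 2.10).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```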

Training results

| Train Loss | Train Precision | Train Recall | Epoch |
|:----------:|:---------------:|:------------:|:-----:|
|   0.0440   |     0.9059      |    0.8294    |   7   |
|   0.0380   |     0.9223      |    0.8632    |   8   |
|   0.0314   |     0.9335      |    0.8838    |   9   |
|   0.0282   |     0.9437      |    0.9075    |  10   |
|   0.0240   |     0.9522      |    0.9190    |  11   |

Framework versions

  • Transformers 4.28.1
  • TensorFlow 2.10.0
  • Datasets 2.11.0
  • Tokenizers 0.13.3