
distilbert-base-multilingual-cased-misogyny-sexism-decay0.01-fr-outofdomain

This model is a fine-tuned version of distilbert-base-multilingual-cased on an unspecified dataset. It achieves the following results on the evaluation set (see the usage sketch after the list):

  • Loss: 3.1385
  • Accuracy: 0.2369
  • F1: 0.1919
  • Precision: 0.1087
  • Recall: 0.8148
  • MAE: 0.7631
  • True negatives (TN): 1279
  • False positives (FP): 6491
  • False negatives (FN): 180
  • True positives (TP): 792
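
The confusion-matrix counts (TN/FP/FN/TP) indicate a binary text classifier. A minimal inference sketch, assuming the checkpoint is published under this name on the Hugging Face Hub (the owner/namespace prefix is not shown on this card and must be filled in):

```python
from transformers import pipeline

# Assumption: prepend the actual Hub namespace, e.g. "<owner>/distilbert-base-..."
classifier = pipeline(
    "text-classification",
    model="distilbert-base-multilingual-cased-misogyny-sexism-decay0.01-fr-outofdomain",
)

# Label names depend on the fine-tuned config (e.g. LABEL_0 / LABEL_1).
print(classifier("Un exemple de phrase à classer."))
```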

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto code follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
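
A sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The `weight_decay=0.01` value is an assumption inferred from the `decay0.01` suffix in the model name, the per-epoch evaluation is inferred from the results table below, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",                 # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    weight_decay=0.01,                # assumption: inferred from "decay0.01" in the model name
    evaluation_strategy="epoch",      # assumption: the card reports metrics once per epoch
)
```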

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall | MAE    | TN   | FP   | FN  | TP  |
|---------------|-------|-------|-----------------|----------|--------|-----------|--------|--------|------|------|-----|-----|
| 0.2166        | 1.0   | 2233  | 1.2875          | 0.3377   | 0.2025 | 0.1169    | 0.7562 | 0.6623 | 2217 | 5553 | 237 | 735 |
| 0.2068        | 2.0   | 4466  | 1.8399          | 0.3141   | 0.2154 | 0.1234    | 0.8467 | 0.6859 | 1923 | 5847 | 149 | 823 |
| 0.2015        | 3.0   | 6699  | 1.5430          | 0.3543   | 0.2053 | 0.1189    | 0.75   | 0.6457 | 2368 | 5402 | 243 | 729 |
| 0.1739        | 4.0   | 8932  | 1.8406          | 0.2815   | 0.1911 | 0.1092    | 0.7634 | 0.7185 | 1719 | 6051 | 230 | 742 |
| 0.163         | 5.0   | 11165 | 2.0274          | 0.2170   | 0.1957 | 0.1105    | 0.8570 | 0.7830 | 1064 | 6706 | 139 | 833 |
| 0.1481        | 6.0   | 13398 | 1.6407          | 0.2467   | 0.1931 | 0.1096    | 0.8107 | 0.7533 | 1369 | 6401 | 184 | 788 |
| 0.1334        | 7.0   | 15631 | 3.0800          | 0.1875   | 0.1953 | 0.1097    | 0.8868 | 0.8125 | 777  | 6993 | 110 | 862 |
| 0.12          | 8.0   | 17864 | 2.5311          | 0.2183   | 0.1962 | 0.1108    | 0.8580 | 0.7817 | 1074 | 6696 | 138 | 834 |
| 0.1104        | 9.0   | 20097 | 2.9522          | 0.2135   | 0.1935 | 0.1092    | 0.8488 | 0.7865 | 1041 | 6729 | 147 | 825 |
| 0.0938        | 10.0  | 22330 | 3.1385          | 0.2369   | 0.1919 | 0.1087    | 0.8148 | 0.7631 | 1279 | 6491 | 180 | 792 |
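
The metric columns above can be reproduced with a `compute_metrics` function of roughly this shape. This is a sketch built on standard scikit-learn calls, not the authors' exact code:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, confusion_matrix, f1_score,
    mean_absolute_error, precision_score, recall_score,
)

def compute_metrics(eval_pred):
    """Compute the metrics reported in the table from Trainer predictions."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # For a binary problem, ravel() yields the four confusion-matrix cells.
    tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        "mae": mean_absolute_error(labels, preds),
        "tn": tn, "fp": fp, "fn": fn, "tp": tp,
    }
```

Passed as `Trainer(..., compute_metrics=compute_metrics)`, this evaluates once per epoch under the `evaluation_strategy="epoch"` assumption noted above.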

Framework versions

  • Transformers 4.20.1
  • Pytorch 1.12.0+cu102
  • Datasets 2.3.2
  • Tokenizers 0.12.1