electricidad-small-discriminator-finetuned-usElectionTweets1Jul11Nov-spanish

This model is a fine-tuned version of mrm8488/electricidad-small-discriminator on an unspecified dataset; the model name suggests Spanish-language tweets about the US election collected between 1 July and 11 November. It achieves the following results on the evaluation set:

  • Loss: 2.3327
  • Accuracy: 0.7642
  • F1: 0.7642

Note that this evaluation loss (2.3327) is far higher than the final validation loss in the training log below (0.0014), which suggests the evaluation set is distinct from the validation split used during training.
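The identical Accuracy and F1 values are consistent with a micro-averaged F1, which is mathematically equal to accuracy in single-label classification; the averaging mode is not stated on the card, so this is an assumption. A minimal scikit-learn illustration:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical labels for illustration only; the actual evaluation
# data for this model is not published with the card.
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 1, 1, 0]

# In single-label classification, micro-averaged F1 counts every
# prediction exactly once, so it collapses to plain accuracy,
# which would explain the identical Accuracy and F1 above.
print(accuracy_score(y_true, y_pred))             # 0.8
print(f1_score(y_true, y_pred, average="micro"))  # 0.8
```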

Model description

More information needed

Intended uses & limitations

More information needed
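In the absence of documented usage, here is a minimal inference sketch with the transformers pipeline API. The Hub namespace is a placeholder (the card does not show the owning account), and the label names in the output are the library defaults, since the class mapping is not documented:

```python
from transformers import pipeline

# Replace <namespace> with the Hub account that hosts this checkpoint.
classifier = pipeline(
    "text-classification",
    model="<namespace>/electricidad-small-discriminator-finetuned-usElectionTweets1Jul11Nov-spanish",
)

# Spanish-language input, matching the language implied by the model name.
print(classifier("Las elecciones de este año están muy reñidas."))
# Illustrative output; the real labels depend on the checkpoint's config:
# [{'label': 'LABEL_0', 'score': 0.97}]
```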

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 60
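A minimal reproduction sketch of these settings with the Trainer API from Transformers 4.18. The sequence-classification head, the two-class label set, and the tiny inline dataset are all assumptions, since the card specifies neither the task labels nor the training data:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "mrm8488/electricidad-small-discriminator"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=2,  # assumption: the card does not state the number of classes
)

# Placeholder data; the election-tweet dataset itself is not released.
raw = Dataset.from_dict(
    {"text": ["primer tuit de ejemplo", "segundo tuit de ejemplo"], "label": [0, 1]}
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

dataset = raw.map(tokenize, batched=True)

# Mirrors the hyperparameters listed above; Adam betas=(0.9, 0.999)
# and epsilon=1e-08 are already the optimizer defaults in Transformers.
args = TrainingArguments(
    output_dir="electricidad-small-usElectionTweets",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=60,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    eval_dataset=dataset,  # placeholder; a real run would use a held-out split
    tokenizer=tokenizer,
)
trainer.train()
```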

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.88 | 1.0 | 1222 | 0.7491 | 0.6943 | 0.6943 |
| 0.7292 | 2.0 | 2444 | 0.6253 | 0.7544 | 0.7544 |
| 0.6346 | 3.0 | 3666 | 0.5292 | 0.7971 | 0.7971 |
| 0.565 | 4.0 | 4888 | 0.4831 | 0.8168 | 0.8168 |
| 0.4898 | 5.0 | 6110 | 0.4086 | 0.8532 | 0.8532 |
| 0.4375 | 6.0 | 7332 | 0.3411 | 0.8831 | 0.8831 |
| 0.3968 | 7.0 | 8554 | 0.2735 | 0.9100 | 0.9100 |
| 0.3321 | 8.0 | 9776 | 0.2343 | 0.9253 | 0.9253 |
| 0.3045 | 9.0 | 10998 | 0.1855 | 0.9450 | 0.9450 |
| 0.2837 | 10.0 | 12220 | 0.1539 | 0.9591 | 0.9591 |
| 0.2411 | 11.0 | 13442 | 0.1309 | 0.9650 | 0.9650 |
| 0.2203 | 12.0 | 14664 | 0.1100 | 0.9716 | 0.9716 |
| 0.1953 | 13.0 | 15886 | 0.1067 | 0.9760 | 0.9760 |
| 0.1836 | 14.0 | 17108 | 0.0755 | 0.9813 | 0.9813 |
| 0.1611 | 15.0 | 18330 | 0.0731 | 0.9829 | 0.9829 |
| 0.1479 | 16.0 | 19552 | 0.0746 | 0.9839 | 0.9839 |
| 0.138 | 17.0 | 20774 | 0.0516 | 0.9895 | 0.9895 |
| 0.129 | 18.0 | 21996 | 0.0481 | 0.9903 | 0.9903 |
| 0.1182 | 19.0 | 23218 | 0.0401 | 0.9926 | 0.9926 |
| 0.1065 | 20.0 | 24440 | 0.0488 | 0.9895 | 0.9895 |
| 0.096 | 21.0 | 25662 | 0.0333 | 0.9928 | 0.9928 |
| 0.0889 | 22.0 | 26884 | 0.0222 | 0.9951 | 0.9951 |
| 0.0743 | 23.0 | 28106 | 0.0236 | 0.9951 | 0.9951 |
| 0.0821 | 24.0 | 29328 | 0.0322 | 0.9931 | 0.9931 |
| 0.0866 | 25.0 | 30550 | 0.0135 | 0.9974 | 0.9974 |
| 0.0616 | 26.0 | 31772 | 0.0100 | 0.9980 | 0.9980 |
| 0.0641 | 27.0 | 32994 | 0.0112 | 0.9977 | 0.9977 |
| 0.0603 | 28.0 | 34216 | 0.0071 | 0.9987 | 0.9987 |
| 0.0491 | 29.0 | 35438 | 0.0088 | 0.9982 | 0.9982 |
| 0.0563 | 30.0 | 36660 | 0.0071 | 0.9982 | 0.9982 |
| 0.0467 | 31.0 | 37882 | 0.0045 | 0.9990 | 0.9990 |
| 0.0545 | 32.0 | 39104 | 0.0057 | 0.9987 | 0.9987 |
| 0.0519 | 33.0 | 40326 | 0.0048 | 0.9992 | 0.9992 |
| 0.0524 | 34.0 | 41548 | 0.0030 | 0.9995 | 0.9995 |
| 0.044 | 35.0 | 42770 | 0.0046 | 0.9990 | 0.9990 |
| 0.0442 | 36.0 | 43992 | 0.0029 | 0.9995 | 0.9995 |
| 0.0352 | 37.0 | 45214 | 0.0035 | 0.9995 | 0.9995 |
| 0.0348 | 38.0 | 46436 | 0.0029 | 0.9995 | 0.9995 |
| 0.0295 | 39.0 | 47658 | 0.0023 | 0.9995 | 0.9995 |
| 0.0289 | 40.0 | 48880 | 0.0035 | 0.9995 | 0.9995 |
| 0.0292 | 41.0 | 50102 | 0.0023 | 0.9995 | 0.9995 |
| 0.0259 | 42.0 | 51324 | 0.0027 | 0.9995 | 0.9995 |
| 0.0217 | 43.0 | 52546 | 0.0031 | 0.9995 | 0.9995 |
| 0.0278 | 44.0 | 53768 | 0.0018 | 0.9995 | 0.9995 |
| 0.0254 | 45.0 | 54990 | 0.0023 | 0.9995 | 0.9995 |
| 0.0164 | 46.0 | 56212 | 0.0016 | 0.9997 | 0.9997 |
| 0.0277 | 47.0 | 57434 | 0.0027 | 0.9997 | 0.9997 |
| 0.0158 | 48.0 | 58656 | 0.0029 | 0.9997 | 0.9997 |
| 0.0178 | 49.0 | 59878 | 0.0023 | 0.9997 | 0.9997 |
| 0.022 | 50.0 | 61100 | 0.0019 | 0.9997 | 0.9997 |
| 0.0167 | 51.0 | 62322 | 0.0018 | 0.9997 | 0.9997 |
| 0.0159 | 52.0 | 63544 | 0.0017 | 0.9997 | 0.9997 |
| 0.0105 | 53.0 | 64766 | 0.0016 | 0.9997 | 0.9997 |
| 0.0111 | 54.0 | 65988 | 0.0015 | 0.9997 | 0.9997 |
| 0.0139 | 55.0 | 67210 | 0.0021 | 0.9997 | 0.9997 |
| 0.0152 | 56.0 | 68432 | 0.0026 | 0.9997 | 0.9997 |
| 0.0191 | 57.0 | 69654 | 0.0022 | 0.9997 | 0.9997 |
| 0.0075 | 58.0 | 70876 | 0.0017 | 0.9997 | 0.9997 |
| 0.0141 | 59.0 | 72098 | 0.0016 | 0.9997 | 0.9997 |
| 0.0086 | 60.0 | 73320 | 0.0014 | 0.9997 | 0.9997 |

Framework versions

  • Transformers 4.18.0
  • Pytorch 1.11.0+cu113
  • Datasets 2.1.0
  • Tokenizers 0.12.1