
electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish

This model is a fine-tuned version of mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3534
  • Accuracy: 0.7585
  • F1: 0.7585
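Accuracy and F1 are reported as identical values, which is what micro-averaged F1 yields for single-label classification (whether this card's F1 is micro- or weighted-averaged is not stated). A minimal pure-Python sketch of both metrics, with hypothetical label lists:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def micro_f1(y_true, y_pred):
    """Micro-averaged F1: pool true positives, false positives, and false
    negatives across all classes before computing precision and recall."""
    classes = set(y_true) | set(y_pred)
    tp = fp = fn = 0
    for cls in classes:
        tp += sum(t == p == cls for t, p in zip(y_true, y_pred))
        fp += sum(p == cls and t != cls for t, p in zip(y_true, y_pred))
        fn += sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For single-label predictions every wrong prediction is simultaneously one false positive and one false negative, so micro precision, micro recall, micro F1, and accuracy all coincide.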

Model description

More information needed

Intended uses & limitations

More information needed
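Pending details from the author, the model can be tried with the standard transformers text-classification pipeline. A minimal sketch; the hub id below is an assumption inferred from this card's title, and the example sentence is illustrative only:

```python
def top_label(predictions):
    """Pick the highest-scoring entry from a pipeline-style prediction list,
    i.e. a list of {"label": str, "score": float} dicts."""
    return max(predictions, key=lambda p: p["score"])["label"]

def load_sentiment_pipeline(
    # Assumed hub id, taken from this card's title; verify before use.
    model_id="mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish",
):
    """Load the model behind the transformers text-classification pipeline."""
    from transformers import pipeline  # lazy import; requires `pip install transformers`
    return pipeline("text-classification", model=model_id)

if __name__ == "__main__":
    clf = load_sentiment_pipeline()
    # The pipeline returns a list of {"label": ..., "score": ...} dicts.
    print(top_label(clf("La comida estuvo increíble")))
```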

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 60
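The list above maps directly onto `transformers.TrainingArguments`; Adam with betas=(0.9, 0.999) and epsilon=1e-08 and the linear scheduler are the transformers defaults, so only the remaining values need to be set explicitly. A minimal sketch (the `output_dir` is an assumption):

```python
# Hyperparameters from the card, kept in a plain dict so they remain
# inspectable without transformers installed.
HPARAMS = {
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 60,
}

def make_training_args(output_dir="./results"):
    """Build a TrainingArguments object matching the card's hyperparameters."""
    from transformers import TrainingArguments  # lazy import
    return TrainingArguments(output_dir=output_dir, **HPARAMS)
```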

Training results

Training Loss Epoch Step Validation Loss Accuracy F1
0.8145 1.0 1222 0.7033 0.7168 0.7168
0.7016 2.0 2444 0.5936 0.7731 0.7731
0.6183 3.0 3666 0.5190 0.8046 0.8046
0.5516 4.0 4888 0.4678 0.8301 0.8301
0.4885 5.0 6110 0.3670 0.8713 0.8713
0.4353 6.0 7332 0.3119 0.8987 0.8987
0.3957 7.0 8554 0.2908 0.9084 0.9084
0.3386 8.0 9776 0.2108 0.9348 0.9348
0.2976 9.0 10998 0.1912 0.9422 0.9422
0.2828 10.0 12220 0.1496 0.9591 0.9591
0.243 11.0 13442 0.1326 0.9639 0.9639
0.2049 12.0 14664 0.1249 0.9693 0.9693
0.2041 13.0 15886 0.1049 0.9752 0.9752
0.1855 14.0 17108 0.0816 0.9798 0.9798
0.1637 15.0 18330 0.0733 0.9836 0.9836
0.1531 16.0 19552 0.0577 0.9880 0.9880
0.1221 17.0 20774 0.0581 0.9895 0.9895
0.1207 18.0 21996 0.0463 0.9903 0.9903
0.1152 19.0 23218 0.0472 0.9908 0.9908
0.1028 20.0 24440 0.0356 0.9936 0.9936
0.1027 21.0 25662 0.0278 0.9957 0.9957
0.0915 22.0 26884 0.0344 0.9946 0.9946
0.0887 23.0 28106 0.0243 0.9954 0.9954
0.0713 24.0 29328 0.0208 0.9969 0.9969
0.0749 25.0 30550 0.0198 0.9964 0.9964
0.0699 26.0 31772 0.0153 0.9969 0.9969
0.0567 27.0 32994 0.0144 0.9972 0.9972
0.0613 28.0 34216 0.0105 0.9982 0.9982
0.0567 29.0 35438 0.0117 0.9982 0.9982
0.0483 30.0 36660 0.0072 0.9985 0.9985
0.0469 31.0 37882 0.0063 0.9987 0.9987
0.0485 32.0 39104 0.0067 0.9985 0.9985
0.0464 33.0 40326 0.0020 0.9995 0.9995
0.0472 34.0 41548 0.0036 0.9995 0.9995
0.0388 35.0 42770 0.0016 0.9995 0.9995
0.0248 36.0 43992 0.0047 0.9990 0.9990
0.0396 37.0 45214 0.0004 0.9997 0.9997
0.0331 38.0 46436 0.0020 0.9995 0.9995
0.0292 39.0 47658 0.0000 1.0 1.0
0.0253 40.0 48880 0.0001 1.0 1.0
0.0285 41.0 50102 0.0000 1.0 1.0
0.0319 42.0 51324 0.0000 1.0 1.0
0.0244 43.0 52546 0.0000 1.0 1.0
0.0261 44.0 53768 0.0001 1.0 1.0
0.0256 45.0 54990 0.0000 1.0 1.0
0.0258 46.0 56212 0.0000 1.0 1.0
0.0173 47.0 57434 0.0000 1.0 1.0
0.0253 48.0 58656 0.0000 1.0 1.0
0.0241 49.0 59878 0.0000 1.0 1.0
0.019 50.0 61100 0.0000 1.0 1.0
0.0184 51.0 62322 0.0000 1.0 1.0
0.0139 52.0 63544 0.0000 1.0 1.0
0.0159 53.0 64766 0.0000 1.0 1.0
0.0119 54.0 65988 0.0000 1.0 1.0
0.0253 55.0 67210 0.0000 1.0 1.0
0.0166 56.0 68432 0.0000 1.0 1.0
0.0125 57.0 69654 0.0000 1.0 1.0
0.0155 58.0 70876 0.0000 1.0 1.0
0.0106 59.0 72098 0.0000 1.0 1.0
0.0083 60.0 73320 0.0000 1.0 1.0

Framework versions

  • Transformers 4.18.0
  • Pytorch 1.11.0+cu113
  • Datasets 2.1.0
  • Tokenizers 0.12.1
Model size

  • 13.6M parameters (Safetensors; I64 and F32 tensors)