goemotions_bertspanish_finetunig_g

This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-cased on an unknown dataset (the model name suggests a Spanish GoEmotions variant, but the card does not confirm this). It achieves the following results on the evaluation set:

  • Loss: 3.2987
  • Accuracy: 0.465
  • F1: 0.3459

Model description

More information needed

Intended uses & limitations

More information needed
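
Although the card gives no usage details, a fine-tuned sequence-classification checkpoint like this one can typically be loaded with the Transformers `pipeline` API. This is a hypothetical sketch: the example sentence is invented, the label set is not documented in the card, and running it requires downloading the model from the Hub.

```python
from transformers import pipeline

# Hypothetical usage sketch; the emotion label names returned by the model
# are not documented in this card.
classifier = pipeline(
    "text-classification",
    model="mrovejaxd/goemotions_bertspanish_finetunig_g",
)

# Invented Spanish example sentence ("What a joy to see you!").
print(classifier("¡Qué alegría verte!"))
```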

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 6
  • eval_batch_size: 6
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 64
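
The hyperparameters above map onto a Transformers `TrainingArguments` configuration roughly as follows. This is a reconstruction, not the author's actual training script; `output_dir` is a placeholder, and the Adam betas/epsilon shown are also the library defaults.

```python
from transformers import TrainingArguments

# Config fragment reconstructed from the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="goemotions_bertspanish_finetunig_g",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=64,
)
```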

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|---|---|---|---|---|---|
| 2.6699 | 1.0 | 2000 | 2.6095 | 0.3042 | 0.0253 |
| 2.501 | 2.0 | 4000 | 2.3754 | 0.3925 | 0.0803 |
| 2.3358 | 3.0 | 6000 | 2.2518 | 0.4217 | 0.0978 |
| 2.2927 | 4.0 | 8000 | 2.1618 | 0.4425 | 0.1188 |
| 2.12 | 5.0 | 10000 | 2.0765 | 0.4575 | 0.1421 |
| 2.0914 | 6.0 | 12000 | 2.0328 | 0.4583 | 0.1731 |
| 2.0042 | 7.0 | 14000 | 1.9635 | 0.4783 | 0.2000 |
| 1.9661 | 8.0 | 16000 | 1.9428 | 0.4792 | 0.2261 |
| 1.8821 | 9.0 | 18000 | 1.9069 | 0.485 | 0.2251 |
| 1.8648 | 10.0 | 20000 | 1.8867 | 0.4917 | 0.2451 |
| 1.7464 | 11.0 | 22000 | 1.8609 | 0.5017 | 0.2543 |
| 1.763 | 12.0 | 24000 | 1.8369 | 0.5025 | 0.2698 |
| 1.7309 | 13.0 | 26000 | 1.8461 | 0.4967 | 0.2780 |
| 1.648 | 14.0 | 28000 | 1.8432 | 0.5017 | 0.2783 |
| 1.6077 | 15.0 | 30000 | 1.8419 | 0.4892 | 0.2911 |
| 1.5947 | 16.0 | 32000 | 1.8492 | 0.4983 | 0.2904 |
| 1.5088 | 17.0 | 34000 | 1.8646 | 0.5125 | 0.2927 |
| 1.5087 | 18.0 | 36000 | 1.8509 | 0.5025 | 0.2987 |
| 1.4678 | 19.0 | 38000 | 1.8596 | 0.4958 | 0.3105 |
| 1.3912 | 20.0 | 40000 | 1.8848 | 0.5025 | 0.3018 |
| 1.3895 | 21.0 | 42000 | 1.8688 | 0.4883 | 0.2993 |
| 1.3499 | 22.0 | 44000 | 1.9051 | 0.4992 | 0.3124 |
| 1.3294 | 23.0 | 46000 | 1.9304 | 0.4942 | 0.3258 |
| 1.2276 | 24.0 | 48000 | 1.9361 | 0.4933 | 0.3374 |
| 1.2186 | 25.0 | 50000 | 1.9587 | 0.4925 | 0.3423 |
| 1.2256 | 26.0 | 52000 | 1.9841 | 0.5 | 0.3446 |
| 1.1503 | 27.0 | 54000 | 2.0229 | 0.4842 | 0.3446 |
| 1.1663 | 28.0 | 56000 | 2.0386 | 0.4817 | 0.3409 |
| 1.0824 | 29.0 | 58000 | 2.0353 | 0.4892 | 0.3388 |
| 1.0552 | 30.0 | 60000 | 2.0629 | 0.5058 | 0.3530 |
| 1.0633 | 31.0 | 62000 | 2.0942 | 0.4883 | 0.3559 |
| 0.9907 | 32.0 | 64000 | 2.1115 | 0.49 | 0.3479 |
| 0.9964 | 33.0 | 66000 | 2.1404 | 0.4867 | 0.3523 |
| 0.9144 | 34.0 | 68000 | 2.1559 | 0.4908 | 0.3476 |
| 0.9246 | 35.0 | 70000 | 2.2217 | 0.475 | 0.3360 |
| 0.8605 | 36.0 | 72000 | 2.2469 | 0.475 | 0.3426 |
| 0.8902 | 37.0 | 74000 | 2.2601 | 0.4908 | 0.3459 |
| 0.8197 | 38.0 | 76000 | 2.3256 | 0.4675 | 0.3420 |
| 0.776 | 39.0 | 78000 | 2.3163 | 0.4925 | 0.3483 |
| 0.8043 | 40.0 | 80000 | 2.3630 | 0.4808 | 0.3499 |
| 0.7914 | 41.0 | 82000 | 2.4009 | 0.475 | 0.3457 |
| 0.7128 | 42.0 | 84000 | 2.4279 | 0.4767 | 0.3510 |
| 0.7011 | 43.0 | 86000 | 2.4393 | 0.4758 | 0.3310 |
| 0.6647 | 44.0 | 88000 | 2.4876 | 0.4808 | 0.3455 |
| 0.6663 | 45.0 | 90000 | 2.5062 | 0.4858 | 0.3584 |
| 0.6134 | 46.0 | 92000 | 2.5754 | 0.4758 | 0.3476 |
| 0.6141 | 47.0 | 94000 | 2.5886 | 0.4775 | 0.3575 |
| 0.6337 | 48.0 | 96000 | 2.6593 | 0.4675 | 0.3433 |
| 0.6029 | 49.0 | 98000 | 2.6852 | 0.4658 | 0.3425 |
| 0.5546 | 50.0 | 100000 | 2.6974 | 0.4708 | 0.3488 |
| 0.5331 | 51.0 | 102000 | 2.7254 | 0.4708 | 0.3418 |
| 0.5025 | 52.0 | 104000 | 2.7650 | 0.4742 | 0.3497 |
| 0.4917 | 53.0 | 106000 | 2.7831 | 0.4767 | 0.3425 |
| 0.4526 | 54.0 | 108000 | 2.8553 | 0.4667 | 0.3489 |
| 0.4399 | 55.0 | 110000 | 2.9210 | 0.4667 | 0.3539 |
| 0.4761 | 56.0 | 112000 | 2.9234 | 0.4717 | 0.3542 |
| 0.3981 | 57.0 | 114000 | 2.9558 | 0.4708 | 0.3489 |
| 0.3861 | 58.0 | 116000 | 3.0372 | 0.4725 | 0.3522 |
| 0.4151 | 59.0 | 118000 | 3.0529 | 0.4683 | 0.3429 |
| 0.3756 | 60.0 | 120000 | 3.1011 | 0.4725 | 0.3519 |
| 0.3688 | 61.0 | 122000 | 3.1638 | 0.4667 | 0.3469 |
| 0.3301 | 62.0 | 124000 | 3.2081 | 0.4708 | 0.3511 |
| 0.3236 | 63.0 | 126000 | 3.2128 | 0.4733 | 0.3574 |
| 0.3212 | 64.0 | 128000 | 3.2987 | 0.465 | 0.3459 |
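
The Accuracy and F1 columns above would typically come from a `compute_metrics` callback passed to the `Trainer`. A minimal sketch, assuming single-label classification and macro-averaged F1 (the card does not state which F1 averaging was used):

```python
import numpy as np

def compute_metrics(eval_pred):
    """Accuracy and macro-F1 from raw logits (single-label assumption)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    accuracy = float((preds == labels).mean())

    # Macro-F1: unweighted mean of per-class F1 scores.
    classes = np.unique(np.concatenate([labels, preds]))
    f1_scores = []
    for c in classes:
        tp = np.sum((preds == c) & (labels == c))
        fp = np.sum((preds == c) & (labels != c))
        fn = np.sum((preds != c) & (labels == c))
        denom = 2 * tp + fp + fn
        f1_scores.append(2 * tp / denom if denom else 0.0)

    return {"accuracy": accuracy, "f1": float(np.mean(f1_scores))}
```

In practice the same numbers can be obtained from `sklearn.metrics.f1_score(labels, preds, average="macro")`.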

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.1

Model tree for mrovejaxd/goemotions_bertspanish_finetunig_g

Fine-tuned from dccuchile/bert-base-spanish-wwm-cased.