---
language:
  - id
license: mit
base_model: indolem/indobert-base-uncased
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - precision
  - recall
  - f1
model-index:
  - name: sentiment-pt-pl30-2
    results: []
---

# sentiment-pt-pl30-2

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2998
- Accuracy: 0.8997
- Precision: 0.8804
- Recall: 0.8766
- F1: 0.8785
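
As a quick sanity check, here is a minimal inference sketch (not part of the original card). It assumes the checkpoint is published as `apwic/sentiment-pt-pl30-2` and that the default `LABEL_0`/`LABEL_1` id-to-label mapping applies; adjust both if your copy differs.

```python
from transformers import pipeline

# The repo id is an assumption inferred from this card's name; replace it
# with the actual Hub path or a local checkpoint directory if needed.
classifier = pipeline("text-classification", model="apwic/sentiment-pt-pl30-2")

# The base model (indolem/indobert-base-uncased) targets Indonesian text.
print(classifier("Film ini sangat bagus!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```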

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
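
For reproducibility, the listed values map onto `transformers.TrainingArguments` roughly as in the sketch below. The output directory and the per-epoch evaluation strategy are assumptions (the per-epoch rows in the results table suggest the latter), not settings taken from the original run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sentiment-pt-pl30-2",  # assumed, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    evaluation_strategy="epoch",  # assumed from the per-epoch results table
)
```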

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5411        | 1.0   | 122  | 0.4939          | 0.7368   | 0.6762    | 0.6413 | 0.6509 |
| 0.4231        | 2.0   | 244  | 0.3852          | 0.8246   | 0.7888    | 0.8184 | 0.7995 |
| 0.3331        | 3.0   | 366  | 0.3313          | 0.8471   | 0.8233    | 0.7968 | 0.8081 |
| 0.2924        | 4.0   | 488  | 0.3057          | 0.8822   | 0.8610    | 0.8517 | 0.8561 |
| 0.2705        | 5.0   | 610  | 0.3069          | 0.8747   | 0.8605    | 0.8288 | 0.8422 |
| 0.2461        | 6.0   | 732  | 0.3119          | 0.8747   | 0.8436    | 0.8763 | 0.8562 |
| 0.2313        | 7.0   | 854  | 0.2880          | 0.8872   | 0.8606    | 0.8727 | 0.8662 |
| 0.2183        | 8.0   | 976  | 0.2773          | 0.8922   | 0.8749    | 0.8612 | 0.8676 |
| 0.2093        | 9.0   | 1098 | 0.2804          | 0.8847   | 0.8648    | 0.8534 | 0.8588 |
| 0.1986        | 10.0  | 1220 | 0.2890          | 0.8922   | 0.8804    | 0.8537 | 0.8655 |
| 0.1881        | 11.0  | 1342 | 0.2911          | 0.8872   | 0.8658    | 0.8602 | 0.8629 |
| 0.1802        | 12.0  | 1464 | 0.2866          | 0.8822   | 0.8596    | 0.8542 | 0.8568 |
| 0.169         | 13.0  | 1586 | 0.2964          | 0.8847   | 0.8697    | 0.8459 | 0.8565 |
| 0.1709        | 14.0  | 1708 | 0.2944          | 0.8872   | 0.8658    | 0.8602 | 0.8629 |
| 0.1492        | 15.0  | 1830 | 0.2866          | 0.8872   | 0.8645    | 0.8627 | 0.8636 |
| 0.1493        | 16.0  | 1952 | 0.2951          | 0.8947   | 0.8708    | 0.8780 | 0.8743 |
| 0.1425        | 17.0  | 2074 | 0.3048          | 0.8947   | 0.8773    | 0.8655 | 0.8711 |
| 0.1375        | 18.0  | 2196 | 0.2987          | 0.8997   | 0.8791    | 0.8791 | 0.8791 |
| 0.1326        | 19.0  | 2318 | 0.3073          | 0.8997   | 0.8819    | 0.8741 | 0.8778 |
| 0.1365        | 20.0  | 2440 | 0.2998          | 0.8997   | 0.8804    | 0.8766 | 0.8785 |
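
The card does not say how precision, recall, and F1 were averaged. A typical `compute_metrics` hook passed to `transformers.Trainer` that reports these four columns might look like the following sketch; the macro averaging is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # transformers.Trainer passes a (logits, labels) pair at evaluation time.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "macro" averaging is an assumption; the card does not specify it.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```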

### Framework versions

- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2