---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
  - precision
  - recall
model-index:
  - name: SloBertAA_Top100_WithoutOOC_082023_MultilingualBertBase
    results: []
---

# SloBertAA_Top100_WithoutOOC_082023_MultilingualBertBase

This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.8490
- Accuracy: 0.6964
- F1: 0.6972
- Precision: 0.7001
- Recall: 0.6964
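
The card does not yet include usage instructions, so the following is a minimal inference sketch, not a documented API. The repository id is an assumption pieced together from the uploader and model names, and the input is an arbitrary Slovenian sentence (the training data is undocumented).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id, assembled from the uploader and model names above.
repo_id = "gregorgabrovsek/SloBertAA_Top100_WithoutOOC_082023_MultilingualBertBase"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Arbitrary Slovenian example text; the actual input format is undocumented.
text = "Primer besedila, ki mu želimo pripisati avtorja."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # predicted class label
```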

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
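
For reproducibility, the list above maps onto a `TrainingArguments` object as sketched below. The `output_dir` and the per-epoch evaluation strategy are assumptions (the per-epoch rows in the results table suggest the latter); the Adam settings are spelled out even though they match the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SloBertAA_Top100_WithoutOOC_082023_MultilingualBertBase",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    adam_beta1=0.9,      # Transformers default, listed explicitly above
    adam_beta2=0.999,    # default
    adam_epsilon=1e-8,   # default
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumed from the per-epoch results rows
)
```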

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6988        | 1.0   | 44675  | 1.6287          | 0.5883   | 0.5902 | 0.6087    | 0.5883 |
| 1.3829        | 2.0   | 89350  | 1.4305          | 0.6351   | 0.6379 | 0.6563    | 0.6351 |
| 1.1122        | 3.0   | 134025 | 1.3339          | 0.6635   | 0.6651 | 0.6774    | 0.6635 |
| 0.881         | 4.0   | 178700 | 1.3128          | 0.6799   | 0.6805 | 0.6876    | 0.6799 |
| 0.7032        | 5.0   | 223375 | 1.3628          | 0.6831   | 0.6840 | 0.6932    | 0.6831 |
| 0.5454        | 6.0   | 268050 | 1.4343          | 0.6877   | 0.6890 | 0.6956    | 0.6877 |
| 0.408         | 7.0   | 312725 | 1.5546          | 0.6877   | 0.6888 | 0.6958    | 0.6877 |
| 0.2752        | 8.0   | 357400 | 1.6623          | 0.6932   | 0.6948 | 0.6992    | 0.6932 |
| 0.1844        | 9.0   | 402075 | 1.7825          | 0.6947   | 0.6959 | 0.6995    | 0.6947 |
| 0.1506        | 10.0  | 446750 | 1.8490          | 0.6964   | 0.6972 | 0.7001    | 0.6964 |
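
The card omits the metric code. Recall equals accuracy in every row above, which is what weighted averaging over all classes produces, so the scores are consistent with (but not documented as) weighted averaging. A hypothetical `compute_metrics` along those lines:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "weighted" is inferred from the table, not documented in the card.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```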

### Framework versions

- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2