---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - precision
  - recall
  - f1
model-index:
  - name: Frozen11-50epoch-BERT-multilingual-finetuned-CEFR_ner-10000news
    results: []
---

# Frozen11-50epoch-BERT-multilingual-finetuned-CEFR_ner-10000news

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.3983
- Accuracy: 0.2881
- Precision: 0.4062
- Recall: 0.7266
- F1: 0.4014
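
For quick experimentation the checkpoint can be loaded with the Transformers `pipeline` API. This is a minimal sketch, not taken from the card: the repo id below (hub owner namespace plus model name) is an assumption, so adjust it if the checkpoint lives elsewhere.

```python
# Minimal inference sketch. Assumptions: the checkpoint carries a
# token-classification head and is hosted under the repo id below.
from transformers import pipeline

repo_id = "DioBot2000/Frozen11-50epoch-BERT-multilingual-finetuned-CEFR_ner-10000news"
classifier = pipeline("token-classification", model=repo_id)

# Returns one dict per tagged token: entity label, score, and character span.
print(classifier("The quick brown fox jumps over the lazy dog."))
```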

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
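
These settings map onto the `Trainer` API roughly as follows. This is a sketch, not the author's script: the token-classification setup and label count are placeholders, and the layer freezing is only inferred from "Frozen11" in the model name.

```python
# Sketch of the configuration above (assumptions flagged inline).
from transformers import AutoModelForTokenClassification, TrainingArguments

model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=7,  # placeholder: the card does not list the label set
)

# "Frozen11" in the model name suggests the first 11 encoder layers were kept
# frozen during fine-tuning; this loop is one plausible reading (an assumption).
for layer in model.bert.encoder.layer[:11]:
    for param in layer.parameters():
        param.requires_grad = False

args = TrainingArguments(
    output_dir="Frozen11-50epoch-BERT-multilingual-finetuned-CEFR_ner-10000news",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    eval_strategy="epoch",  # evaluation ran once per epoch (see table below)
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)
```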

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--:|
| 0.7274 | 1.0 | 500 | 0.6491 | 0.2455 | 0.4102 | 0.4821 | 0.2722 |
| 0.6213 | 2.0 | 1000 | 0.5746 | 0.2543 | 0.3790 | 0.5244 | 0.2899 |
| 0.5375 | 3.0 | 1500 | 0.5073 | 0.2626 | 0.3959 | 0.5531 | 0.3168 |
| 0.4721 | 4.0 | 2000 | 0.4612 | 0.2690 | 0.4067 | 0.5837 | 0.3466 |
| 0.4231 | 5.0 | 2500 | 0.4297 | 0.2728 | 0.3975 | 0.6080 | 0.3571 |
| 0.3828 | 6.0 | 3000 | 0.4104 | 0.2751 | 0.4034 | 0.6224 | 0.3628 |
| 0.3494 | 7.0 | 3500 | 0.3973 | 0.2769 | 0.4011 | 0.6354 | 0.3658 |
| 0.3223 | 8.0 | 4000 | 0.3764 | 0.2796 | 0.4004 | 0.6546 | 0.3765 |
| 0.2991 | 9.0 | 4500 | 0.3693 | 0.2807 | 0.4138 | 0.6578 | 0.3874 |
| 0.2797 | 10.0 | 5000 | 0.3661 | 0.2811 | 0.3998 | 0.6726 | 0.3811 |
| 0.2623 | 11.0 | 5500 | 0.3571 | 0.2826 | 0.4118 | 0.6765 | 0.3922 |
| 0.2476 | 12.0 | 6000 | 0.3514 | 0.2833 | 0.4063 | 0.6877 | 0.3914 |
| 0.2337 | 13.0 | 6500 | 0.3586 | 0.2828 | 0.4046 | 0.6849 | 0.3880 |
| 0.2198 | 14.0 | 7000 | 0.3480 | 0.2844 | 0.4107 | 0.6904 | 0.3960 |
| 0.2096 | 15.0 | 7500 | 0.3495 | 0.2847 | 0.4128 | 0.6893 | 0.3968 |
| 0.2007 | 16.0 | 8000 | 0.3456 | 0.2852 | 0.4106 | 0.7003 | 0.3984 |
| 0.1894 | 17.0 | 8500 | 0.3543 | 0.2849 | 0.4003 | 0.7058 | 0.3905 |
| 0.1816 | 18.0 | 9000 | 0.3532 | 0.2851 | 0.4071 | 0.7066 | 0.3966 |
| 0.1742 | 19.0 | 9500 | 0.3500 | 0.2857 | 0.4138 | 0.7069 | 0.4024 |
| 0.1670 | 20.0 | 10000 | 0.3495 | 0.2860 | 0.4150 | 0.7079 | 0.4047 |
| 0.1590 | 21.0 | 10500 | 0.3599 | 0.2859 | 0.4067 | 0.7093 | 0.3973 |
| 0.1548 | 22.0 | 11000 | 0.3564 | 0.2863 | 0.4061 | 0.7139 | 0.3980 |
| 0.1492 | 23.0 | 11500 | 0.3587 | 0.2864 | 0.4081 | 0.7132 | 0.3994 |
| 0.1433 | 24.0 | 12000 | 0.3607 | 0.2867 | 0.4110 | 0.7148 | 0.4022 |
| 0.1379 | 25.0 | 12500 | 0.3593 | 0.2871 | 0.4133 | 0.7147 | 0.4045 |
| 0.1336 | 26.0 | 13000 | 0.3689 | 0.2866 | 0.4062 | 0.7164 | 0.3986 |
| 0.1296 | 27.0 | 13500 | 0.3656 | 0.2872 | 0.4056 | 0.7207 | 0.3996 |
| 0.1264 | 28.0 | 14000 | 0.3695 | 0.2871 | 0.4104 | 0.7177 | 0.4029 |
| 0.1223 | 29.0 | 14500 | 0.3700 | 0.2871 | 0.4113 | 0.7185 | 0.4041 |
| 0.1190 | 30.0 | 15000 | 0.3732 | 0.2872 | 0.4086 | 0.7206 | 0.4016 |
| 0.1150 | 31.0 | 15500 | 0.3765 | 0.2873 | 0.4096 | 0.7198 | 0.4030 |
| 0.1126 | 32.0 | 16000 | 0.3738 | 0.2878 | 0.4095 | 0.7239 | 0.4040 |
| 0.1100 | 33.0 | 16500 | 0.3825 | 0.2874 | 0.4069 | 0.7224 | 0.4007 |
| 0.1071 | 34.0 | 17000 | 0.3857 | 0.2874 | 0.4029 | 0.7243 | 0.3976 |
| 0.1050 | 35.0 | 17500 | 0.3871 | 0.2874 | 0.4069 | 0.7230 | 0.4008 |
| 0.1040 | 36.0 | 18000 | 0.3872 | 0.2875 | 0.4046 | 0.7254 | 0.3997 |
| 0.1021 | 37.0 | 18500 | 0.3890 | 0.2876 | 0.4063 | 0.7236 | 0.4006 |
| 0.0997 | 38.0 | 19000 | 0.3886 | 0.2877 | 0.4067 | 0.7259 | 0.4017 |
| 0.0982 | 39.0 | 19500 | 0.3909 | 0.2877 | 0.4084 | 0.7238 | 0.4027 |
| 0.0964 | 40.0 | 20000 | 0.3951 | 0.2877 | 0.4076 | 0.7245 | 0.4019 |
| 0.0948 | 41.0 | 20500 | 0.3945 | 0.2879 | 0.4064 | 0.7258 | 0.4011 |
| 0.0941 | 42.0 | 21000 | 0.3919 | 0.2881 | 0.4096 | 0.7267 | 0.4044 |
| 0.0932 | 43.0 | 21500 | 0.3937 | 0.2879 | 0.4066 | 0.7262 | 0.4014 |
| 0.0922 | 44.0 | 22000 | 0.3965 | 0.2879 | 0.4091 | 0.7261 | 0.4038 |
| 0.0908 | 45.0 | 22500 | 0.3977 | 0.2880 | 0.4061 | 0.7271 | 0.4013 |
| 0.0900 | 46.0 | 23000 | 0.3977 | 0.2880 | 0.4063 | 0.7263 | 0.4014 |
| 0.0906 | 47.0 | 23500 | 0.3978 | 0.2880 | 0.4051 | 0.7274 | 0.4005 |
| 0.0893 | 48.0 | 24000 | 0.3981 | 0.2881 | 0.4063 | 0.7269 | 0.4015 |
| 0.0887 | 49.0 | 24500 | 0.3982 | 0.2881 | 0.4067 | 0.7262 | 0.4017 |
| 0.0883 | 50.0 | 25000 | 0.3983 | 0.2881 | 0.4062 | 0.7266 | 0.4014 |
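
The card does not say how accuracy, precision, recall, and F1 were computed. In `generated_from_trainer` token-classification runs they typically come from seqeval via the `evaluate` library; the following is a hedged sketch under that assumption, with a placeholder label set.

```python
# Hedged sketch of a compute_metrics function for token classification.
# Assumptions: seqeval via `evaluate`, and a placeholder label list --
# the card specifies neither.
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-A1", "I-A1"]  # placeholder labels (assumption)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special and sub-word tokens, which label alignment marks with -100.
    true_preds = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "accuracy": results["overall_accuracy"],
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
    }
```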

### Framework versions

- Transformers 4.41.1
- Pytorch 2.2.1
- Datasets 2.19.1
- Tokenizers 0.19.1