# bert-base-multilingual-cased-finetuned-imdb

This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset (the model name suggests the IMDB reviews corpus, but the training data is not documented here). It achieves the following results on the evaluation set:

- Loss: 3.5331
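No task head is documented, but cards with this naming pattern are typically masked-language-model domain adaptations, in which case the evaluation cross-entropy of 3.5331 corresponds to a perplexity of roughly exp(3.5331) ≈ 34.2. Under that MLM assumption, a minimal usage sketch (the example sentence is arbitrary):

```python
# Minimal usage sketch, assuming the checkpoint carries a masked-LM head.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="cxfajar197/bert-base-multilingual-cased-finetuned-imdb",
)

# bert-base-multilingual-cased uses [MASK] as its mask token.
for pred in fill_mask("This movie is a great [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```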

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch reproducing them follows the list):

- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
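As a rough guide, the logged values map onto `TrainingArguments` as below. This is a hedged sketch, not the author's actual training script; it assumes the batch sizes were per-device values on a single GPU, and omits the Adam betas/epsilon since they match the library defaults.

```python
# Sketch of the logged hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-multilingual-cased-finetuned-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=3,   # assumes single-GPU training
    per_device_eval_batch_size=3,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,                       # "Native AMP" mixed precision; requires a GPU
)
```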

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6209        | 1.0   | 40   | 3.2825          |
| 3.304         | 2.0   | 80   | 3.2269          |
| 3.1823        | 3.0   | 120  | 3.2295          |
| 2.9992        | 4.0   | 160  | 3.0505          |
| 2.8104        | 5.0   | 200  | 3.0104          |
| 2.7694        | 6.0   | 240  | 2.7972          |
| 2.7062        | 7.0   | 280  | 3.0067          |
| 2.6009        | 8.0   | 320  | 2.9257          |
| 2.5196        | 9.0   | 360  | 2.9466          |
| 2.3491        | 10.0  | 400  | 2.7385          |
| 2.3723        | 11.0  | 440  | 3.0088          |
| 2.3695        | 12.0  | 480  | 2.8984          |
| 2.345         | 13.0  | 520  | 2.7797          |
| 2.3329        | 14.0  | 560  | 2.9869          |
| 2.1931        | 15.0  | 600  | 2.8973          |
| 2.2524        | 16.0  | 640  | 2.8924          |
| 2.0459        | 17.0  | 680  | 3.0102          |
| 2.2278        | 18.0  | 720  | 3.0366          |
| 2.2139        | 19.0  | 760  | 2.8309          |
| 2.0046        | 20.0  | 800  | 2.8795          |
| 2.0374        | 21.0  | 840  | 2.7697          |
| 1.9617        | 22.0  | 880  | 2.9775          |
| 1.8399        | 23.0  | 920  | 2.8696          |
| 1.8376        | 24.0  | 960  | 3.0066          |
| 1.9448        | 25.0  | 1000 | 2.9601          |
| 1.7987        | 26.0  | 1040 | 2.9508          |
| 1.831         | 27.0  | 1080 | 2.9520          |
| 1.7009        | 28.0  | 1120 | 2.9750          |
| 1.6221        | 29.0  | 1160 | 3.0905          |
| 1.6337        | 30.0  | 1200 | 3.1441          |
| 1.7186        | 31.0  | 1240 | 3.0821          |
| 1.5834        | 32.0  | 1280 | 3.3630          |
| 1.5151        | 33.0  | 1320 | 3.2569          |
| 1.5723        | 34.0  | 1360 | 3.2099          |
| 1.5643        | 35.0  | 1400 | 3.0080          |
| 1.4931        | 36.0  | 1440 | 3.1520          |
| 1.538         | 37.0  | 1480 | 3.1593          |
| 1.5864        | 38.0  | 1520 | 3.2731          |
| 1.5271        | 39.0  | 1560 | 3.0882          |
| 1.405         | 40.0  | 1600 | 3.3693          |
| 1.3284        | 41.0  | 1640 | 3.1340          |
| 1.3766        | 42.0  | 1680 | 3.1998          |
| 1.3215        | 43.0  | 1720 | 3.1084          |
| 1.3579        | 44.0  | 1760 | 3.0917          |
| 1.43          | 45.0  | 1800 | 3.1623          |
| 1.2997        | 46.0  | 1840 | 3.2587          |
| 1.3185        | 47.0  | 1880 | 3.0671          |
| 1.2956        | 48.0  | 1920 | 3.2483          |
| 1.2278        | 49.0  | 1960 | 3.2175          |
| 1.2723        | 50.0  | 2000 | 3.1614          |
| 1.1428        | 51.0  | 2040 | 3.2662          |
| 1.2459        | 52.0  | 2080 | 3.3432          |
| 1.1621        | 53.0  | 2120 | 3.3091          |
| 1.1364        | 54.0  | 2160 | 3.4556          |
| 1.102         | 55.0  | 2200 | 3.3365          |
| 1.0964        | 56.0  | 2240 | 3.4058          |
| 1.1007        | 57.0  | 2280 | 3.3896          |
| 1.1003        | 58.0  | 2320 | 3.0685          |
| 1.1128        | 59.0  | 2360 | 3.1233          |
| 1.1114        | 60.0  | 2400 | 3.1524          |
| 1.088         | 61.0  | 2440 | 3.3241          |
| 1.076         | 62.0  | 2480 | 3.5168          |
| 1.1122        | 63.0  | 2520 | 3.2393          |
| 1.029         | 64.0  | 2560 | 3.3773          |
| 1.0952        | 65.0  | 2600 | 3.2705          |
| 0.9942        | 66.0  | 2640 | 3.2689          |
| 1.0054        | 67.0  | 2680 | 3.3907          |
| 1.0139        | 68.0  | 2720 | 3.4308          |
| 0.9643        | 69.0  | 2760 | 3.5373          |
| 0.9737        | 70.0  | 2800 | 3.3864          |
| 0.9087        | 71.0  | 2840 | 3.6037          |
| 0.9308        | 72.0  | 2880 | 3.7698          |
| 0.9949        | 73.0  | 2920 | 3.4596          |
| 0.9078        | 74.0  | 2960 | 3.5413          |
| 0.9089        | 75.0  | 3000 | 3.7149          |
| 0.9314        | 76.0  | 3040 | 3.3362          |
| 0.9288        | 77.0  | 3080 | 3.7032          |
| 0.8674        | 78.0  | 3120 | 3.3639          |
| 0.8499        | 79.0  | 3160 | 3.4339          |
| 0.9049        | 80.0  | 3200 | 3.3585          |
| 0.8497        | 81.0  | 3240 | 3.5258          |
| 0.8577        | 82.0  | 3280 | 3.7556          |
| 0.8967        | 83.0  | 3320 | 3.4828          |
| 0.846         | 84.0  | 3360 | 3.3458          |
| 0.8555        | 85.0  | 3400 | 3.2745          |
| 0.8741        | 86.0  | 3440 | 3.5193          |
| 0.8231        | 87.0  | 3480 | 3.6893          |
| 0.8077        | 88.0  | 3520 | 3.5333          |
| 0.9083        | 89.0  | 3560 | 3.5129          |
| 0.8597        | 90.0  | 3600 | 3.6353          |
| 0.8447        | 91.0  | 3640 | 3.3729          |
| 0.8125        | 92.0  | 3680 | 3.4102          |
| 0.8441        | 93.0  | 3720 | 3.7446          |
| 0.8151        | 94.0  | 3760 | 3.4279          |
| 0.8227        | 95.0  | 3800 | 3.5437          |
| 0.828         | 96.0  | 3840 | 3.4946          |
| 0.802         | 97.0  | 3880 | 3.3135          |
| 0.795         | 98.0  | 3920 | 3.1624          |
| 0.8438        | 99.0  | 3960 | 3.4064          |
| 0.7173        | 100.0 | 4000 | 3.5357          |
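The validation loss bottoms out at 2.7385 at epoch 10 and trends upward afterwards while the training loss keeps falling, a classic overfitting pattern; the reported evaluation loss of 3.5331 comes from the end of training, not from the best epoch. If retraining, a hypothetical configuration like the one below would keep the best checkpoint instead of the last:

```python
# Hypothetical sketch: retain the best checkpoint by validation loss
# and stop early once it stops improving.
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="bert-base-multilingual-cased-finetuned-imdb",
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
# Then pass callbacks=[EarlyStoppingCallback(early_stopping_patience=5)]
# to the Trainer constructor.
```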

### Framework versions

- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
