bert-base-greek-uncased-v1-finetuned-imdb

This model is a fine-tuned version of nlpaueb/bert-base-greek-uncased-v1 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.3617
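Assuming this loss is the usual mean cross-entropy (in nats) reported by the Transformers `Trainer` for masked-language-model fine-tuning, it corresponds to a perplexity of roughly exp(3.3617) ≈ 28.8. A minimal sketch of that conversion:

```python
import math

# Evaluation loss reported on this card.
eval_loss = 3.3617

# Assuming mean cross-entropy in nats (standard for Trainer-based
# masked-LM fine-tuning), perplexity is exp(loss).
perplexity = math.exp(eval_loss)
print(f"Perplexity: {perplexity:.2f}")  # roughly 28.8
```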

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
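With the `linear` scheduler and no warmup steps listed, the learning rate presumably decays linearly from 2e-05 at step 0 to 0 at the final step. The training-results table shows 45 optimizer steps per epoch, so 50 epochs give 2250 total steps. A sketch of that schedule (an assumption about the scheduler configuration, not code from this repository):

```python
def linear_lr(step, base_lr=2e-05, total_steps=2250):
    """Linear decay with no warmup: the learning rate falls from
    base_lr at step 0 to 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# 45 optimizer steps per epoch x 50 epochs = 2250 total steps.
print(linear_lr(0))     # base rate at the start
print(linear_lr(1125))  # half the base rate midway (end of epoch 25)
print(linear_lr(2250))  # 0.0 at the end of training
```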

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0877 | 1.0 | 45 | 2.9871 |
| 1.2665 | 2.0 | 90 | 2.9228 |
| 1.9122 | 3.0 | 135 | 3.1228 |
| 2.2564 | 4.0 | 180 | 1.6066 |
| 1.9132 | 5.0 | 225 | 2.6351 |
| 1.9952 | 6.0 | 270 | 2.2649 |
| 1.7895 | 7.0 | 315 | 2.3376 |
| 2.0415 | 8.0 | 360 | 1.9894 |
| 1.8113 | 9.0 | 405 | 2.2998 |
| 1.6944 | 10.0 | 450 | 2.1420 |
| 1.7862 | 11.0 | 495 | 2.7167 |
| 1.5657 | 12.0 | 540 | 2.5103 |
| 1.4576 | 13.0 | 585 | 2.0238 |
| 1.3369 | 14.0 | 630 | 2.5880 |
| 1.3598 | 15.0 | 675 | 1.8161 |
| 1.3407 | 16.0 | 720 | 2.4031 |
| 1.3805 | 17.0 | 765 | 2.2539 |
| 1.176 | 18.0 | 810 | 3.2901 |
| 1.1152 | 19.0 | 855 | 2.3024 |
| 1.0629 | 20.0 | 900 | 2.0823 |
| 1.1972 | 21.0 | 945 | 2.9957 |
| 1.1317 | 22.0 | 990 | 2.5360 |
| 1.0396 | 23.0 | 1035 | 1.6268 |
| 0.8686 | 24.0 | 1080 | 3.2657 |
| 1.0526 | 25.0 | 1125 | 3.0398 |
| 0.9023 | 26.0 | 1170 | 2.8197 |
| 0.9539 | 27.0 | 1215 | 3.1922 |
| 0.8699 | 28.0 | 1260 | 1.6943 |
| 0.8669 | 29.0 | 1305 | 2.7801 |
| 0.7893 | 30.0 | 1350 | 2.1385 |
| 0.7462 | 31.0 | 1395 | 2.2881 |
| 0.7627 | 32.0 | 1440 | 3.0789 |
| 0.7536 | 33.0 | 1485 | 2.9320 |
| 0.8317 | 34.0 | 1530 | 3.4081 |
| 0.6749 | 35.0 | 1575 | 2.7531 |
| 0.789 | 36.0 | 1620 | 2.9154 |
| 0.6609 | 37.0 | 1665 | 2.1821 |
| 0.6795 | 38.0 | 1710 | 2.5330 |
| 0.6408 | 39.0 | 1755 | 3.4374 |
| 0.6827 | 40.0 | 1800 | 2.3127 |
| 0.6188 | 41.0 | 1845 | 2.0818 |
| 0.6085 | 42.0 | 1890 | 2.2737 |
| 0.6978 | 43.0 | 1935 | 2.9629 |
| 0.6164 | 44.0 | 1980 | 2.5250 |
| 0.6273 | 45.0 | 2025 | 2.3866 |
| 0.7064 | 46.0 | 2070 | 2.0937 |
| 0.6561 | 47.0 | 2115 | 2.4984 |
| 0.7341 | 48.0 | 2160 | 3.1911 |
| 0.6271 | 49.0 | 2205 | 2.2692 |
| 0.6757 | 50.0 | 2250 | 2.2642 |
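The validation loss above is noisy and does not improve monotonically; it bottoms out very early, at epoch 4, and the later epochs mostly oscillate between roughly 2 and 3.4. A quick sketch that scans the per-epoch validation losses (copied from the table) to find the best checkpoint:

```python
# Validation losses per epoch (epochs 1-50), copied from the table above.
val_losses = [
    2.9871, 2.9228, 3.1228, 1.6066, 2.6351, 2.2649, 2.3376, 1.9894,
    2.2998, 2.1420, 2.7167, 2.5103, 2.0238, 2.5880, 1.8161, 2.4031,
    2.2539, 3.2901, 2.3024, 2.0823, 2.9957, 2.5360, 1.6268, 3.2657,
    3.0398, 2.8197, 3.1922, 1.6943, 2.7801, 2.1385, 2.2881, 3.0789,
    2.9320, 3.4081, 2.7531, 2.9154, 2.1821, 2.5330, 3.4374, 2.3127,
    2.0818, 2.2737, 2.9629, 2.5250, 2.3866, 2.0937, 2.4984, 3.1911,
    2.2692, 2.2642,
]

# Epoch with the lowest validation loss (epochs are 1-indexed).
best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__) + 1
print(best_epoch, val_losses[best_epoch - 1])  # → 4 1.6066
```

If early stopping or `load_best_model_at_end` had been used, the epoch-4 checkpoint (validation loss 1.6066) would presumably have been kept rather than the epoch-50 one.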

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3