
# arabert_baseline_relevance_task2_fold1

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.1509
- Qwk: 0.1747
- Mse: 0.1416
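
The card does not yet include usage instructions. As a minimal sketch, the checkpoint can be loaded with the standard `transformers` auto classes; this assumes it was saved as a sequence-classification head with a single regression output (consistent with the MSE/QWK metrics above), which should be verified against the actual model config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: single-output regression head for relevance scoring.
model_id = "salbatarni/arabert_baseline_relevance_task2_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # an Arabic answer/passage to score for relevance
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```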

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
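
As a rough sketch, these settings map onto Hugging Face `TrainingArguments` as follows; the `output_dir` is a placeholder and not taken from the original run:

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="arabert_baseline_relevance_task2_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```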

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.3333 | 2    | 0.7029          | -0.0252 | 0.7093 |
| No log        | 0.6667 | 4    | 0.1462          | -0.1951 | 0.1344 |
| No log        | 1.0    | 6    | 0.2440          | 0.0     | 0.2429 |
| No log        | 1.3333 | 8    | 0.1359          | 0.0     | 0.1325 |
| No log        | 1.6667 | 10   | 0.1162          | 0.0219  | 0.1100 |
| No log        | 2.0    | 12   | 0.1085          | 0.0345  | 0.1016 |
| No log        | 2.3333 | 14   | 0.1197          | 0.0     | 0.1139 |
| No log        | 2.6667 | 16   | 0.1465          | 0.0     | 0.1427 |
| No log        | 3.0    | 18   | 0.1499          | 0.0     | 0.1462 |
| No log        | 3.3333 | 20   | 0.1428          | 0.0     | 0.1389 |
| No log        | 3.6667 | 22   | 0.1187          | 0.0483  | 0.1125 |
| No log        | 4.0    | 24   | 0.1323          | 0.1217  | 0.1220 |
| No log        | 4.3333 | 26   | 0.1627          | -0.1667 | 0.1498 |
| No log        | 4.6667 | 28   | 0.1511          | 0.2075  | 0.1392 |
| No log        | 5.0    | 30   | 0.1317          | 0.1217  | 0.1222 |
| No log        | 5.3333 | 32   | 0.1331          | 0.0637  | 0.1256 |
| No log        | 5.6667 | 34   | 0.1407          | 0.0105  | 0.1341 |
| No log        | 6.0    | 36   | 0.1544          | 0.0105  | 0.1484 |
| No log        | 6.3333 | 38   | 0.1639          | 0.0219  | 0.1582 |
| No log        | 6.6667 | 40   | 0.1595          | 0.0483  | 0.1529 |
| No log        | 7.0    | 42   | 0.1480          | 0.0808  | 0.1402 |
| No log        | 7.3333 | 44   | 0.1440          | 0.1000  | 0.1358 |
| No log        | 7.6667 | 46   | 0.1462          | 0.1000  | 0.1382 |
| No log        | 8.0    | 48   | 0.1490          | 0.1747  | 0.1404 |
| No log        | 8.3333 | 50   | 0.1535          | 0.1747  | 0.1441 |
| No log        | 8.6667 | 52   | 0.1570          | 0.0094  | 0.1473 |
| No log        | 9.0    | 54   | 0.1553          | 0.0094  | 0.1456 |
| No log        | 9.3333 | 56   | 0.1529          | 0.1747  | 0.1436 |
| No log        | 9.6667 | 58   | 0.1515          | 0.1747  | 0.1422 |
| No log        | 10.0   | 60   | 0.1509          | 0.1747  | 0.1416 |
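
For reference, a `compute_metrics` function producing the Qwk and Mse columns could be sketched as below. This assumes Qwk is quadratic weighted Cohen's kappa over rounded predictions and Mse is plain mean squared error; the actual evaluation code for this run is not included in the card, so treat this as an illustration only:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    # Assumption: regression outputs are rounded to the nearest label
    # before computing quadratic weighted kappa.
    predictions, labels = eval_pred
    predictions = predictions.squeeze()
    qwk = cohen_kappa_score(
        np.round(labels).astype(int),
        np.round(predictions).astype(int),
        weights="quadratic",
    )
    mse = mean_squared_error(labels, predictions)
    return {"qwk": qwk, "mse": mse}
```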

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1