
arabert_cross_organization_task7_fold6

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a short reproduction sketch follows the list):

  • Loss: 0.6600
  • Qwk (quadratic weighted kappa): 0.5581
  • Mse (mean squared error): 0.6587
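
The snippet below is a minimal sketch of how the Qwk and Mse figures above can be reproduced from predicted and gold scores. It is not the author's evaluation code: it assumes scikit-learn is available and that continuous predictions are rounded to integer labels before computing kappa.

```python
# Minimal sketch: reproducing Qwk (quadratic weighted kappa) and Mse.
# y_true / y_pred are placeholder example values, not data from this model.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([1, 2, 3, 2, 4])            # gold scores (example values)
y_pred = np.array([1.2, 2.4, 2.8, 2.1, 3.6])  # model outputs (example values)

mse = mean_squared_error(y_true, y_pred)
# Kappa is defined over discrete labels, so continuous outputs are rounded first.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")

print(f"Mse: {mse:.4f}, Qwk: {qwk:.4f}")
```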

Model description

More information needed

Intended uses & limitations

More information needed
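
Since the intended use is not documented, the following is only a hedged loading sketch. It assumes the checkpoint exposes a single-output (regression-style) sequence-classification head, which is consistent with the Mse/Qwk metrics above but not confirmed by this card.

```python
# Hedged sketch: loading the checkpoint from the Hub and scoring one Arabic text.
# The single-logit regression head is an assumption, not documented in this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_organization_task7_fold6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```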

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
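
As a minimal sketch only, the hyperparameters above map onto a transformers TrainingArguments roughly as below. The output directory name and the 2-step evaluation interval (read off the results table) are inferred, and the Adam settings listed above match the Trainer defaults, so they are not passed explicitly.

```python
# Sketch of the TrainingArguments implied by the hyperparameter list above.
# Adam with betas=(0.9, 0.999) and eps=1e-8 plus a linear scheduler are the
# Trainer defaults, so only the card's non-default values are spelled out.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="arabert_cross_organization_task7_fold6",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    eval_strategy="steps",
    eval_steps=2,  # matches the 2-step evaluation interval in the results table
)
```

These arguments would then be passed to a Trainer together with the aubmindlab/bert-base-arabertv02 model and the training and evaluation datasets, which are not documented in this card.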

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
| No log | 0.125 | 2 | 2.0463 | 0.0838 | 2.0448 |
| No log | 0.25 | 4 | 1.0746 | 0.1939 | 1.0725 |
| No log | 0.375 | 6 | 1.0643 | 0.3449 | 1.0639 |
| No log | 0.5 | 8 | 0.8689 | 0.5214 | 0.8684 |
| No log | 0.625 | 10 | 0.7524 | 0.3263 | 0.7518 |
| No log | 0.75 | 12 | 0.6423 | 0.3754 | 0.6420 |
| No log | 0.875 | 14 | 0.5644 | 0.5844 | 0.5643 |
| No log | 1.0 | 16 | 0.5224 | 0.6557 | 0.5223 |
| No log | 1.125 | 18 | 0.4855 | 0.6292 | 0.4850 |
| No log | 1.25 | 20 | 0.5779 | 0.5418 | 0.5767 |
| No log | 1.375 | 22 | 0.5208 | 0.6043 | 0.5197 |
| No log | 1.5 | 24 | 0.5175 | 0.7174 | 0.5174 |
| No log | 1.625 | 26 | 0.4998 | 0.7107 | 0.4998 |
| No log | 1.75 | 28 | 0.4818 | 0.6457 | 0.4809 |
| No log | 1.875 | 30 | 0.4990 | 0.6364 | 0.4979 |
| No log | 2.0 | 32 | 0.5085 | 0.6403 | 0.5073 |
| No log | 2.125 | 34 | 0.4978 | 0.6611 | 0.4969 |
| No log | 2.25 | 36 | 0.4811 | 0.6848 | 0.4805 |
| No log | 2.375 | 38 | 0.4675 | 0.6672 | 0.4669 |
| No log | 2.5 | 40 | 0.4889 | 0.6232 | 0.4881 |
| No log | 2.625 | 42 | 0.5071 | 0.6102 | 0.5062 |
| No log | 2.75 | 44 | 0.5162 | 0.6263 | 0.5151 |
| No log | 2.875 | 46 | 0.5184 | 0.6317 | 0.5172 |
| No log | 3.0 | 48 | 0.5229 | 0.6543 | 0.5219 |
| No log | 3.125 | 50 | 0.5389 | 0.6233 | 0.5377 |
| No log | 3.25 | 52 | 0.5879 | 0.5675 | 0.5861 |
| No log | 3.375 | 54 | 0.6183 | 0.5488 | 0.6164 |
| No log | 3.5 | 56 | 0.5578 | 0.5898 | 0.5563 |
| No log | 3.625 | 58 | 0.5612 | 0.6909 | 0.5607 |
| No log | 3.75 | 60 | 0.5964 | 0.7100 | 0.5962 |
| No log | 3.875 | 62 | 0.5615 | 0.6815 | 0.5609 |
| No log | 4.0 | 64 | 0.5730 | 0.5963 | 0.5716 |
| No log | 4.125 | 66 | 0.6867 | 0.5243 | 0.6849 |
| No log | 4.25 | 68 | 0.6700 | 0.5276 | 0.6682 |
| No log | 4.375 | 70 | 0.5889 | 0.5659 | 0.5873 |
| No log | 4.5 | 72 | 0.5446 | 0.6149 | 0.5434 |
| No log | 4.625 | 74 | 0.5556 | 0.6355 | 0.5547 |
| No log | 4.75 | 76 | 0.5886 | 0.6034 | 0.5871 |
| No log | 4.875 | 78 | 0.6730 | 0.5568 | 0.6709 |
| No log | 5.0 | 80 | 0.6892 | 0.5344 | 0.6871 |
| No log | 5.125 | 82 | 0.6046 | 0.5665 | 0.6029 |
| No log | 5.25 | 84 | 0.5605 | 0.6134 | 0.5591 |
| No log | 5.375 | 86 | 0.5415 | 0.6417 | 0.5404 |
| No log | 5.5 | 88 | 0.5515 | 0.6247 | 0.5504 |
| No log | 5.625 | 90 | 0.5964 | 0.5762 | 0.5948 |
| No log | 5.75 | 92 | 0.6466 | 0.5489 | 0.6449 |
| No log | 5.875 | 94 | 0.6325 | 0.5648 | 0.6310 |
| No log | 6.0 | 96 | 0.6036 | 0.6097 | 0.6022 |
| No log | 6.125 | 98 | 0.5955 | 0.6483 | 0.5944 |
| No log | 6.25 | 100 | 0.6017 | 0.6168 | 0.6005 |
| No log | 6.375 | 102 | 0.6349 | 0.5846 | 0.6335 |
| No log | 6.5 | 104 | 0.6941 | 0.5277 | 0.6925 |
| No log | 6.625 | 106 | 0.6740 | 0.5262 | 0.6724 |
| No log | 6.75 | 108 | 0.6043 | 0.5829 | 0.6030 |
| No log | 6.875 | 110 | 0.5813 | 0.6039 | 0.5802 |
| No log | 7.0 | 112 | 0.5847 | 0.6056 | 0.5836 |
| No log | 7.125 | 114 | 0.6031 | 0.5987 | 0.6019 |
| No log | 7.25 | 116 | 0.6490 | 0.5645 | 0.6474 |
| No log | 7.375 | 118 | 0.6772 | 0.5326 | 0.6756 |
| No log | 7.5 | 120 | 0.6849 | 0.5311 | 0.6833 |
| No log | 7.625 | 122 | 0.6620 | 0.5393 | 0.6606 |
| No log | 7.75 | 124 | 0.6230 | 0.5696 | 0.6217 |
| No log | 7.875 | 126 | 0.5912 | 0.5983 | 0.5901 |
| No log | 8.0 | 128 | 0.5924 | 0.5983 | 0.5913 |
| No log | 8.125 | 130 | 0.6124 | 0.5864 | 0.6112 |
| No log | 8.25 | 132 | 0.6364 | 0.5615 | 0.6351 |
| No log | 8.375 | 134 | 0.6650 | 0.5476 | 0.6635 |
| No log | 8.5 | 136 | 0.6693 | 0.5397 | 0.6678 |
| No log | 8.625 | 138 | 0.6639 | 0.5516 | 0.6624 |
| No log | 8.75 | 140 | 0.6658 | 0.5467 | 0.6643 |
| No log | 8.875 | 142 | 0.6772 | 0.5437 | 0.6757 |
| No log | 9.0 | 144 | 0.6778 | 0.5489 | 0.6763 |
| No log | 9.125 | 146 | 0.6641 | 0.5504 | 0.6627 |
| No log | 9.25 | 148 | 0.6614 | 0.5557 | 0.6600 |
| No log | 9.375 | 150 | 0.6564 | 0.5609 | 0.6551 |
| No log | 9.5 | 152 | 0.6530 | 0.5618 | 0.6517 |
| No log | 9.625 | 154 | 0.6533 | 0.5618 | 0.6520 |
| No log | 9.75 | 156 | 0.6546 | 0.5581 | 0.6533 |
| No log | 9.875 | 158 | 0.6579 | 0.5581 | 0.6566 |
| No log | 10.0 | 160 | 0.6600 | 0.5581 | 0.6587 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1