
# arabert_cross_relevance_task2_fold6

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the metrics):

- Loss: 0.5925
- Qwk (Quadratic Weighted Kappa): 0.1247
- Mse (Mean Squared Error): 0.5913
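
The card does not document how to use the model, but the Mse/Qwk metrics suggest a single-output regression head for relevance scoring. Below is a minimal loading sketch under that assumption; the model id comes from this repository, and the input text is a placeholder:

```python
# Hedged loading sketch: assumes a single-logit regression head
# (the card itself does not specify the task head or label setup).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_relevance_task2_fold6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "نص عربي للتجربة"  # placeholder Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.squeeze().item())  # relevance score, assuming one output logit
```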

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
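
As a sketch only, the values above map onto `transformers.TrainingArguments` roughly as follows; the output path and the rest of the Trainer wiring are hypothetical, not from the card:

```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
# Only the numeric values come from the card; output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_relevance_task2_fold6",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the defaults of
    # adam_beta1 / adam_beta2 / adam_epsilon, so no explicit override is needed.
)
```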

### Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log        | 0.125 | 2    | 1.0484          | 0.0144 | 1.0495 |
| No log        | 0.25  | 4    | 0.4722          | 0.0661 | 0.4712 |
| No log        | 0.375 | 6    | 0.3587          | 0.0337 | 0.3586 |
| No log        | 0.5   | 8    | 0.3104          | 0.0426 | 0.3106 |
| No log        | 0.625 | 10   | 0.3347          | 0.1842 | 0.3346 |
| No log        | 0.75  | 12   | 0.4174          | 0.1571 | 0.4170 |
| No log        | 0.875 | 14   | 0.3844          | 0.2015 | 0.3843 |
| No log        | 1.0   | 16   | 0.2810          | 0.2211 | 0.2812 |
| No log        | 1.125 | 18   | 0.2939          | 0.2330 | 0.2941 |
| No log        | 1.25  | 20   | 0.3046          | 0.2330 | 0.3047 |
| No log        | 1.375 | 22   | 0.2859          | 0.2445 | 0.2860 |
| No log        | 1.5   | 24   | 0.2753          | 0.2406 | 0.2755 |
| No log        | 1.625 | 26   | 0.2512          | 0.2194 | 0.2515 |
| No log        | 1.75  | 28   | 0.2794          | 0.2445 | 0.2795 |
| No log        | 1.875 | 30   | 0.3449          | 0.2103 | 0.3448 |
| No log        | 2.0   | 32   | 0.3052          | 0.2252 | 0.3052 |
| No log        | 2.125 | 34   | 0.2622          | 0.2406 | 0.2624 |
| No log        | 2.25  | 36   | 0.2752          | 0.2445 | 0.2754 |
| No log        | 2.375 | 38   | 0.3529          | 0.2096 | 0.3530 |
| No log        | 2.5   | 40   | 0.3863          | 0.2053 | 0.3862 |
| No log        | 2.625 | 42   | 0.3238          | 0.2239 | 0.3238 |
| No log        | 2.75  | 44   | 0.2611          | 0.2325 | 0.2612 |
| No log        | 2.875 | 46   | 0.2541          | 0.2194 | 0.2540 |
| No log        | 3.0   | 48   | 0.2822          | 0.2224 | 0.2820 |
| No log        | 3.125 | 50   | 0.3796          | 0.2141 | 0.3790 |
| No log        | 3.25  | 52   | 0.4184          | 0.1975 | 0.4178 |
| No log        | 3.375 | 54   | 0.3500          | 0.2226 | 0.3497 |
| No log        | 3.5   | 56   | 0.2922          | 0.2252 | 0.2922 |
| No log        | 3.625 | 58   | 0.3018          | 0.2200 | 0.3019 |
| No log        | 3.75  | 60   | 0.3826          | 0.2125 | 0.3825 |
| No log        | 3.875 | 62   | 0.4484          | 0.2078 | 0.4482 |
| No log        | 4.0   | 64   | 0.4422          | 0.1971 | 0.4419 |
| No log        | 4.125 | 66   | 0.4471          | 0.1900 | 0.4468 |
| No log        | 4.25  | 68   | 0.3883          | 0.2089 | 0.3881 |
| No log        | 4.375 | 70   | 0.3712          | 0.2053 | 0.3709 |
| No log        | 4.5   | 72   | 0.4054          | 0.2125 | 0.4050 |
| No log        | 4.625 | 74   | 0.4224          | 0.1935 | 0.4219 |
| No log        | 4.75  | 76   | 0.4248          | 0.2008 | 0.4243 |
| No log        | 4.875 | 78   | 0.4469          | 0.1830 | 0.4463 |
| No log        | 5.0   | 80   | 0.5131          | 0.1339 | 0.5123 |
| No log        | 5.125 | 82   | 0.5222          | 0.1283 | 0.5214 |
| No log        | 5.25  | 84   | 0.5110          | 0.1302 | 0.5102 |
| No log        | 5.375 | 86   | 0.4800          | 0.1441 | 0.4793 |
| No log        | 5.5   | 88   | 0.4837          | 0.1503 | 0.4829 |
| No log        | 5.625 | 90   | 0.5706          | 0.1391 | 0.5695 |
| No log        | 5.75  | 92   | 0.6503          | 0.0885 | 0.6490 |
| No log        | 5.875 | 94   | 0.6117          | 0.1127 | 0.6104 |
| No log        | 6.0   | 96   | 0.5654          | 0.1391 | 0.5642 |
| No log        | 6.125 | 98   | 0.5250          | 0.1527 | 0.5240 |
| No log        | 6.25  | 100  | 0.5182          | 0.1469 | 0.5172 |
| No log        | 6.375 | 102  | 0.5641          | 0.1230 | 0.5630 |
| No log        | 6.5   | 104  | 0.6136          | 0.0980 | 0.6124 |
| No log        | 6.625 | 106  | 0.5738          | 0.1178 | 0.5727 |
| No log        | 6.75  | 108  | 0.5142          | 0.1455 | 0.5133 |
| No log        | 6.875 | 110  | 0.5024          | 0.1441 | 0.5015 |
| No log        | 7.0   | 112  | 0.5299          | 0.1360 | 0.5289 |
| No log        | 7.125 | 114  | 0.5277          | 0.1322 | 0.5267 |
| No log        | 7.25  | 116  | 0.5830          | 0.1356 | 0.5818 |
| No log        | 7.375 | 118  | 0.6347          | 0.1213 | 0.6334 |
| No log        | 7.5   | 120  | 0.6357          | 0.1162 | 0.6344 |
| No log        | 7.625 | 122  | 0.5716          | 0.1320 | 0.5704 |
| No log        | 7.75  | 124  | 0.5196          | 0.1381 | 0.5186 |
| No log        | 7.875 | 126  | 0.4886          | 0.1528 | 0.4877 |
| No log        | 8.0   | 128  | 0.4936          | 0.1503 | 0.4927 |
| No log        | 8.125 | 130  | 0.5221          | 0.1576 | 0.5212 |
| No log        | 8.25  | 132  | 0.5852          | 0.1247 | 0.5841 |
| No log        | 8.375 | 134  | 0.6353          | 0.0968 | 0.6341 |
| No log        | 8.5   | 136  | 0.6372          | 0.0968 | 0.6360 |
| No log        | 8.625 | 138  | 0.6395          | 0.0968 | 0.6383 |
| No log        | 8.75  | 140  | 0.6416          | 0.0968 | 0.6403 |
| No log        | 8.875 | 142  | 0.6267          | 0.1112 | 0.6254 |
| No log        | 9.0   | 144  | 0.5882          | 0.1211 | 0.5871 |
| No log        | 9.125 | 146  | 0.5690          | 0.1320 | 0.5679 |
| No log        | 9.25  | 148  | 0.5639          | 0.1376 | 0.5629 |
| No log        | 9.375 | 150  | 0.5739          | 0.1265 | 0.5728 |
| No log        | 9.5   | 152  | 0.5772          | 0.1265 | 0.5761 |
| No log        | 9.625 | 154  | 0.5840          | 0.1247 | 0.5828 |
| No log        | 9.75  | 156  | 0.5884          | 0.1247 | 0.5873 |
| No log        | 9.875 | 158  | 0.5912          | 0.1247 | 0.5900 |
| No log        | 10.0  | 160  | 0.5925          | 0.1247 | 0.5913 |
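
The card does not include the evaluation code, but the Qwk and Mse columns are presumably computed along the lines of scikit-learn's `cohen_kappa_score` (quadratic weights require integer labels, hence the rounding) and `mean_squared_error`. A sketch with hypothetical data:

```python
# Hedged sketch of the Qwk/Mse computation; all data here is hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 1, 0])            # hypothetical gold labels
y_pred = np.array([0.2, 0.9, 1.6, 1.1, 0.4])  # hypothetical model outputs

mse = mean_squared_error(y_true, y_pred)
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int),
                        weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```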

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
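
A quick way to confirm that a local environment matches these versions (a convenience sketch, not part of the original card):

```python
# Check installed versions against the ones listed above.
import transformers, torch, datasets, tokenizers

expected = {"transformers": "4.44.0", "torch": "2.4.0",
            "datasets": "2.21.0", "tokenizers": "0.19.1"}
installed = {"transformers": transformers.__version__,
             "torch": torch.__version__,
             "datasets": datasets.__version__,
             "tokenizers": tokenizers.__version__}
for name, want in expected.items():
    have = installed[name]
    flag = "OK" if have.startswith(want) else "differs"
    print(f"{name}: installed {have}, card lists {want} ({flag})")
```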
