arabert_cross_relevance_task6_fold4

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2650
  • Qwk: 0.3603
  • Mse: 0.2650
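
Because the reported loss is identical to the MSE, the checkpoint most likely carries a single-output regression head, though the card does not confirm this. Below is a minimal inference sketch, assuming the standard Transformers sequence-classification loading path; the example sentence and the interpretation of the output are placeholders, not documented behavior:

```python
# Minimal sketch, not an official usage example. Assumes the checkpoint loads
# with AutoModelForSequenceClassification; the head size and label mapping are
# not documented in this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_relevance_task6_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Any Arabic input works here; this string is a placeholder.
inputs = tokenizer("هذا مثال للتقييم", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # likely a single regression score per input, given the MSE loss
```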

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
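
These values map directly onto transformers.TrainingArguments. A sketch of that mapping, where the output directory and every option not listed above are assumptions:

```python
# Hedged reconstruction of the training configuration; only the values listed
# above come from the card, everything else (e.g. output_dir) is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_relevance_task6_fold4",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```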

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log        | 0.125 | 2    | 0.5208          | 0.1951 | 0.5208 |
| No log        | 0.25  | 4    | 0.5167          | 0.1743 | 0.5167 |
| No log        | 0.375 | 6    | 0.4163          | 0.2124 | 0.4163 |
| No log        | 0.5   | 8    | 0.3864          | 0.2931 | 0.3864 |
| No log        | 0.625 | 10   | 0.3687          | 0.3169 | 0.3687 |
| No log        | 0.75  | 12   | 0.3018          | 0.3399 | 0.3018 |
| No log        | 0.875 | 14   | 0.3682          | 0.2424 | 0.3682 |
| No log        | 1.0   | 16   | 0.4150          | 0.2379 | 0.4150 |
| No log        | 1.125 | 18   | 0.3832          | 0.2446 | 0.3832 |
| No log        | 1.25  | 20   | 0.3293          | 0.3301 | 0.3293 |
| No log        | 1.375 | 22   | 0.2923          | 0.4174 | 0.2923 |
| No log        | 1.5   | 24   | 0.2780          | 0.3858 | 0.2780 |
| No log        | 1.625 | 26   | 0.2804          | 0.3481 | 0.2804 |
| No log        | 1.75  | 28   | 0.2982          | 0.3235 | 0.2982 |
| No log        | 1.875 | 30   | 0.3255          | 0.3037 | 0.3255 |
| No log        | 2.0   | 32   | 0.3232          | 0.3223 | 0.3232 |
| No log        | 2.125 | 34   | 0.3010          | 0.3265 | 0.3010 |
| No log        | 2.25  | 36   | 0.2972          | 0.3385 | 0.2972 |
| No log        | 2.375 | 38   | 0.2953          | 0.3744 | 0.2953 |
| No log        | 2.5   | 40   | 0.2914          | 0.3362 | 0.2914 |
| No log        | 2.625 | 42   | 0.2775          | 0.3356 | 0.2775 |
| No log        | 2.75  | 44   | 0.2717          | 0.3488 | 0.2717 |
| No log        | 2.875 | 46   | 0.2682          | 0.3516 | 0.2682 |
| No log        | 3.0   | 48   | 0.2717          | 0.3659 | 0.2717 |
| No log        | 3.125 | 50   | 0.2762          | 0.4330 | 0.2762 |
| No log        | 3.25  | 52   | 0.2974          | 0.4860 | 0.2974 |
| No log        | 3.375 | 54   | 0.3155          | 0.4427 | 0.3155 |
| No log        | 3.5   | 56   | 0.2887          | 0.3242 | 0.2887 |
| No log        | 3.625 | 58   | 0.2903          | 0.2974 | 0.2903 |
| No log        | 3.75  | 60   | 0.2966          | 0.3011 | 0.2966 |
| No log        | 3.875 | 62   | 0.3108          | 0.3189 | 0.3108 |
| No log        | 4.0   | 64   | 0.3002          | 0.3729 | 0.3002 |
| No log        | 4.125 | 66   | 0.3025          | 0.4184 | 0.3025 |
| No log        | 4.25  | 68   | 0.2768          | 0.4398 | 0.2768 |
| No log        | 4.375 | 70   | 0.2840          | 0.4468 | 0.2840 |
| No log        | 4.5   | 72   | 0.2805          | 0.4330 | 0.2805 |
| No log        | 4.625 | 74   | 0.2739          | 0.3745 | 0.2739 |
| No log        | 4.75  | 76   | 0.2753          | 0.3395 | 0.2753 |
| No log        | 4.875 | 78   | 0.2722          | 0.3153 | 0.2722 |
| No log        | 5.0   | 80   | 0.2714          | 0.3164 | 0.2714 |
| No log        | 5.125 | 82   | 0.2809          | 0.3280 | 0.2809 |
| No log        | 5.25  | 84   | 0.2844          | 0.3541 | 0.2844 |
| No log        | 5.375 | 86   | 0.2818          | 0.3541 | 0.2818 |
| No log        | 5.5   | 88   | 0.2663          | 0.3932 | 0.2663 |
| No log        | 5.625 | 90   | 0.2624          | 0.3872 | 0.2624 |
| No log        | 5.75  | 92   | 0.2639          | 0.3941 | 0.2639 |
| No log        | 5.875 | 94   | 0.2817          | 0.3331 | 0.2817 |
| No log        | 6.0   | 96   | 0.3073          | 0.3225 | 0.3073 |
| No log        | 6.125 | 98   | 0.3094          | 0.3274 | 0.3094 |
| No log        | 6.25  | 100  | 0.2931          | 0.3261 | 0.2931 |
| No log        | 6.375 | 102  | 0.2777          | 0.3289 | 0.2777 |
| No log        | 6.5   | 104  | 0.2693          | 0.3611 | 0.2693 |
| No log        | 6.625 | 106  | 0.2685          | 0.3510 | 0.2685 |
| No log        | 6.75  | 108  | 0.2718          | 0.3630 | 0.2718 |
| No log        | 6.875 | 110  | 0.2628          | 0.3742 | 0.2628 |
| No log        | 7.0   | 112  | 0.2633          | 0.3688 | 0.2633 |
| No log        | 7.125 | 114  | 0.2771          | 0.3635 | 0.2771 |
| No log        | 7.25  | 116  | 0.2913          | 0.3936 | 0.2913 |
| No log        | 7.375 | 118  | 0.2849          | 0.4111 | 0.2849 |
| No log        | 7.5   | 120  | 0.2790          | 0.4053 | 0.2790 |
| No log        | 7.625 | 122  | 0.2834          | 0.4053 | 0.2834 |
| No log        | 7.75  | 124  | 0.2822          | 0.3877 | 0.2822 |
| No log        | 7.875 | 126  | 0.2858          | 0.3818 | 0.2858 |
| No log        | 8.0   | 128  | 0.2754          | 0.3690 | 0.2754 |
| No log        | 8.125 | 130  | 0.2679          | 0.3880 | 0.2679 |
| No log        | 8.25  | 132  | 0.2657          | 0.4144 | 0.2657 |
| No log        | 8.375 | 134  | 0.2684          | 0.4198 | 0.2684 |
| No log        | 8.5   | 136  | 0.2677          | 0.4184 | 0.2677 |
| No log        | 8.625 | 138  | 0.2659          | 0.4077 | 0.2659 |
| No log        | 8.75  | 140  | 0.2673          | 0.3853 | 0.2673 |
| No log        | 8.875 | 142  | 0.2664          | 0.3686 | 0.2664 |
| No log        | 9.0   | 144  | 0.2685          | 0.3582 | 0.2685 |
| No log        | 9.125 | 146  | 0.2677          | 0.3582 | 0.2677 |
| No log        | 9.25  | 148  | 0.2671          | 0.3582 | 0.2671 |
| No log        | 9.375 | 150  | 0.2663          | 0.3582 | 0.2663 |
| No log        | 9.5   | 152  | 0.2659          | 0.3526 | 0.2659 |
| No log        | 9.625 | 154  | 0.2647          | 0.3697 | 0.2647 |
| No log        | 9.75  | 156  | 0.2651          | 0.3705 | 0.2651 |
| No log        | 9.875 | 158  | 0.2648          | 0.3705 | 0.2648 |
| No log        | 10.0  | 160  | 0.2650          | 0.3603 | 0.2650 |
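
The "No log" entries in the training-loss column likely mean the Trainer never reached a logging step: the run is only 160 optimizer steps, below the default logging interval. The card also does not define its metric abbreviations, but Qwk is presumably quadratic weighted kappa (Cohen's kappa with quadratic weights) and Mse mean squared error, which matches the reported loss. A hedged sketch of how such metrics could be computed; the rounding used to discretize continuous predictions for kappa is an assumption:

```python
# Sketch of the presumed evaluation metrics; the rint-based discretization for
# kappa is an assumption, not a documented part of this model's training code.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(preds: np.ndarray, labels: np.ndarray) -> dict:
    mse = mean_squared_error(labels, preds)
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(preds).astype(int),
        weights="quadratic",
    )
    return {"mse": mse, "qwk": qwk}
```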

Framework versions

  • Transformers 4.44.0
  • PyTorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1