
arabert_cross_relevance_task2_fold1

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset (the training data is not documented in this card). It achieves the following results on the evaluation set:

  • Loss: 0.2539
  • Qwk (Quadratic Weighted Kappa): 0.0
  • Mse (Mean Squared Error): 0.2539
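
For reference, a minimal loading and inference sketch is below. The sentence-pair input and the single-output relevance head are assumptions inferred from the MSE/QWK metrics; the card itself does not document the task inputs or the head configuration.

```python
# Minimal inference sketch. The input pair and the single-output regression
# head are assumptions; the card documents neither.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_relevance_task2_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Hypothetical Arabic text pair; replace with the real task inputs.
inputs = tokenizer("نص السؤال", "نص الإجابة", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # a single relevance score if the head is 1-dimensional
```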

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
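
These settings map directly onto the transformers TrainingArguments API. The sketch below is a hedged reconstruction; the regression head and the training/evaluation datasets are assumptions, since the card documents neither.

```python
# Hedged reconstruction of the listed hyperparameters with the HF Trainer.
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02", num_labels=1  # assumed regression head
)

args = TrainingArguments(
    output_dir="arabert_cross_relevance_task2_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

# train_ds / eval_ds would be tokenized datasets; they are not documented here.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```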

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-:|:-:|:-:|:-:|:-:|:-:|
| No log | 0.125 | 2 | 0.8503 | 0.0176 | 0.8493 |
| No log | 0.25 | 4 | 0.3494 | 0.1224 | 0.3496 |
| No log | 0.375 | 6 | 0.5004 | 0.0515 | 0.5009 |
| No log | 0.5 | 8 | 0.3323 | -0.0164 | 0.3328 |
| No log | 0.625 | 10 | 0.3005 | 0.0 | 0.3007 |
| No log | 0.75 | 12 | 0.3118 | 0.0122 | 0.3119 |
| No log | 0.875 | 14 | 0.3876 | 0.0454 | 0.3880 |
| No log | 1.0 | 16 | 0.4447 | -0.0079 | 0.4455 |
| No log | 1.125 | 18 | 0.4798 | -0.1341 | 0.4806 |
| No log | 1.25 | 20 | 0.5158 | -0.0658 | 0.5166 |
| No log | 1.375 | 22 | 0.3941 | -0.0135 | 0.3946 |
| No log | 1.5 | 24 | 0.3295 | -0.0042 | 0.3298 |
| No log | 1.625 | 26 | 0.3244 | 0.0 | 0.3247 |
| No log | 1.75 | 28 | 0.3471 | -0.0164 | 0.3476 |
| No log | 1.875 | 30 | 0.3538 | -0.0042 | 0.3543 |
| No log | 2.0 | 32 | 0.3130 | 0.0 | 0.3134 |
| No log | 2.125 | 34 | 0.2809 | 0.0 | 0.2811 |
| No log | 2.25 | 36 | 0.2793 | 0.0 | 0.2795 |
| No log | 2.375 | 38 | 0.3047 | 0.0 | 0.3050 |
| No log | 2.5 | 40 | 0.3696 | 0.0582 | 0.3702 |
| No log | 2.625 | 42 | 0.3992 | 0.1381 | 0.3999 |
| No log | 2.75 | 44 | 0.3456 | -0.0172 | 0.3461 |
| No log | 2.875 | 46 | 0.2805 | 0.0 | 0.2807 |
| No log | 3.0 | 48 | 0.2675 | 0.0 | 0.2675 |
| No log | 3.125 | 50 | 0.2713 | 0.0 | 0.2712 |
| No log | 3.25 | 52 | 0.2691 | 0.0 | 0.2691 |
| No log | 3.375 | 54 | 0.2726 | 0.0 | 0.2727 |
| No log | 3.5 | 56 | 0.2951 | 0.0 | 0.2954 |
| No log | 3.625 | 58 | 0.3397 | 0.0452 | 0.3402 |
| No log | 3.75 | 60 | 0.3175 | 0.0450 | 0.3180 |
| No log | 3.875 | 62 | 0.2714 | 0.0 | 0.2716 |
| No log | 4.0 | 64 | 0.2574 | 0.0 | 0.2573 |
| No log | 4.125 | 66 | 0.2608 | 0.0 | 0.2606 |
| No log | 4.25 | 68 | 0.2590 | 0.0 | 0.2590 |
| No log | 4.375 | 70 | 0.2813 | 0.0 | 0.2815 |
| No log | 4.5 | 72 | 0.3090 | 0.0 | 0.3094 |
| No log | 4.625 | 74 | 0.3106 | 0.0 | 0.3110 |
| No log | 4.75 | 76 | 0.2951 | 0.0 | 0.2954 |
| No log | 4.875 | 78 | 0.2767 | 0.0 | 0.2769 |
| No log | 5.0 | 80 | 0.2684 | 0.0 | 0.2685 |
| No log | 5.125 | 82 | 0.2825 | 0.0122 | 0.2829 |
| No log | 5.25 | 84 | 0.3266 | 0.0746 | 0.3272 |
| No log | 5.375 | 86 | 0.3486 | 0.0841 | 0.3493 |
| No log | 5.5 | 88 | 0.3299 | 0.0833 | 0.3305 |
| No log | 5.625 | 90 | 0.2905 | 0.0245 | 0.2909 |
| No log | 5.75 | 92 | 0.2675 | 0.0 | 0.2677 |
| No log | 5.875 | 94 | 0.2613 | 0.0 | 0.2614 |
| No log | 6.0 | 96 | 0.2609 | 0.0 | 0.2610 |
| No log | 6.125 | 98 | 0.2647 | 0.0 | 0.2648 |
| No log | 6.25 | 100 | 0.2690 | 0.0 | 0.2692 |
| No log | 6.375 | 102 | 0.2672 | 0.0 | 0.2675 |
| No log | 6.5 | 104 | 0.2632 | 0.0 | 0.2633 |
| No log | 6.625 | 106 | 0.2554 | 0.0 | 0.2554 |
| No log | 6.75 | 108 | 0.2637 | 0.0 | 0.2635 |
| No log | 6.875 | 110 | 0.2670 | -0.0118 | 0.2667 |
| No log | 7.0 | 112 | 0.2544 | 0.0 | 0.2543 |
| No log | 7.125 | 114 | 0.2718 | 0.0245 | 0.2720 |
| No log | 7.25 | 116 | 0.2966 | 0.0161 | 0.2970 |
| No log | 7.375 | 118 | 0.2940 | 0.0161 | 0.2944 |
| No log | 7.5 | 120 | 0.2714 | 0.0245 | 0.2717 |
| No log | 7.625 | 122 | 0.2558 | 0.0 | 0.2559 |
| No log | 7.75 | 124 | 0.2559 | 0.0 | 0.2558 |
| No log | 7.875 | 126 | 0.2567 | 0.0 | 0.2565 |
| No log | 8.0 | 128 | 0.2560 | 0.0 | 0.2559 |
| No log | 8.125 | 130 | 0.2526 | 0.0 | 0.2526 |
| No log | 8.25 | 132 | 0.2523 | 0.0 | 0.2524 |
| No log | 8.375 | 134 | 0.2528 | 0.0 | 0.2528 |
| No log | 8.5 | 136 | 0.2536 | 0.0 | 0.2537 |
| No log | 8.625 | 138 | 0.2528 | 0.0 | 0.2529 |
| No log | 8.75 | 140 | 0.2516 | 0.0 | 0.2516 |
| No log | 8.875 | 142 | 0.2517 | 0.0 | 0.2516 |
| No log | 9.0 | 144 | 0.2521 | 0.0 | 0.2521 |
| No log | 9.125 | 146 | 0.2526 | 0.0 | 0.2526 |
| No log | 9.25 | 148 | 0.2531 | 0.0 | 0.2531 |
| No log | 9.375 | 150 | 0.2534 | 0.0 | 0.2534 |
| No log | 9.5 | 152 | 0.2534 | 0.0 | 0.2534 |
| No log | 9.625 | 154 | 0.2536 | 0.0 | 0.2536 |
| No log | 9.75 | 156 | 0.2537 | 0.0 | 0.2538 |
| No log | 9.875 | 158 | 0.2538 | 0.0 | 0.2539 |
| No log | 10.0 | 160 | 0.2539 | 0.0 | 0.2539 |
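
The Qwk column is quadratic weighted Cohen's kappa; it sits at 0.0 whenever predictions show no agreement with the labels beyond chance. A minimal sketch of the metric with scikit-learn, using made-up labels:

```python
# Illustrative QWK computation; the integer labels below are made up, not
# drawn from this model's evaluation data.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 1, 2, 1, 0]
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```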

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
