---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_cross_organization_task4_fold2
  results: []
---

# arabert_cross_organization_task4_fold2

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 1.3326
- Qwk: -0.0168
- Mse: 1.3108
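
For reference, a minimal inference sketch is below. The repository id (`salbatarni/arabert_cross_organization_task4_fold2`) and the single-output regression head are assumptions inferred from this card (the Mse/Qwk metrics suggest the model predicts a numeric score), not confirmed details.

```python
# Minimal inference sketch. Repo id and single-output regression head
# are assumptions inferred from this card, not confirmed details.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_organization_task4_fold2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # placeholder: an Arabic input text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # scalar score output
print(score)
```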

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent `TrainingArguments` setup follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
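
A minimal sketch of a matching `TrainingArguments` configuration; the output directory is a placeholder, and the Adam settings spell out the optimizer values listed above.

```python
# Sketch of TrainingArguments matching the hyperparameters above.
# output_dir is a placeholder, not taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task4_fold2",  # placeholder path
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam settings as listed above (these are also the Trainer defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```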

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.0282 | 2    | 5.7271          | 0.0134  | 5.7480 |
| No log        | 0.0563 | 4    | 2.8011          | 0.0108  | 2.8128 |
| No log        | 0.0845 | 6    | 1.1853          | 0.0747  | 1.1830 |
| No log        | 0.1127 | 8    | 0.8362          | 0.0389  | 0.8275 |
| No log        | 0.1408 | 10   | 0.8333          | 0.1288  | 0.8213 |
| No log        | 0.1690 | 12   | 0.8688          | 0.1267  | 0.8595 |
| No log        | 0.1972 | 14   | 0.9519          | 0.1054  | 0.9385 |
| No log        | 0.2254 | 16   | 0.9970          | 0.0306  | 0.9822 |
| No log        | 0.2535 | 18   | 1.0292          | 0.0258  | 1.0103 |
| No log        | 0.2817 | 20   | 1.0334          | -0.0628 | 1.0157 |
| No log        | 0.3099 | 22   | 1.0686          | -0.1440 | 1.0532 |
| No log        | 0.3380 | 24   | 1.1200          | 0.0189  | 1.0994 |
| No log        | 0.3662 | 26   | 1.2013          | 0.0103  | 1.1805 |
| No log        | 0.3944 | 28   | 1.2303          | -0.0460 | 1.2082 |
| No log        | 0.4225 | 30   | 1.2347          | -0.0420 | 1.2132 |
| No log        | 0.4507 | 32   | 1.1787          | -0.0368 | 1.1561 |
| No log        | 0.4789 | 34   | 1.1677          | -0.0346 | 1.1449 |
| No log        | 0.5070 | 36   | 1.1550          | 0.0020  | 1.1329 |
| No log        | 0.5352 | 38   | 1.1940          | -0.0866 | 1.1731 |
| No log        | 0.5634 | 40   | 1.1923          | -0.0408 | 1.1735 |
| No log        | 0.5915 | 42   | 1.1642          | -0.0408 | 1.1467 |
| No log        | 0.6197 | 44   | 1.0921          | -0.0468 | 1.0757 |
| No log        | 0.6479 | 46   | 1.0848          | -0.0448 | 1.0687 |
| No log        | 0.6761 | 48   | 1.0785          | -0.0317 | 1.0615 |
| No log        | 0.7042 | 50   | 1.0877          | 0.0101  | 1.0705 |
| No log        | 0.7324 | 52   | 1.1100          | -0.0298 | 1.0927 |
| No log        | 0.7606 | 54   | 1.1436          | -0.0217 | 1.1254 |
| No log        | 0.7887 | 56   | 1.1970          | -0.0235 | 1.1785 |
| No log        | 0.8169 | 58   | 1.2422          | -0.0168 | 1.2238 |
| No log        | 0.8451 | 60   | 1.2565          | -0.0168 | 1.2371 |
| No log        | 0.8732 | 62   | 1.2821          | -0.0168 | 1.2624 |
| No log        | 0.9014 | 64   | 1.3042          | -0.0168 | 1.2840 |
| No log        | 0.9296 | 66   | 1.3287          | -0.0168 | 1.3079 |
| No log        | 0.9577 | 68   | 1.3368          | -0.0168 | 1.3154 |
| No log        | 0.9859 | 70   | 1.3326          | -0.0168 | 1.3108 |
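
Qwk in the table above is quadratic weighted kappa. A minimal sketch of how the Qwk and Mse columns can be computed is below; the gold scores, predictions, and the convention of rounding continuous predictions to integer labels before the kappa are illustrative assumptions, not details from this card.

```python
# Sketch of computing the Qwk (quadratic weighted kappa) and Mse columns.
# The data and the rounding convention are illustrative assumptions.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([3, 1, 2, 4, 2])             # placeholder gold scores
y_pred = np.array([2.6, 1.2, 2.1, 3.4, 2.8])   # placeholder model outputs

mse = mean_squared_error(y_true, y_pred)
# Round continuous predictions to integer labels before the kappa.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```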

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1