# arabert_cross_vocabulary_task3_fold4
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4192
- Qwk: 0.8233
- Mse: 0.4192
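The card does not include a usage snippet. Below is a minimal loading sketch with 🤗 Transformers; it assumes the checkpoint carries a sequence-classification head (the QWK/MSE metrics suggest an ordinal scoring task), which this card does not confirm. The repository id is taken from this card.

```python
# Minimal loading sketch -- NOT the author's documented usage.
# Assumption: the fine-tuned checkpoint has a classification/regression head.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_vocabulary_task3_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص تجريبي", return_tensors="pt")  # "test text" in Arabic
outputs = model(**inputs)
print(outputs.logits)  # shape and meaning depend on the trained head
```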
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
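For reference, the list above maps directly onto `TrainingArguments` in the Trainer API. This is a hedged reconstruction, not the original training script: the output directory is a placeholder, and any option not listed in this card is left at Trainer defaults, which may differ from the original run.

```python
from transformers import TrainingArguments

# Sketch: the hyperparameters from this card expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task3_fold4",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,       # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```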
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0299 | 2    | 4.0265          | 0.0    | 4.0265 |
| No log        | 0.0597 | 4    | 2.4108          | 0.0455 | 2.4108 |
| No log        | 0.0896 | 6    | 1.3064          | 0.0872 | 1.3064 |
| No log        | 0.1194 | 8    | 0.9388          | 0.4130 | 0.9388 |
| No log        | 0.1493 | 10   | 0.9972          | 0.3780 | 0.9972 |
| No log        | 0.1791 | 12   | 1.1007          | 0.5752 | 1.1007 |
| No log        | 0.2090 | 14   | 1.0611          | 0.7027 | 1.0611 |
| No log        | 0.2388 | 16   | 0.6193          | 0.7582 | 0.6193 |
| No log        | 0.2687 | 18   | 0.4209          | 0.6963 | 0.4209 |
| No log        | 0.2985 | 20   | 0.4578          | 0.7913 | 0.4578 |
| No log        | 0.3284 | 22   | 0.4800          | 0.8053 | 0.4800 |
| No log        | 0.3582 | 24   | 0.5069          | 0.8041 | 0.5069 |
| No log        | 0.3881 | 26   | 0.4981          | 0.7915 | 0.4981 |
| No log        | 0.4179 | 28   | 0.5835          | 0.7871 | 0.5835 |
| No log        | 0.4478 | 30   | 0.5946          | 0.7901 | 0.5946 |
| No log        | 0.4776 | 32   | 0.5436          | 0.8100 | 0.5436 |
| No log        | 0.5075 | 34   | 0.3946          | 0.8230 | 0.3946 |
| No log        | 0.5373 | 36   | 0.3388          | 0.7655 | 0.3388 |
| No log        | 0.5672 | 38   | 0.3601          | 0.7844 | 0.3601 |
| No log        | 0.5970 | 40   | 0.4543          | 0.8225 | 0.4543 |
| No log        | 0.6269 | 42   | 0.5032          | 0.8298 | 0.5032 |
| No log        | 0.6567 | 44   | 0.5322          | 0.8162 | 0.5322 |
| No log        | 0.6866 | 46   | 0.4794          | 0.8088 | 0.4794 |
| No log        | 0.7164 | 48   | 0.4186          | 0.7649 | 0.4186 |
| No log        | 0.7463 | 50   | 0.3706          | 0.7631 | 0.3706 |
| No log        | 0.7761 | 52   | 0.3514          | 0.7600 | 0.3514 |
| No log        | 0.8060 | 54   | 0.3463          | 0.7632 | 0.3463 |
| No log        | 0.8358 | 56   | 0.3582          | 0.7921 | 0.3582 |
| No log        | 0.8657 | 58   | 0.3851          | 0.8300 | 0.3851 |
| No log        | 0.8955 | 60   | 0.3925          | 0.8314 | 0.3925 |
| No log        | 0.9254 | 62   | 0.4042          | 0.8280 | 0.4042 |
| No log        | 0.9552 | 64   | 0.4190          | 0.8257 | 0.4190 |
| No log        | 0.9851 | 66   | 0.4192          | 0.8233 | 0.4192 |
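The Qwk column is quadratic weighted kappa, an agreement measure that penalizes large ordinal disagreements between predictions and gold labels more heavily than small ones. A minimal sketch of computing it with scikit-learn follows; the labels are illustrative, not this run's evaluation data, and this is not necessarily the metric code used here.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative labels only -- not the evaluation data of this run.
y_true = [0, 1, 2, 2, 3]  # gold ordinal labels
y_pred = [0, 1, 1, 2, 3]  # predictions (e.g. rounded model outputs)

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK = {qwk:.4f}")
```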
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1