---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ABSA-SentencePair-corrected-domainAdapt-Stack-Semeval-Adapter-houlsby-run3
  results: []
---

# ABSA-SentencePair-corrected-domainAdapt-Stack-Semeval-Adapter-houlsby-run3

This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-msa](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3204
- Accuracy: 0.8752
- F1: 0.8752
- Precision: 0.8752
- Recall: 0.8752

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal `TrainingArguments` sketch of these settings is included at the end of this card):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 23
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4834        | 1.0   | 265  | 0.3768          | 0.8677   | 0.8677 | 0.8677    | 0.8677 |
| 0.3888        | 2.0   | 530  | 0.3437          | 0.8719   | 0.8719 | 0.8719    | 0.8719 |
| 0.3592        | 3.0   | 795  | 0.3512          | 0.8738   | 0.8738 | 0.8738    | 0.8738 |
| 0.3312        | 4.0   | 1060 | 0.3288          | 0.8677   | 0.8677 | 0.8677    | 0.8677 |
| 0.322         | 5.0   | 1325 | 0.3393          | 0.8677   | 0.8677 | 0.8677    | 0.8677 |
| 0.3052        | 6.0   | 1590 | 0.3245          | 0.8790   | 0.8790 | 0.8790    | 0.8790 |
| 0.2962        | 7.0   | 1855 | 0.3204          | 0.8752   | 0.8752 | 0.8752    | 0.8752 |
| 0.2834        | 8.0   | 2120 | 0.3324          | 0.8828   | 0.8828 | 0.8828    | 0.8828 |
| 0.2687        | 9.0   | 2385 | 0.3211          | 0.8743   | 0.8743 | 0.8743    | 0.8743 |
| 0.2647        | 10.0  | 2650 | 0.3453          | 0.8648   | 0.8648 | 0.8648    | 0.8648 |
| 0.2502        | 11.0  | 2915 | 0.3282          | 0.8743   | 0.8743 | 0.8743    | 0.8743 |
| 0.2441        | 12.0  | 3180 | 0.3430          | 0.8700   | 0.8700 | 0.8700    | 0.8700 |
| 0.2353        | 13.0  | 3445 | 0.3519          | 0.8767   | 0.8767 | 0.8767    | 0.8767 |
| 0.2286        | 14.0  | 3710 | 0.3481          | 0.8738   | 0.8738 | 0.8738    | 0.8738 |
| 0.2197        | 15.0  | 3975 | 0.3649          | 0.8771   | 0.8771 | 0.8771    | 0.8771 |
| 0.2191        | 16.0  | 4240 | 0.3608          | 0.8729   | 0.8729 | 0.8729    | 0.8729 |
| 0.2145        | 17.0  | 4505 | 0.3587          | 0.8724   | 0.8724 | 0.8724    | 0.8724 |
| 0.2075        | 18.0  | 4770 | 0.3577          | 0.8762   | 0.8762 | 0.8762    | 0.8762 |
| 0.2032        | 19.0  | 5035 | 0.3630          | 0.8724   | 0.8724 | 0.8724    | 0.8724 |
| 0.1975        | 20.0  | 5300 | 0.3636          | 0.8719   | 0.8719 | 0.8719    | 0.8719 |

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
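
### Training configuration sketch

For reference, the hyperparameters listed under "Training hyperparameters" above roughly correspond to the following `transformers.TrainingArguments` configuration. This is a minimal sketch, assuming the standard Hugging Face `Trainer` was used; the output directory, logging, and evaluation strategy are illustrative assumptions and are not taken from this card.

```python
from transformers import TrainingArguments

# Hedged sketch of the listed training configuration; paths and the
# evaluation strategy are assumptions, not recorded in this card.
training_args = TrainingArguments(
    output_dir="ABSA-SentencePair-corrected-domainAdapt-Stack-Semeval-Adapter-houlsby-run3",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=23,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumed: the results table reports metrics once per epoch
)
```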