---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: sa_BERT_48_qqp
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE QQP
      type: glue
      config: qqp
      split: validation
      args: qqp
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8510017313875835
    - name: F1
      type: f1
      value: 0.799640790261425
---

# sa_BERT_48_qqp

This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3425
- Accuracy: 0.8510
- F1: 0.7996
- Combined Score: 0.8253

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.4679        | 1.0   | 3791  | 0.3795          | 0.8222   | 0.7705 | 0.7964         |
| 0.3469        | 2.0   | 7582  | 0.3580          | 0.8447   | 0.7963 | 0.8205         |
| 0.2868        | 3.0   | 11373 | 0.3425          | 0.8510   | 0.7996 | 0.8253         |
| 0.2372        | 4.0   | 15164 | 0.3706          | 0.8561   | 0.8149 | 0.8355         |
| 0.1938        | 5.0   | 18955 | 0.3679          | 0.8625   | 0.8197 | 0.8411         |
| 0.1567        | 6.0   | 22746 | 0.4246          | 0.8639   | 0.8214 | 0.8427         |
| 0.1294        | 7.0   | 26537 | 0.4047          | 0.8585   | 0.8189 | 0.8387         |
| 0.1059        | 8.0   | 30328 | 0.5063          | 0.8579   | 0.8181 | 0.8380         |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
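
As a rough guide, the hyperparameters listed above map onto a 🤗 `TrainingArguments` configuration along the following lines. This is a minimal sketch for orientation, not the original training script: the multi-GPU launch, GLUE data preprocessing, and checkpoint selection are omitted, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameter list above.
# Not the original training script; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="sa_BERT_48_qqp",
    learning_rate=4e-05,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=10,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```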
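
For inference, the checkpoint can be loaded as a standard sequence-classification model. The sketch below assumes the checkpoint is published under the author's namespace as `gokuls/sa_BERT_48_qqp` (inferred from the model name on this card); adjust the repo id if it differs. In GLUE QQP, label 1 means the two questions are duplicates.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id, inferred from this card's model name; adjust if needed.
model_id = "gokuls/sa_BERT_48_qqp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# QQP is a sentence-pair task: are these two questions asking the same thing?
question1 = "How do I learn Python quickly?"
question2 = "What is the fastest way to learn Python?"

inputs = tokenizer(question1, question2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(dim=-1).item()
print("duplicate" if predicted_label == 1 else "not duplicate")
```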