---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Variome_2e-05_250
  results: []
---

# Variome_2e-05_250

This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unspecified dataset (not documented in this card).
It achieves the following results on the evaluation set:
- Loss: 0.0798
- Precision: 0.4740
- Recall: 0.4133
- F1: 0.4416
- Accuracy: 0.9830

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.052         | 0.35  | 25   | 0.1874          | 0.0       | 0.0    | 0.0    | 0.9760   |
| 0.1879        | 0.69  | 50   | 0.1794          | 0.0       | 0.0    | 0.0    | 0.9760   |
| 0.1625        | 1.04  | 75   | 0.1736          | 0.0       | 0.0    | 0.0    | 0.9760   |
| 0.1643        | 1.39  | 100  | 0.1323          | 0.0       | 0.0    | 0.0    | 0.9760   |
| 0.1228        | 1.74  | 125  | 0.1183          | 0.2137    | 0.0854 | 0.1220 | 0.9769   |
| 0.1165        | 2.08  | 150  | 0.1113          | 0.2017    | 0.1230 | 0.1528 | 0.9774   |
| 0.0989        | 2.43  | 175  | 0.1072          | 0.3520    | 0.2092 | 0.2625 | 0.9792   |
| 0.1057        | 2.78  | 200  | 0.1008          | 0.3322    | 0.2528 | 0.2871 | 0.9795   |
| 0.0997        | 3.12  | 225  | 0.0961          | 0.3952    | 0.2801 | 0.3278 | 0.9804   |
| 0.0895        | 3.47  | 250  | 0.0930          | 0.4115    | 0.2938 | 0.3428 | 0.9807   |
| 0.0813        | 3.82  | 275  | 0.0904          | 0.3897    | 0.3305 | 0.3577 | 0.9810   |
| 0.0767        | 4.17  | 300  | 0.0885          | 0.4294    | 0.3348 | 0.3762 | 0.9815   |
| 0.0763        | 4.51  | 325  | 0.0851          | 0.4277    | 0.3715 | 0.3976 | 0.9817   |
| 0.0714        | 4.86  | 350  | 0.0836          | 0.4361    | 0.3698 | 0.4002 | 0.9822   |
| 0.0714        | 5.21  | 375  | 0.0825          | 0.4862    | 0.3766 | 0.4244 | 0.9828   |
| 0.0678        | 5.56  | 400  | 0.0814          | 0.4684    | 0.3920 | 0.4268 | 0.9828   |
| 0.0674        | 5.9   | 425  | 0.0802          | 0.4638    | 0.3988 | 0.4288 | 0.9830   |
| 0.0688        | 6.25  | 450  | 0.0792          | 0.4672    | 0.4073 | 0.4352 | 0.9828   |
| 0.0646        | 6.6   | 475  | 0.0802          | 0.4847    | 0.4056 | 0.4417 | 0.9831   |
| 0.0607        | 6.94  | 500  | 0.0798          | 0.4740    | 0.4133 | 0.4416 | 0.9830   |

### Framework versions

- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
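
### Reproducing the training setup (sketch)

The hyperparameters listed above map directly onto `transformers.TrainingArguments`. The sketch below is a reconstruction under stated assumptions, not the author's original script: the dataset, label set, and preprocessing are undocumented in this card, so only the arguments object (which follows the listed values exactly) is shown.

```python
from transformers import TrainingArguments

# Minimal sketch of the documented hyperparameters. `output_dir` reuses the
# model name as a placeholder, and the 25-step evaluation cadence is inferred
# from the results table rather than stated in the hyperparameter list.
args = TrainingArguments(
    output_dir="Variome_2e-05_250",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    max_steps=500,                # training_steps: 500
    lr_scheduler_type="linear",
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # epsilon=1e-08
    evaluation_strategy="steps",
    eval_steps=25,                # the table logs metrics every 25 steps
)
```

A `Trainer` built from this `args` object, the base PubMedBERT checkpoint, and the (undocumented) tokenized train/eval datasets would follow the schedule shown in the results table.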
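
### Example inference (sketch)

The card does not document intended usage, but the base model and the token-level precision/recall/F1 metrics suggest a token-classification (NER) head. The following is a hypothetical usage sketch under that assumption; the model id is a placeholder for wherever this checkpoint is actually hosted.

```python
from transformers import pipeline

# Assumption: the checkpoint carries a token-classification head. Replace the
# placeholder model id with the actual Hub path of this fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="Variome_2e-05_250",
    aggregation_strategy="simple",  # merge word-piece tokens into entity spans
)

print(ner("The BRCA1 c.68_69delAG variant is associated with hereditary breast cancer."))
```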