---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tmvar_5e-05_ES12
  results: []
---

# tmvar_5e-05_ES12

This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0154
- Precision: 0.8549
- Recall: 0.8919
- F1: 0.8730
- Accuracy: 0.9967

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2914        | 1.47  | 25   | 0.1017          | 0.0       | 0.0    | 0.0    | 0.9843   |
| 0.0755        | 2.94  | 50   | 0.0449          | 0.2811    | 0.2811 | 0.2811 | 0.9857   |
| 0.0322        | 4.41  | 75   | 0.0272          | 0.3946    | 0.4757 | 0.4314 | 0.9906   |
| 0.0218        | 5.88  | 100  | 0.0227          | 0.5191    | 0.6595 | 0.5810 | 0.9924   |
| 0.0081        | 7.35  | 125  | 0.0144          | 0.7949    | 0.8378 | 0.8158 | 0.9963   |
| 0.0038        | 8.82  | 150  | 0.0129          | 0.8639    | 0.8919 | 0.8777 | 0.9971   |
| 0.0021        | 10.29 | 175  | 0.0129          | 0.8650    | 0.9351 | 0.8987 | 0.9976   |
| 0.0014        | 11.76 | 200  | 0.0122          | 0.8923    | 0.9405 | 0.9158 | 0.9980   |
| 0.0010        | 13.24 | 225  | 0.0121          | 0.8677    | 0.8865 | 0.8770 | 0.9976   |
| 0.0010        | 14.71 | 250  | 0.0118          | 0.8934    | 0.9514 | 0.9215 | 0.9981   |
| 0.0010        | 16.18 | 275  | 0.0162          | 0.8901    | 0.8757 | 0.8828 | 0.9967   |
| 0.0005        | 17.65 | 300  | 0.0117          | 0.8838    | 0.9459 | 0.9138 | 0.9982   |
| 0.0008        | 19.12 | 325  | 0.0119          | 0.8788    | 0.9405 | 0.9086 | 0.9981   |
| 0.0007        | 20.59 | 350  | 0.0139          | 0.8958    | 0.9297 | 0.9125 | 0.9978   |
| 0.0009        | 22.06 | 375  | 0.0141          | 0.8673    | 0.9189 | 0.8924 | 0.9975   |
| 0.0008        | 23.53 | 400  | 0.0136          | 0.8964    | 0.9351 | 0.9153 | 0.9977   |
| 0.0005        | 25.0  | 425  | 0.0140          | 0.8953    | 0.9243 | 0.9096 | 0.9976   |
| 0.0005        | 26.47 | 450  | 0.0132          | 0.8744    | 0.9405 | 0.9062 | 0.9981   |
| 0.0005        | 27.94 | 475  | 0.0132          | 0.8788    | 0.9405 | 0.9086 | 0.9978   |
| 0.0014        | 29.41 | 500  | 0.0170          | 0.8610    | 0.8703 | 0.8656 | 0.9968   |
| 0.0023        | 30.88 | 525  | 0.0258          | 0.7845    | 0.7676 | 0.7760 | 0.9955   |
| 0.0007        | 32.35 | 550  | 0.0168          | 0.8135    | 0.8486 | 0.8307 | 0.9962   |
| 0.0006        | 33.82 | 575  | 0.0193          | 0.8804    | 0.8757 | 0.8780 | 0.9969   |
| 0.0004        | 35.29 | 600  | 0.0154          | 0.8549    | 0.8919 | 0.8730 | 0.9967   |

### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
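The card leaves usage details blank, so the following is a minimal inference sketch rather than documented behavior. It assumes the checkpoint is a token-classification (NER) model, which the precision/recall/F1 metric set and the tmVar-style name suggest but the card does not confirm, and `model_id` is a placeholder for wherever this checkpoint is actually hosted.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Placeholder: substitute the real Hub repo or local path of this checkpoint.
model_id = "tmvar_5e-05_ES12"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" groups sub-word predictions back into entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("The BRAF p.V600E substitution was detected in the tumor sample."))
```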
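For reproduction, the hyperparameter list above maps onto `TrainingArguments` roughly as sketched below. This is an approximation under stated assumptions, not the author's script: the dataset, preprocessing, and label set are undocumented, so `num_labels`, `train_dataset`, and `eval_dataset` are placeholders. The 25-step evaluation cadence is read off the results table, and the "ES12" suffix plausibly denotes an early-stopping patience of 12 evaluations (the best validation loss falls at step 300 and training halts at step 600 = 300 + 12 × 25), though the card confirms neither.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

base = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
num_labels = 3  # assumption: BIO tags for a single entity type; adjust to the real label set

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=num_labels)

args = TrainingArguments(
    output_dir="tmvar_5e-05_ES12",
    learning_rate=5e-5,              # from the card
    per_device_train_batch_size=16,  # from the card
    per_device_eval_batch_size=16,   # from the card
    seed=42,                         # from the card
    max_steps=1000,                  # "training_steps: 1000"
    lr_scheduler_type="linear",      # from the card
    # The Adam betas/epsilon listed in the card are the Transformers defaults.
    evaluation_strategy="steps",
    eval_steps=25,                   # inferred from the results table
    save_strategy="steps",
    save_steps=25,
    load_best_model_at_end=True,     # required for early stopping
    metric_for_best_model="eval_loss",
)

train_dataset = ...  # placeholder: the training data is not documented in the card
eval_dataset = ...   # placeholder

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=12)],  # assumption: "ES12"
)
trainer.train()
```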