|
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tmvar_2e-05
  results: []
---
|
|
|
|
|
|
# tmvar_2e-05 |
|
|
|
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on a dataset that was not recorded in the training metadata; the model name suggests the tmVar corpus of genetic variant mentions.
|
It achieves the following results on the evaluation set (the final logged evaluation, step 475; a sketch of how such metrics are computed follows the list):
|
- Loss: 0.0136 |
|
- Precision: 0.8308 |
|
- Recall: 0.8757 |
|
- F1: 0.8526 |
|
- Accuracy: 0.9968 |
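
The precision, recall, and F1 values are presumably entity-level scores of the kind produced by seqeval, while accuracy is token-level; below is a minimal sketch under that assumption, with a hypothetical IOB2 tag set (the actual labels are not recorded):

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy gold and predicted tag sequences; `B-Variant`/`I-Variant` are hypothetical labels.
y_true = [["O", "B-Variant", "I-Variant", "O", "O"]]
y_pred = [["O", "B-Variant", "I-Variant", "O", "B-Variant"]]

print(precision_score(y_true, y_pred))  # 0.5 -- one of two predicted entities is correct
print(recall_score(y_true, y_pred))     # 1.0 -- the single gold entity was found
print(f1_score(y_true, y_pred))         # ~0.667
print(accuracy_score(y_true, y_pred))   # 0.8 -- 4 of 5 tokens tagged correctly
```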
|
|
|
## Model description |
|
|
|
This is PubMedBERT (pretrained on PubMed abstracts and full-text articles) with a token-classification head, fine-tuned at a learning rate of 2e-05, as the model name indicates. The entity-level precision/recall/F1 metrics point to a named-entity-recognition task, most plausibly the tagging of genetic variant mentions suggested by the tmVar naming; no further description was recorded by the Trainer.
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for token-level tagging of biomedical text, such as extracting variant mentions from PubMed abstracts. Known limitations: the training corpus is undocumented, the reported scores come from a single evaluation split, and the base model is uncased, so case-sensitive mention formats are normalized away. Validate on your own data before relying on it. A minimal inference sketch follows.
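
A minimal inference sketch, assuming the checkpoint is published under a repo id or saved to a local path (the id `tmvar_2e-05` below is a placeholder; substitute the actual location):

```python
from transformers import pipeline

# Placeholder model id -- replace with the actual Hub repo id or local path.
ner = pipeline(
    "token-classification",
    model="tmvar_2e-05",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)

# Hypothetical input; the returned entity labels depend on the (unrecorded) tag set.
print(ner("The patient carried the BRCA1 c.68_69delAG deletion."))
```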
|
|
|
## Training and evaluation data |
|
|
|
Not recorded by the Trainer. The model name points to the tmVar corpus (expert-annotated genetic variant mentions in PubMed abstracts), but this is an inference from the name rather than documented provenance; the evaluation split is the one whose metrics are reported above.
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` appears after the list):
|
- learning_rate: 2e-05 |
|
- train_batch_size: 16 |
|
- eval_batch_size: 16 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- training_steps: 500 |
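
The Adam settings above are the transformers defaults, so a reconstruction needs no explicit optimizer arguments. As a sketch (not the original training script), the logged values map onto `TrainingArguments` roughly as follows; `output_dir` is a placeholder and `eval_steps=25` is inferred from the evaluation cadence in the results table below:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tmvar_2e-05",        # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=500,                   # logged as training_steps: 500
    evaluation_strategy="steps",
    eval_steps=25,                   # matches the every-25-steps rows below
)
```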
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5077        | 1.47  | 25   | 0.1015          | 0.0       | 0.0    | 0.0    | 0.9843   |
| 0.0834        | 2.94  | 50   | 0.0463          | 0.3581    | 0.4162 | 0.3850 | 0.9877   |
| 0.0348        | 4.41  | 75   | 0.0315          | 0.3846    | 0.4324 | 0.4071 | 0.9896   |
| 0.0285        | 5.88  | 100  | 0.0234          | 0.5157    | 0.6216 | 0.5637 | 0.9927   |
| 0.0149        | 7.35  | 125  | 0.0174          | 0.7801    | 0.8054 | 0.7926 | 0.9957   |
| 0.0104        | 8.82  | 150  | 0.0156          | 0.7800    | 0.8432 | 0.8104 | 0.9959   |
| 0.0059        | 10.29 | 175  | 0.0160          | 0.8360    | 0.8541 | 0.8449 | 0.9960   |
| 0.0050        | 11.76 | 200  | 0.0139          | 0.8333    | 0.8649 | 0.8488 | 0.9964   |
| 0.0030        | 13.24 | 225  | 0.0164          | 0.8263    | 0.8486 | 0.8373 | 0.9961   |
| 0.0024        | 14.71 | 250  | 0.0146          | 0.7980    | 0.8541 | 0.8251 | 0.9964   |
| 0.0023        | 16.18 | 275  | 0.0132          | 0.8267    | 0.9027 | 0.8630 | 0.9969   |
| 0.0016        | 17.65 | 300  | 0.0133          | 0.8274    | 0.8811 | 0.8534 | 0.9971   |
| 0.0015        | 19.12 | 325  | 0.0129          | 0.8235    | 0.9081 | 0.8638 | 0.9971   |
| 0.0014        | 20.59 | 350  | 0.0163          | 0.8703    | 0.8703 | 0.8703 | 0.9968   |
| 0.0013        | 22.06 | 375  | 0.0141          | 0.8402    | 0.8811 | 0.8602 | 0.9969   |
| 0.0013        | 23.53 | 400  | 0.0145          | 0.8438    | 0.8757 | 0.8594 | 0.9968   |
| 0.0011        | 25.00 | 425  | 0.0149          | 0.8482    | 0.8757 | 0.8617 | 0.9969   |
| 0.0011        | 26.47 | 450  | 0.0138          | 0.8351    | 0.8757 | 0.8549 | 0.9968   |
| 0.0011        | 27.94 | 475  | 0.0136          | 0.8308    | 0.8757 | 0.8526 | 0.9968   |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.27.4 |
|
- Pytorch 2.0.0+cu118 |
|
- Datasets 2.11.0 |
|
- Tokenizers 0.13.2 |
|
|