
Yepes_0.0001_250

This model is a fine-tuned version of microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.1555
  • Precision: 0.5922
  • Recall: 0.4552
  • F1: 0.5148
  • Accuracy: 0.9768
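
The precision, recall, F1, and accuracy metrics above are typical of a token-classification (e.g. biomedical NER) evaluation, although the task is not stated in this card. A minimal inference sketch under that assumption, with the repository id used only as a placeholder:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Placeholder model id; replace with the actual Hub path of this checkpoint.
model_id = "Yepes_0.0001_250"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Merge sub-word predictions into entity-level spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("The patient was treated with imatinib for chronic myeloid leukemia."))
```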

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 500
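
As a sketch, the hyperparameters above map onto transformers.TrainingArguments roughly as follows; output_dir and any settings not listed are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Yepes_0.0001_250",   # assumed; not stated in the card
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    max_steps=500,                   # training_steps: 500
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```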

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4065        | 1.39  | 25   | 0.2115          | 0.0       | 0.0    | 0.0    | 0.9672   |
| 0.1995        | 2.78  | 50   | 0.2120          | 0.0       | 0.0    | 0.0    | 0.9672   |
| 0.1995        | 4.17  | 75   | 0.2108          | 0.0       | 0.0    | 0.0    | 0.9672   |
| 0.1694        | 5.56  | 100  | 0.1646          | 0.0       | 0.0    | 0.0    | 0.9672   |
| 0.1493        | 6.94  | 125  | 0.1513          | 0.0       | 0.0    | 0.0    | 0.9672   |
| 0.1266        | 8.33  | 150  | 0.1446          | 0.0       | 0.0    | 0.0    | 0.9672   |
| 0.106         | 9.72  | 175  | 0.1396          | 0.4019    | 0.2139 | 0.2792 | 0.9704   |
| 0.086         | 11.11 | 200  | 0.1162          | 0.5037    | 0.3408 | 0.4065 | 0.9740   |
| 0.0613        | 12.5  | 225  | 0.1230          | 0.5015    | 0.4104 | 0.4514 | 0.9740   |
| 0.047         | 13.89 | 250  | 0.1306          | 0.5333    | 0.4378 | 0.4809 | 0.9753   |
| 0.0351        | 15.28 | 275  | 0.1351          | 0.5629    | 0.4453 | 0.4972 | 0.9757   |
| 0.0266        | 16.67 | 300  | 0.1453          | 0.5617    | 0.4303 | 0.4873 | 0.9765   |
| 0.02          | 18.06 | 325  | 0.1441          | 0.5573    | 0.4478 | 0.4966 | 0.9757   |
| 0.0153        | 19.44 | 350  | 0.1555          | 0.5922    | 0.4552 | 0.5148 | 0.9768   |

Framework versions

  • Transformers 4.27.4
  • Pytorch 1.13.1+cu116
  • Datasets 2.11.0
  • Tokenizers 0.13.2