MLMA_lab9_task2

This model is a fine-tuned version of microsoft/biogpt on the ncbi_disease dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2509
  • Precision: 0.0159
  • Recall: 0.1487
  • F1: 0.0287
  • Accuracy: 0.6365
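
A minimal inference sketch, assuming the fine-tuned checkpoint is published on the Hugging Face Hub as daijin219/MLMA_lab9_task2 with a token-classification head; the example sentence and aggregation setting are illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned BioGPT token-classification checkpoint from the Hub.
# The repo id below is taken from this card; adjust if the model lives elsewhere.
ner = pipeline(
    "token-classification",
    model="daijin219/MLMA_lab9_task2",
    aggregation_strategy="simple",
)

# Illustrative input; ncbi_disease targets disease-mention spans.
print(ner("The patient was diagnosed with cystic fibrosis."))
```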

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
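
For orientation, a minimal sketch of loading the ncbi_disease dataset with the datasets library, assuming the standard Hub dataset id and its usual token/NER-tag columns:

```python
from datasets import load_dataset

# ncbi_disease provides train/validation/test splits with word-level tokens
# and NER tags (O, B-Disease, I-Disease).
ds = load_dataset("ncbi_disease")
print(ds)
print(ds["train"][0]["tokens"][:10])
print(ds["train"][0]["ner_tags"][:10])
```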

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
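
A minimal sketch of a TrainingArguments configuration matching the values above; output_dir and evaluation_strategy are assumptions, not taken from this card, and the Adam betas/epsilon are the Trainer defaults:

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; output_dir and evaluation_strategy are assumed.
training_args = TrainingArguments(
    output_dir="MLMA_lab9_task2",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)
```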

Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.153         | 1.0   | 680   | 1.0671          | 0.0122    | 0.1258 | 0.0223 | 0.5452   |
| 1.02          | 2.0   | 1360  | 1.0418          | 0.0098    | 0.0203 | 0.0132 | 0.6791   |
| 0.9552        | 3.0   | 2040  | 1.0269          | 0.0135    | 0.1677 | 0.0250 | 0.5282   |
| 0.926         | 4.0   | 2720  | 1.0390          | 0.0143    | 0.0940 | 0.0248 | 0.6686   |
| 0.9156        | 5.0   | 3400  | 1.0200          | 0.0135    | 0.2046 | 0.0253 | 0.4679   |
| 0.8791        | 6.0   | 4080  | 1.0543          | 0.0131    | 0.2745 | 0.0250 | 0.3149   |
| 0.8672        | 7.0   | 4760  | 1.0545          | 0.0141    | 0.2732 | 0.0267 | 0.3471   |
| 0.8627        | 8.0   | 5440  | 1.0734          | 0.0145    | 0.0826 | 0.0246 | 0.7220   |
| 0.8375        | 9.0   | 6120  | 1.1068          | 0.0156    | 0.1410 | 0.0281 | 0.6451   |
| 0.8235        | 10.0  | 6800  | 1.0796          | 0.0158    | 0.1537 | 0.0286 | 0.6210   |
| 0.8157        | 11.0  | 7480  | 1.1476          | 0.0143    | 0.1690 | 0.0263 | 0.5737   |
| 0.7957        | 12.0  | 8160  | 1.1369          | 0.0143    | 0.1525 | 0.0262 | 0.6155   |
| 0.7937        | 13.0  | 8840  | 1.2014          | 0.0151    | 0.1741 | 0.0278 | 0.5808   |
| 0.7765        | 14.0  | 9520  | 1.2249          | 0.0160    | 0.1449 | 0.0289 | 0.6443   |
| 0.7661        | 15.0  | 10200 | 1.2509          | 0.0159    | 0.1487 | 0.0287 | 0.6365   |

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu118
  • Datasets 2.11.0
  • Tokenizers 0.13.3
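
A quick way to confirm a local environment matches the versions listed above (a sketch only; import names are the standard package modules):

```python
import datasets
import tokenizers
import torch
import transformers

# Versions this model card reports.
print("transformers", transformers.__version__)  # 4.28.1
print("torch", torch.__version__)                # 2.0.0+cu118
print("datasets", datasets.__version__)          # 2.11.0
print("tokenizers", tokenizers.__version__)      # 0.13.3
```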