---
license: mit
base_model: hongpingjun98/BioMedNLP_DeBERTa
tags:
  - generated_from_trainer
datasets:
  - sem_eval_2024_task_2
metrics:
  - accuracy
  - precision
  - recall
  - f1
model-index:
  - name: BioMedNLP_DeBERTa_all_updates
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: sem_eval_2024_task_2
          type: sem_eval_2024_task_2
          config: sem_eval_2024_task_2_source
          split: validation
          args: sem_eval_2024_task_2_source
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.655
          - name: Precision
            type: precision
            value: 0.6551396256630968
          - name: Recall
            type: recall
            value: 0.655
          - name: F1
            type: f1
            value: 0.6549223575304444
---

# BioMedNLP_DeBERTa_all_updates

This model is a fine-tuned version of [hongpingjun98/BioMedNLP_DeBERTa](https://huggingface.co/hongpingjun98/BioMedNLP_DeBERTa) on the sem_eval_2024_task_2 dataset. It achieves the following results on the evaluation set:

- Loss: 2.5118
- Accuracy: 0.655
- Precision: 0.6551
- Recall: 0.655
- F1: 0.6549
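
The checkpoint can be loaded with the `transformers` text-classification pipeline. A minimal sketch, assuming the model is hosted on the Hub under the hypothetical repo id `A-Bar/BioMedNLP_DeBERTa_all_updates` (substitute the actual path); the premise/statement pair is illustrative only, since SemEval-2024 Task 2 pairs a clinical trial report excerpt with a statement to verify:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="A-Bar/BioMedNLP_DeBERTa_all_updates",
)

# SemEval-2024 Task 2 inputs are (premise, statement) pairs; the pipeline
# encodes them as a single sequence pair.
prediction = classifier({
    "text": "Adverse events were reported in 12% of patients in the intervention arm.",
    "text_pair": "More than half of the patients experienced adverse events.",
})
print(prediction)
```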

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
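
The metadata block above does name the evaluation data: the `sem_eval_2024_task_2` dataset, config `sem_eval_2024_task_2_source`, `validation` split. A minimal loading sketch, assuming the dataset id resolves on the Hugging Face Hub as written (depending on how it is hosted, `trust_remote_code=True` may be required):

```python
from datasets import load_dataset

# Dataset id, config, and split taken from the model-index metadata above.
ds = load_dataset("sem_eval_2024_task_2", "sem_eval_2024_task_2_source")
print(ds["validation"][0])
```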

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
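
As a rough reconstruction, the list above maps onto `transformers` 4.35 `TrainingArguments` as sketched below; the `output_dir` is a placeholder, and anything not listed (including the Adam betas and epsilon, which match the optimizer defaults) is left at its default value:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BioMedNLP_DeBERTa_all_updates",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the AdamW defaults.
)
```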

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 1.0   | 9    | 0.6482          | 0.62     | 0.6403    | 0.62   | 0.6058 |
| 0.7604        | 2.0   | 18   | 0.6376          | 0.635    | 0.6515    | 0.635  | 0.6248 |
| 0.7485        | 3.0   | 27   | 0.6256          | 0.655    | 0.6672    | 0.655  | 0.6486 |
| 0.7114        | 4.0   | 36   | 0.6188          | 0.675    | 0.6790    | 0.675  | 0.6732 |
| 0.6906        | 5.0   | 45   | 0.6181          | 0.705    | 0.7050    | 0.705  | 0.7050 |
| 0.5355        | 6.0   | 54   | 0.6257          | 0.68     | 0.6803    | 0.6800 | 0.6799 |
| 0.5411        | 7.0   | 63   | 0.6258          | 0.675    | 0.6754    | 0.675  | 0.6748 |
| 0.4849        | 8.0   | 72   | 0.6376          | 0.665    | 0.6670    | 0.665  | 0.6640 |
| 0.4386        | 9.0   | 81   | 0.6507          | 0.68     | 0.6826    | 0.6800 | 0.6788 |
| 0.3565        | 10.0  | 90   | 0.6631          | 0.685    | 0.6850    | 0.685  | 0.6850 |
| 0.3565        | 11.0  | 99   | 0.7089          | 0.66     | 0.6616    | 0.6600 | 0.6591 |
| 0.2992        | 12.0  | 108  | 0.7791          | 0.67     | 0.6717    | 0.6700 | 0.6692 |
| 0.2092        | 13.0  | 117  | 0.8224          | 0.68     | 0.6803    | 0.6800 | 0.6799 |
| 0.1643        | 14.0  | 126  | 0.9128          | 0.675    | 0.6750    | 0.675  | 0.6750 |
| 0.0811        | 15.0  | 135  | 1.0458          | 0.67     | 0.6701    | 0.67   | 0.6700 |
| 0.0502        | 16.0  | 144  | 1.2061          | 0.67     | 0.6701    | 0.67   | 0.6700 |
| 0.011         | 17.0  | 153  | 1.3763          | 0.655    | 0.6558    | 0.655  | 0.6546 |
| 0.0261        | 18.0  | 162  | 1.4862          | 0.655    | 0.6558    | 0.655  | 0.6546 |
| 0.0057        | 19.0  | 171  | 1.5609          | 0.665    | 0.6651    | 0.665  | 0.6649 |
| 0.0026        | 20.0  | 180  | 1.6435          | 0.655    | 0.6550    | 0.655  | 0.6550 |
| 0.0026        | 21.0  | 189  | 1.7122          | 0.655    | 0.6550    | 0.655  | 0.6550 |
| 0.0019        | 22.0  | 198  | 1.7682          | 0.655    | 0.6550    | 0.655  | 0.6550 |
| 0.0016        | 23.0  | 207  | 1.8163          | 0.655    | 0.6550    | 0.655  | 0.6550 |
| 0.0013        | 24.0  | 216  | 1.8590          | 0.655    | 0.6550    | 0.655  | 0.6550 |
| 0.0012        | 25.0  | 225  | 1.8883          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.001         | 26.0  | 234  | 1.9199          | 0.665    | 0.6651    | 0.665  | 0.6649 |
| 0.0008        | 27.0  | 243  | 1.9548          | 0.665    | 0.6651    | 0.665  | 0.6649 |
| 0.0007        | 28.0  | 252  | 1.9958          | 0.665    | 0.6658    | 0.665  | 0.6646 |
| 0.0007        | 29.0  | 261  | 2.0427          | 0.665    | 0.6658    | 0.665  | 0.6646 |
| 0.0006        | 30.0  | 270  | 2.0890          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.0006        | 31.0  | 279  | 2.1265          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.0005        | 32.0  | 288  | 2.1537          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.0077        | 33.0  | 297  | 2.1871          | 0.655    | 0.6550    | 0.655  | 0.6550 |
| 0.0004        | 34.0  | 306  | 2.2152          | 0.66     | 0.66      | 0.66   | 0.66   |
| 0.0004        | 35.0  | 315  | 2.2393          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.0003        | 36.0  | 324  | 2.2641          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.0003        | 37.0  | 333  | 2.2881          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.0008        | 38.0  | 342  | 2.3215          | 0.645    | 0.6462    | 0.645  | 0.6443 |
| 0.0005        | 39.0  | 351  | 2.3445          | 0.665    | 0.6650    | 0.665  | 0.6650 |
| 0.0426        | 40.0  | 360  | 2.3033          | 0.68     | 0.6818    | 0.6800 | 0.6792 |
| 0.0426        | 41.0  | 369  | 2.3582          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.0005        | 42.0  | 378  | 2.3550          | 0.66     | 0.6603    | 0.66   | 0.6599 |
| 0.0402        | 43.0  | 387  | 2.3575          | 0.665    | 0.6654    | 0.665  | 0.6648 |
| 0.0003        | 44.0  | 396  | 2.3372          | 0.675    | 0.6752    | 0.675  | 0.6749 |
| 0.0135        | 45.0  | 405  | 2.3467          | 0.66     | 0.6603    | 0.66   | 0.6599 |
| 0.0007        | 46.0  | 414  | 2.3033          | 0.685    | 0.6859    | 0.685  | 0.6846 |
| 0.0003        | 47.0  | 423  | 2.2770          | 0.675    | 0.6764    | 0.675  | 0.6743 |
| 0.0003        | 48.0  | 432  | 2.3131          | 0.68     | 0.6807    | 0.6800 | 0.6797 |
| 0.0002        | 49.0  | 441  | 2.4371          | 0.66     | 0.6601    | 0.66   | 0.6600 |
| 0.0004        | 50.0  | 450  | 2.5118          | 0.655    | 0.6551    | 0.655  | 0.6549 |
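
Precision and recall track accuracy almost exactly in every row, which suggests weighted averaging over the classes. A sketch of a matching `compute_metrics` function using the `evaluate` library (the weighted averaging is an assumption inferred from the table, not confirmed by the source):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
precision = evaluate.load("precision")
recall = evaluate.load("recall")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "precision": precision.compute(predictions=preds, references=labels, average="weighted")["precision"],
        "recall": recall.compute(predictions=preds, references=labels, average="weighted")["recall"],
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```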

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0