---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
  - generated_from_trainer
model-index:
  - name: distilbert-medical-question_answer
    results: []
---

# distilbert-medical-question_answer

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 5.6100
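
A hedged usage sketch: the card does not record which task head was trained, so the question-answering pipeline below is an assumption based on the model name, and the repo id `OnePoint16/distilbert-medical-question_answer` is inferred from this card rather than confirmed.

```python
from transformers import pipeline

# Assumptions: the model exposes a question-answering head (inferred
# from its name, not stated in the card) and lives at this repo id.
qa = pipeline(
    "question-answering",
    model="OnePoint16/distilbert-medical-question_answer",
)

result = qa(
    question="What drug class does metformin belong to?",
    context="Metformin is a biguanide used as a first-line treatment "
            "for type 2 diabetes.",
)
print(result["answer"], round(result["score"], 3))
```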

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch reproducing them in code follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
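
A minimal sketch reconstructing this configuration with `TrainingArguments` and `Trainer` from the `transformers` 4.31 API listed under framework versions. The output path, the choice of model head, and the `train_dataset`/`eval_dataset` objects are placeholders; the card does not record the dataset or task.

```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
# Assumption: a QA head, per the model name; the card does not say.
model = AutoModelForQuestionAnswering.from_pretrained(base)

args = TrainingArguments(
    output_dir="distilbert-medical-question_answer",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=100,
    lr_scheduler_type="linear",   # library default, listed for clarity
    adam_beta1=0.9,               # Adam settings from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # matches the per-epoch log below
)

# train_dataset / eval_dataset are hypothetical, tokenized elsewhere.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```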

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 21   | 4.2930          |
| No log        | 2.0   | 42   | 3.3634          |
| No log        | 3.0   | 63   | 3.2834          |
| No log        | 4.0   | 84   | 3.2596          |
| No log        | 5.0   | 105  | 3.2594          |
| No log        | 6.0   | 126  | 3.2574          |
| No log        | 7.0   | 147  | 3.2845          |
| No log        | 8.0   | 168  | 3.2187          |
| No log        | 9.0   | 189  | 3.3233          |
| No log        | 10.0  | 210  | 3.3231          |
| No log        | 11.0  | 231  | 3.3505          |
| No log        | 12.0  | 252  | 3.5721          |
| No log        | 13.0  | 273  | 3.5463          |
| No log        | 14.0  | 294  | 3.5413          |
| No log        | 15.0  | 315  | 3.6203          |
| No log        | 16.0  | 336  | 3.6025          |
| No log        | 17.0  | 357  | 3.6301          |
| No log        | 18.0  | 378  | 3.8150          |
| No log        | 19.0  | 399  | 4.0084          |
| No log        | 20.0  | 420  | 3.9528          |
| No log        | 21.0  | 441  | 4.0350          |
| No log        | 22.0  | 462  | 3.9436          |
| No log        | 23.0  | 483  | 4.0115          |
| 1.7508        | 24.0  | 504  | 4.0571          |
| 1.7508        | 25.0  | 525  | 4.0290          |
| 1.7508        | 26.0  | 546  | 4.0609          |
| 1.7508        | 27.0  | 567  | 4.2875          |
| 1.7508        | 28.0  | 588  | 4.0578          |
| 1.7508        | 29.0  | 609  | 4.1743          |
| 1.7508        | 30.0  | 630  | 4.1155          |
| 1.7508        | 31.0  | 651  | 4.2136          |
| 1.7508        | 32.0  | 672  | 4.3880          |
| 1.7508        | 33.0  | 693  | 4.4454          |
| 1.7508        | 34.0  | 714  | 4.3621          |
| 1.7508        | 35.0  | 735  | 4.1829          |
| 1.7508        | 36.0  | 756  | 4.2985          |
| 1.7508        | 37.0  | 777  | 4.5783          |
| 1.7508        | 38.0  | 798  | 4.4504          |
| 1.7508        | 39.0  | 819  | 4.6955          |
| 1.7508        | 40.0  | 840  | 4.5165          |
| 1.7508        | 41.0  | 861  | 4.3018          |
| 1.7508        | 42.0  | 882  | 4.5299          |
| 1.7508        | 43.0  | 903  | 4.6147          |
| 1.7508        | 44.0  | 924  | 4.4756          |
| 1.7508        | 45.0  | 945  | 4.6782          |
| 1.7508        | 46.0  | 966  | 4.6168          |
| 1.7508        | 47.0  | 987  | 4.7553          |
| 0.2318        | 48.0  | 1008 | 4.8580          |
| 0.2318        | 49.0  | 1029 | 4.8970          |
| 0.2318        | 50.0  | 1050 | 4.8502          |
| 0.2318        | 51.0  | 1071 | 4.7219          |
| 0.2318        | 52.0  | 1092 | 4.9355          |
| 0.2318        | 53.0  | 1113 | 5.0003          |
| 0.2318        | 54.0  | 1134 | 5.1603          |
| 0.2318        | 55.0  | 1155 | 5.0398          |
| 0.2318        | 56.0  | 1176 | 5.1349          |
| 0.2318        | 57.0  | 1197 | 5.1403          |
| 0.2318        | 58.0  | 1218 | 5.0170          |
| 0.2318        | 59.0  | 1239 | 5.0553          |
| 0.2318        | 60.0  | 1260 | 5.2331          |
| 0.2318        | 61.0  | 1281 | 5.0543          |
| 0.2318        | 62.0  | 1302 | 5.1769          |
| 0.2318        | 63.0  | 1323 | 5.4024          |
| 0.2318        | 64.0  | 1344 | 5.2960          |
| 0.2318        | 65.0  | 1365 | 5.2071          |
| 0.2318        | 66.0  | 1386 | 5.1635          |
| 0.2318        | 67.0  | 1407 | 5.2613          |
| 0.2318        | 68.0  | 1428 | 5.3370          |
| 0.2318        | 69.0  | 1449 | 5.3725          |
| 0.2318        | 70.0  | 1470 | 5.2739          |
| 0.2318        | 71.0  | 1491 | 5.2887          |
| 0.0363        | 72.0  | 1512 | 5.4713          |
| 0.0363        | 73.0  | 1533 | 5.4102          |
| 0.0363        | 74.0  | 1554 | 5.3190          |
| 0.0363        | 75.0  | 1575 | 5.3406          |
| 0.0363        | 76.0  | 1596 | 5.4775          |
| 0.0363        | 77.0  | 1617 | 5.4636          |
| 0.0363        | 78.0  | 1638 | 5.4894          |
| 0.0363        | 79.0  | 1659 | 5.5111          |
| 0.0363        | 80.0  | 1680 | 5.5769          |
| 0.0363        | 81.0  | 1701 | 5.5069          |
| 0.0363        | 82.0  | 1722 | 5.5296          |
| 0.0363        | 83.0  | 1743 | 5.5471          |
| 0.0363        | 84.0  | 1764 | 5.5630          |
| 0.0363        | 85.0  | 1785 | 5.5563          |
| 0.0363        | 86.0  | 1806 | 5.5700          |
| 0.0363        | 87.0  | 1827 | 5.6082          |
| 0.0363        | 88.0  | 1848 | 5.5808          |
| 0.0363        | 89.0  | 1869 | 5.5351          |
| 0.0363        | 90.0  | 1890 | 5.4856          |
| 0.0363        | 91.0  | 1911 | 5.5007          |
| 0.0363        | 92.0  | 1932 | 5.5076          |
| 0.0363        | 93.0  | 1953 | 5.5377          |
| 0.0363        | 94.0  | 1974 | 5.5612          |
| 0.0363        | 95.0  | 1995 | 5.5754          |
| 0.0067        | 96.0  | 2016 | 5.5861          |
| 0.0067        | 97.0  | 2037 | 5.5973          |
| 0.0067        | 98.0  | 2058 | 5.6035          |
| 0.0067        | 99.0  | 2079 | 5.6073          |
| 0.0067        | 100.0 | 2100 | 5.6100          |
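
Validation loss bottoms out at epoch 8 (3.2187) and climbs for the rest of the run while training loss keeps shrinking, so the model overfits long before epoch 100. If re-running this training, `EarlyStoppingCallback` (a standard `transformers` utility, not something the card says was used) would stop near the minimum and keep the best checkpoint; a sketch extending the arguments above:

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-medical-question_answer",  # placeholder path
    evaluation_strategy="epoch",
    save_strategy="epoch",        # must align with evaluation_strategy
    load_best_model_at_end=True,  # restore the best checkpoint when done
    metric_for_best_model="eval_loss",
    greater_is_better=False,      # lower validation loss is better
    num_train_epochs=100,         # upper bound; stopping cuts it short
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
)

# model, train_dataset, eval_dataset: same hypothetical objects as above.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```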

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3