---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa_EVAL_gpqa
  results: []
---

# fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa_EVAL_gpqa

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 9.4859
- Accuracy: 0.6111
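
The checkpoint can be loaded with the Transformers library. The snippet below is only a minimal sketch: the repository id and the multiple-choice head are assumptions inferred from the card's name (MedMCQA and GPQA are multiple-choice benchmarks), since the architecture is not documented in this card.

```python
from transformers import AutoModelForMultipleChoice, AutoTokenizer

# Repository id and model head are assumptions; adjust them to the actual checkpoint.
MODEL_ID = "afaji/fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa_EVAL_gpqa"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForMultipleChoice.from_pretrained(MODEL_ID)
```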

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
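
The sketch below shows how these values map onto `transformers.TrainingArguments`. It is a minimal reconstruction, not the original training script: the `output_dir` name is an assumption, and every argument not listed in this card is left at its Trainer default.

```python
from transformers import TrainingArguments

# Minimal sketch of the reported hyperparameters.
# output_dir is an assumption; anything not listed in the card keeps its default.
training_args = TrainingArguments(
    output_dir="fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa_EVAL_gpqa",
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=321,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
)
```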

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 63   | 15.9159         | 0.2727   |
| No log        | 2.0   | 126  | 13.1758         | 0.3838   |
| No log        | 3.0   | 189  | 12.5917         | 0.4596   |
| No log        | 4.0   | 252  | 11.4595         | 0.4899   |
| No log        | 5.0   | 315  | 11.0792         | 0.5152   |
| No log        | 6.0   | 378  | 10.3175         | 0.5455   |
| No log        | 7.0   | 441  | 10.4051         | 0.5505   |
| 2.9311        | 8.0   | 504  | 10.6829         | 0.5556   |
| 2.9311        | 9.0   | 567  | 10.7602         | 0.5253   |
| 2.9311        | 10.0  | 630  | 10.2955         | 0.5707   |
| 2.9311        | 11.0  | 693  | 9.8900          | 0.5606   |
| 2.9311        | 12.0  | 756  | 9.5430          | 0.5960   |
| 2.9311        | 13.0  | 819  | 9.8694          | 0.5909   |
| 2.9311        | 14.0  | 882  | 9.6516          | 0.5758   |
| 2.9311        | 15.0  | 945  | 9.4962          | 0.6010   |
| 0.3764        | 16.0  | 1008 | 9.8449          | 0.5960   |
| 0.3764        | 17.0  | 1071 | 9.5987          | 0.6061   |
| 0.3764        | 18.0  | 1134 | 9.6737          | 0.6061   |
| 0.3764        | 19.0  | 1197 | 9.5190          | 0.6061   |
| 0.3764        | 20.0  | 1260 | 9.4859          | 0.6111   |

### Framework versions

- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0