
# pairwise-reward-zephyr-7b-sft-qlora-ultrafeedback-binarized-20240925-122042

This model is a pairwise reward model fine-tuned from mistralai/Mistral-7B-v0.1 with QLoRA. The card itself does not name the training data, though the model name points to the UltraFeedback binarized preference dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 0.4788
- Accuracy: 0.7536
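
Since the card provides no usage snippet, here is a minimal, hedged sketch of how a PEFT reward-model adapter like this one is typically loaded and queried. It assumes the adapter was trained on a single-logit sequence-classification head; verify against the adapter config before relying on it.

```python
# A minimal usage sketch, assuming a single-logit classification head
# (not confirmed by the card). The dtype choice is illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "sahandrez/pairwise-reward-zephyr-7b-sft-qlora-ultrafeedback-binarized-20240925-122042"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=1, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

def score(prompt: str, response: str) -> float:
    """Return a scalar reward for a prompt/response pair."""
    inputs = tokenizer(prompt + response, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

# A higher score should indicate the preferred response.
print(score("What is 2+2?", " 4") > score("What is 2+2?", " 5"))
```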

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
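
For readers reconstructing the setup, the listed values map onto `transformers.TrainingArguments` roughly as below. Anything not listed above (output directory, precision, logging) is an assumption, not taken from the card.

```python
# A reconstruction sketch only: how the listed hyperparameters would map
# onto transformers.TrainingArguments. Unlisted settings are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pairwise-reward-zephyr-7b-sft-qlora",  # assumed, not from the card
    learning_rate=1.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=2,  # 16 * 2 = 32 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    # The Adam betas/epsilon below are the TrainingArguments defaults,
    # matching the values listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```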

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6633        | 0.0526 | 100  | 0.6175          | 0.6929   |
| 0.6204        | 0.1052 | 200  | 0.5689          | 0.7255   |
| 0.5895        | 0.1578 | 300  | 0.5263          | 0.7341   |
| 0.5285        | 0.2104 | 400  | 0.5183          | 0.7431   |
| 0.4919        | 0.2630 | 500  | 0.5192          | 0.7356   |
| 0.5085        | 0.3155 | 600  | 0.5057          | 0.7531   |
| 0.5322        | 0.3681 | 700  | 0.5066          | 0.7486   |
| 0.4976        | 0.4207 | 800  | 0.4962          | 0.7561   |
| 0.549         | 0.4733 | 900  | 0.5012          | 0.7647   |
| 0.5175        | 0.5259 | 1000 | 0.4887          | 0.7587   |
| 0.4525        | 0.5785 | 1100 | 0.4980          | 0.7551   |
| 0.4847        | 0.6311 | 1200 | 0.4848          | 0.7516   |
| 0.5429        | 0.6837 | 1300 | 0.4878          | 0.7481   |
| 0.4348        | 0.7363 | 1400 | 0.4844          | 0.7551   |
| 0.4346        | 0.7889 | 1500 | 0.4848          | 0.7521   |
| 0.513         | 0.8414 | 1600 | 0.4837          | 0.7566   |
| 0.442         | 0.8940 | 1700 | 0.4814          | 0.7561   |
| 0.4531        | 0.9466 | 1800 | 0.4796          | 0.7607   |
| 0.4533        | 0.9992 | 1900 | 0.4788          | 0.7536   |
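
The Loss and Accuracy columns are consistent with the standard pairwise (Bradley–Terry) reward-model objective, where accuracy is the fraction of pairs in which the chosen response outscores the rejected one. The card does not show the evaluation code, so the sketch below is an assumption about how these numbers are computed:

```python
# A minimal sketch of the standard pairwise reward-model objective,
# -log sigmoid(r_chosen - r_rejected); it is an assumption that this
# card's "Loss" and "Accuracy" are computed this way.
import torch
import torch.nn.functional as F

def pairwise_loss_and_accuracy(r_chosen: torch.Tensor, r_rejected: torch.Tensor):
    # Loss: negative log-likelihood that the chosen response wins.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    # Accuracy: fraction of pairs where the chosen response scores higher.
    accuracy = (r_chosen > r_rejected).float().mean()
    return loss.item(), accuracy.item()

loss, acc = pairwise_loss_and_accuracy(
    torch.tensor([1.2, 0.3, 2.0]), torch.tensor([0.4, 0.9, -0.5])
)
print(loss, acc)  # loss ~0.50, accuracy ~0.67 on this toy batch
```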

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1