
Weni/WeniGPT-2.10.1-Zephyr-7B-DPO-prompt-binarized-GPTQ

This model is a fine-tuned version of Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT, trained on the HuggingFaceH4/ultrafeedback_binarized dataset with the DPO trainer. It is part of the WeniGPT project for Weni.

It achieves the following results on the evaluation set (eval_rewards/margins is the difference between the chosen and rejected rewards: 18.7886 - 13.1518 ≈ 5.6368):

  • eval_loss: 1.9277
  • eval_runtime: 94.1066 s
  • eval_samples_per_second: 2.125
  • eval_steps_per_second: 0.531
  • eval_rewards/chosen: 18.7886
  • eval_rewards/rejected: 13.1518
  • eval_rewards/accuracies: 0.535
  • eval_rewards/margins: 5.6368
  • eval_logps/rejected: -329.4025
  • eval_logps/chosen: -334.5129
  • eval_logits/rejected: -2.7462
  • eval_logits/chosen: -2.7368
  • epoch: 7.96

Intended uses & limitations

This model has not been trained to avoid specific instructions.
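
Since the repository ships GPTQ-quantized weights, a minimal inference sketch follows. It assumes transformers' built-in GPTQ integration (a GPTQ backend such as auto-gptq must be installed); it is an illustration, not an official usage example from the project.

```python
# Minimal inference sketch, assuming transformers' GPTQ integration
# (requires a GPTQ backend such as auto-gptq to be installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weni/WeniGPT-2.10.1-Zephyr-7B-DPO-prompt-binarized-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt follows the template documented under "Training procedure" below.
prompt = "<|user|>How do I reset my password?</s>\n<|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```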

Training procedure

Fine-tuning was performed on the model Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT with the following prompt format:

Prompt:
<|user|>{prompt}</s>


Chosen:
<|assistant|>{chosen}</s>


Rejected:
<|assistant|>{rejected}</s>
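
The sketch below shows how this template could map a dataset row onto the (prompt, chosen, rejected) strings expected by DPO-style trainers. The to_dpo_format helper is hypothetical, and it assumes the chosen/rejected fields have already been reduced to plain strings (in HuggingFaceH4/ultrafeedback_binarized they are lists of chat messages); it is not the project's actual preprocessing code.

```python
# Hypothetical preprocessing helper: applies the prompt template above to
# one dataset row. Assumes 'prompt', 'chosen', and 'rejected' are plain
# strings, which may require flattening the original chat-message lists.
def to_dpo_format(row: dict) -> dict:
    return {
        "prompt": f"<|user|>{row['prompt']}</s>\n",
        "chosen": f"<|assistant|>{row['chosen']}</s>\n",
        "rejected": f"<|assistant|>{row['rejected']}</s>\n",
    }
```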

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • gradient_accumulation_steps: 4
  • num_gpus: 1
  • total_train_batch_size: 16
  • optimizer: AdamW
  • lr_scheduler_type: cosine
  • num_steps: 896
  • quantization_type: gptq
  • LoRA:
      • bits: 4
      • use_exllama: True
      • device_map: auto
      • use_cache: False
      • lora_r: 16
      • lora_alpha: 16
      • lora_dropout: 0.05
      • bias: none
      • target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
      • task_type: CAUSAL_LM
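
A hedged sketch of a DPO fine-tuning setup matching the hyperparameters above, using trl's DPOTrainer with a PEFT LoRA config. Exact trainer arguments vary across trl versions (newer releases use DPOConfig and processing_class), and the output_dir is a placeholder; treat this as illustrative, not the project's training script.

```python
# Illustrative DPO fine-tuning setup; argument names follow older trl
# releases where DPOTrainer accepts transformers.TrainingArguments.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT"
model = AutoModelForCausalLM.from_pretrained(
    base_model, device_map="auto", use_cache=False
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA settings from the hyperparameter list above.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimizer defaults to AdamW; effective batch size is 4 * 4 = 16.
training_args = TrainingArguments(
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    max_steps=896,
    lr_scheduler_type="cosine",
    output_dir="./wenigpt-dpo",  # placeholder path
)

dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized")

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train_prefs"],
    eval_dataset=dataset["test_prefs"],
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```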

Training results

Framework versions

Hardware

  • Cloud provider: runpod.io