---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo
    results: []
---

# zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the metrics):

- Loss: 0.0327
- Rewards/chosen: -0.1258
- Rewards/rejected: -0.4253
- Rewards/accuracies: 0.7543
- Rewards/margins: 0.2995
- Logps/rejected: -289.0509
- Logps/chosen: -297.6692
- Logits/rejected: -1.4762
- Logits/chosen: -1.6863
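
The card omits a usage example, so here is a minimal loading sketch. The repo id is an assumption taken from the model-index name above, and the prompt format assumes the standard Zephyr chat template shipped with the checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo"  # assumed Hub path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Zephyr checkpoints ship a chat template; apply_chat_template builds the prompt.
messages = [{"role": "user", "content": "Summarize DPO training in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```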

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto TRL follows the list):

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
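
For readers who want to reproduce a similar run, below is a minimal, non-authoritative sketch of how these values map onto TRL's `DPOConfig`/`DPOTrainer`. The dataset, `bf16`, and `rpo_alpha` values are assumptions (the card does not state them; the "rpo" suffix in the model name only suggests an RPO-style loss), and argument names vary slightly across TRL versions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumed dataset: the card does not name one; UltraFeedback is a common
# choice for Zephyr DPO runs but is NOT confirmed by this card.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo",
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # 8 per GPU x 8 GPUs x 2 accum steps = 128 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    bf16=True,                      # assumed; precision is not stated in the card
    rpo_alpha=1.0,                  # assumed from the "rpo" model-name suffix
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,        # TRL clones the policy as the frozen reference model
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,   # newer TRL versions name this processing_class
)
trainer.train()
```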

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0487        | 0.1147 | 50   | 0.0455          | 0.0300         | -0.0845          | 0.7026             | 0.1145          | -254.9693      | -282.0912    | -2.4928         | -2.5717       |
| 0.0408        | 0.2294 | 100  | 0.0391          | -0.1071        | -0.3089          | 0.6897             | 0.2018          | -277.4089      | -295.8004    | -1.7416         | -1.8404       |
| 0.0379        | 0.3440 | 150  | 0.0365          | -0.1406        | -0.4136          | 0.7155             | 0.2730          | -287.8874      | -299.1519    | -1.6357         | -1.7904       |
| 0.0365        | 0.4587 | 200  | 0.0350          | -0.0650        | -0.3264          | 0.7543             | 0.2614          | -279.1631      | -291.5866    | -1.7552         | -1.9178       |
| 0.0346        | 0.5734 | 250  | 0.0337          | -0.1319        | -0.4539          | 0.7543             | 0.3220          | -291.9156      | -298.2828    | -1.4871         | -1.7192       |
| 0.036         | 0.6881 | 300  | 0.0331          | -0.1336        | -0.4291          | 0.75               | 0.2955          | -289.4286      | -298.4504    | -1.4842         | -1.6835       |
| 0.0359        | 0.8028 | 350  | 0.0327          | -0.1378        | -0.4472          | 0.7586             | 0.3094          | -291.2399      | -298.8666    | -1.4658         | -1.6786       |
| 0.0351        | 0.9174 | 400  | 0.0327          | -0.1258        | -0.4253          | 0.7543             | 0.2995          | -289.0509      | -297.6692    | -1.4762         | -1.6863       |
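
As context (assuming the standard DPO parameterization, which the card does not state explicitly), the `Rewards/*` columns are implicit rewards computed from policy and reference log-probabilities:

$$
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\text{ref}}(y \mid x) \right)
$$

Under this reading, `Rewards/margins` is `Rewards/chosen` minus `Rewards/rejected` (final row: -0.1258 - (-0.4253) = 0.2995), and `Rewards/accuracies` is the fraction of preference pairs where the chosen response's implicit reward exceeds the rejected one's.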

### Framework versions

- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1