---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: Summary_L3_1000steps_1e6rate_03beta_CSFTDPO
    results: []
---

# Summary_L3_1000steps_1e6rate_03beta_CSFTDPO

This model is a DPO fine-tune of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset. It achieves the following results on the evaluation set (a note on reading these DPO metrics follows the list):

- Loss: 0.5961
- Rewards/chosen: 0.0294
- Rewards/rejected: -2.5656
- Rewards/accuracies: 0.1400
- Rewards/margins: 2.5950
- Logps/rejected: -23.8158
- Logps/chosen: -9.2849
- Logits/rejected: -1.1435
- Logits/chosen: -1.1436
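
These metrics follow trl's DPO logging: Rewards/chosen and Rewards/rejected are the beta-scaled differences in log-probability between the policy and the frozen reference model on the chosen and rejected completions, and Rewards/margins is simply their difference. A quick sanity check against the numbers above:

```python
# Margin should equal the chosen reward minus the rejected reward
# (values copied from the evaluation results in this card).
rewards_chosen = 0.0294
rewards_rejected = -2.5656
print(f"{rewards_chosen - rewards_rejected:.4f}")  # 2.5950, matching Rewards/margins
```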

## Model description

More information needed
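
Pending fuller documentation, here is a minimal inference sketch. It assumes only the standard transformers causal-LM loading path for this Llama-3-based checkpoint; the prompt format is a guess, since the card does not document one.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/Summary_L3_1000steps_1e6rate_03beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

# Hypothetical prompt: the expected format is not documented in this card.
prompt = "Summarize the following text:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```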

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto a trl DPO run follows the list):

- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
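
For reference, this sketch shows how the hyperparameters above could map onto a trl DPOTrainer run. It is a reconstruction, not the author's script: the trl version is not recorded in this card (a release providing DPOConfig, e.g. trl >= 0.9, is assumed), the preference dataset is unknown and appears as a placeholder, and beta=0.3 is inferred from the "03beta" in the model name.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/Summary_L3_1000steps_1e7rate_SFT2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the card does not name the preference dataset.
train_dataset = load_dataset("some/preference-dataset", split="train")

config = DPOConfig(
    output_dir="Summary_L3_1000steps_1e6rate_03beta_CSFTDPO",
    beta=0.3,                        # inferred from "03beta" in the model name
    learning_rate=1e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,   # effective train batch size of 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl builds the frozen reference copy when omitted
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```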

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5553        | 0.2004 | 50   | 0.5962          | 0.0778         | -1.2696          | 0.1400             | 1.3473          | -19.4956       | -9.1236      | -1.1038         | -1.1053       |
| 0.6585        | 0.4008 | 100  | 0.5962          | 0.0854         | -1.4439          | 0.1400             | 1.5292          | -20.0766       | -9.0982      | -1.1078         | -1.1092       |
| 0.6238        | 0.6012 | 150  | 0.5961          | 0.0687         | -2.1556          | 0.1400             | 2.2243          | -22.4490       | -9.1538      | -1.1298         | -1.1306       |
| 0.6065        | 0.8016 | 200  | 0.5961          | 0.0322         | -2.5726          | 0.1400             | 2.6048          | -23.8390       | -9.2754      | -1.1437         | -1.1438       |
| 0.6238        | 1.0020 | 250  | 0.5961          | 0.0294         | -2.5678          | 0.1400             | 2.5971          | -23.8230       | -9.2849      | -1.1438         | -1.1440       |
| 0.6238        | 1.2024 | 300  | 0.5961          | 0.0279         | -2.5674          | 0.1400             | 2.5953          | -23.8219       | -9.2899      | -1.1439         | -1.1440       |
| 0.6238        | 1.4028 | 350  | 0.5961          | 0.0304         | -2.5648          | 0.1400             | 2.5952          | -23.8131       | -9.2814      | -1.1438         | -1.1439       |
| 0.5718        | 1.6032 | 400  | 0.5961          | 0.0304         | -2.5648          | 0.1400             | 2.5952          | -23.8131       | -9.2814      | -1.1438         | -1.1439       |
| 0.5892        | 1.8036 | 450  | 0.5961          | 0.0338         | -2.5715          | 0.1400             | 2.6052          | -23.8353       | -9.2702      | -1.1435         | -1.1436       |
| 0.5718        | 2.0040 | 500  | 0.5961          | 0.0279         | -2.5720          | 0.1400             | 2.5999          | -23.8372       | -9.2897      | -1.1434         | -1.1435       |
| 0.5718        | 2.2044 | 550  | 0.5961          | 0.0266         | -2.5750          | 0.1400             | 2.6016          | -23.8472       | -9.2942      | -1.1438         | -1.1440       |
| 0.5545        | 2.4048 | 600  | 0.5961          | 0.0271         | -2.5761          | 0.1400             | 2.6032          | -23.8507       | -9.2925      | -1.1438         | -1.1440       |
| 0.5199        | 2.6052 | 650  | 0.5961          | 0.0271         | -2.5761          | 0.1400             | 2.6032          | -23.8507       | -9.2925      | -1.1438         | -1.1440       |
| 0.6238        | 2.8056 | 700  | 0.5961          | 0.0270         | -2.5764          | 0.1400             | 2.6035          | -23.8519       | -9.2928      | -1.1438         | -1.1440       |
| 0.6065        | 3.0060 | 750  | 0.5961          | 0.0315         | -2.5674          | 0.1400             | 2.5989          | -23.8216       | -9.2777      | -1.1434         | -1.1436       |
| 0.6412        | 3.2064 | 800  | 0.5961          | 0.0276         | -2.5662          | 0.1400             | 2.5937          | -23.8176       | -9.2909      | -1.1434         | -1.1436       |
| 0.6585        | 3.4068 | 850  | 0.5961          | 0.0277         | -2.5666          | 0.1400             | 2.5943          | -23.8191       | -9.2903      | -1.1434         | -1.1436       |
| 0.6238        | 3.6072 | 900  | 0.5961          | 0.0281         | -2.5670          | 0.1400             | 2.5952          | -23.8205       | -9.2891      | -1.1434         | -1.1436       |
| 0.5372        | 3.8076 | 950  | 0.5961          | 0.0310         | -2.5656          | 0.1400             | 2.5966          | -23.8159       | -9.2795      | -1.1435         | -1.1436       |
| 0.6238        | 4.0080 | 1000 | 0.5961          | 0.0294         | -2.5656          | 0.1400             | 2.5950          | -23.8158       | -9.2849      | -1.1435         | -1.1436       |

### Framework versions

- Transformers 4.41.2
- PyTorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1