# chat_1000_STEPS_01beta_1e7rate_CDPOSFT
This model is a fine-tuned version of [tsavage68/chat_600STEPS_1e8rate_SFT](https://huggingface.co/tsavage68/chat_600STEPS_1e8rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6923
- Rewards/chosen: -0.0014
- Rewards/rejected: -0.0031
- Rewards/accuracies: 0.4352
- Rewards/margins: 0.0018
- Logps/rejected: -18.8334
- Logps/chosen: -16.7684
- Logits/rejected: -0.5994
- Logits/chosen: -0.5993
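
The validation loss hovering at roughly 0.693 (ln 2) is what a DPO-style objective produces when the reward margin is near zero, and the reward accuracy around chance (0.4352) points the same way. Assuming the standard DPO loss with the β = 0.1 suggested by the "01beta" in the model name (an assumption; the card does not state the objective):

$$
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\bigl(r_{\text{chosen}} - r_{\text{rejected}}\bigr),
\qquad r_y = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
$$

Plugging in the final reward margin of 0.0018 gives $-\log \sigma(0.0018) \approx 0.6923$, which matches the reported validation loss.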
## Model description
More information needed
## Intended uses & limitations
More information needed
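
Although the card leaves this section blank, a minimal inference sketch is shown below. The Hub repo id is inferred from the card title, and the prompt and generation settings are illustrative assumptions, not values from the card.

```python
# Minimal inference sketch, assuming the checkpoint is a causal LM hosted on the
# Hub under the card's title (an assumption; the card does not state the repo id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/chat_1000_STEPS_01beta_1e7rate_CDPOSFT"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```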
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
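
As a point of reference, the sketch below shows how these settings might map onto a TRL `DPOTrainer` run. The preference dataset, checkpoint paths, and β = 0.1 (read from "01beta" in the model name) are assumptions, and the exact `DPOTrainer` signature varies across TRL versions.

```python
# Hedged sketch: maps the listed hyperparameters onto a TRL-style DPO run.
# The preference dataset and beta are assumptions; TRL's API differs by version.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/chat_600STEPS_1e8rate_SFT"  # base checkpoint named in the card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the card does not say which preference dataset was used.
train_dataset = load_dataset("json", data_files="preference_pairs.json")["train"]

args = TrainingArguments(
    output_dir="chat_1000_STEPS_01beta_1e7rate_CDPOSFT",
    learning_rate=1e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # 4 x 2 = total train batch size 8
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model=model,               # the policy being trained
    ref_model=None,            # TRL clones the policy as the frozen reference
    args=args,
    beta=0.1,                  # assumed from "01beta" in the model name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```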
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6944 | 0.0977 | 50 | 0.6937 | -0.0002 | 0.0007 | 0.3846 | -0.0010 | -18.7946 | -16.7570 | -0.5974 | -0.5972 |
| 0.6929 | 0.1953 | 100 | 0.6932 | -0.0013 | -0.0013 | 0.4352 | 0.0000 | -18.8149 | -16.7673 | -0.5987 | -0.5985 |
| 0.6937 | 0.2930 | 150 | 0.6929 | -0.0008 | -0.0013 | 0.4242 | 0.0005 | -18.8152 | -16.7631 | -0.5980 | -0.5979 |
| 0.6909 | 0.3906 | 200 | 0.6929 | -0.0011 | -0.0016 | 0.4110 | 0.0005 | -18.8177 | -16.7654 | -0.5980 | -0.5979 |
| 0.6939 | 0.4883 | 250 | 0.6925 | -0.0009 | -0.0022 | 0.4527 | 0.0013 | -18.8240 | -16.7635 | -0.5982 | -0.5981 |
| 0.6914 | 0.5859 | 300 | 0.6925 | -0.0020 | -0.0035 | 0.4308 | 0.0014 | -18.8366 | -16.7748 | -0.5990 | -0.5989 |
| 0.6922 | 0.6836 | 350 | 0.6926 | -0.0031 | -0.0043 | 0.4527 | 0.0012 | -18.8453 | -16.7857 | -0.5985 | -0.5984 |
| 0.6926 | 0.7812 | 400 | 0.6924 | -0.0021 | -0.0036 | 0.4440 | 0.0015 | -18.8380 | -16.7757 | -0.5992 | -0.5991 |
| 0.6912 | 0.8789 | 450 | 0.6922 | -0.0021 | -0.0041 | 0.4615 | 0.0021 | -18.8432 | -16.7752 | -0.5984 | -0.5982 |
| 0.6918 | 0.9766 | 500 | 0.6921 | -0.0018 | -0.0040 | 0.4418 | 0.0022 | -18.8422 | -16.7723 | -0.5986 | -0.5985 |
| 0.69 | 1.0742 | 550 | 0.6918 | -0.0017 | -0.0045 | 0.4637 | 0.0028 | -18.8469 | -16.7718 | -0.5988 | -0.5987 |
| 0.6882 | 1.1719 | 600 | 0.6923 | -0.0013 | -0.0031 | 0.4659 | 0.0018 | -18.8330 | -16.7675 | -0.5994 | -0.5993 |
| 0.6887 | 1.2695 | 650 | 0.6924 | -0.0019 | -0.0036 | 0.4308 | 0.0016 | -18.8375 | -16.7739 | -0.5988 | -0.5987 |
| 0.6886 | 1.3672 | 700 | 0.6918 | -0.0003 | -0.0030 | 0.4549 | 0.0028 | -18.8325 | -16.7572 | -0.5991 | -0.5989 |
| 0.6876 | 1.4648 | 750 | 0.6919 | -0.0005 | -0.0031 | 0.4725 | 0.0026 | -18.8327 | -16.7592 | -0.5994 | -0.5993 |
| 0.6921 | 1.5625 | 800 | 0.6914 | -0.0001 | -0.0038 | 0.4725 | 0.0037 | -18.8396 | -16.7556 | -0.5994 | -0.5992 |
| 0.6882 | 1.6602 | 850 | 0.6920 | -0.0006 | -0.0029 | 0.4945 | 0.0023 | -18.8307 | -16.7602 | -0.5996 | -0.5994 |
| 0.69 | 1.7578 | 900 | 0.6920 | -0.0010 | -0.0033 | 0.4505 | 0.0023 | -18.8350 | -16.7647 | -0.5995 | -0.5993 |
| 0.6888 | 1.8555 | 950 | 0.6923 | -0.0014 | -0.0032 | 0.4352 | 0.0018 | -18.8340 | -16.7686 | -0.5994 | -0.5993 |
| 0.6878 | 1.9531 | 1000 | 0.6923 | -0.0014 | -0.0031 | 0.4352 | 0.0018 | -18.8334 | -16.7684 | -0.5994 | -0.5993 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1