|
--- |
|
base_model: tsavage68/chat_600STEPS_1e8rate_SFT |
|
tags: |
|
- trl |
|
- dpo |
|
- generated_from_trainer |
|
model-index: |
|
- name: chat_1000_STEPS_05beta_1e7rate_CDPOSFT |
|
results: [] |
|
--- |
|
|
|
|
|
|
# chat_1000_STEPS_05beta_1e7rate_CDPOSFT |
|
|
|
This model is a DPO fine-tuned version of [tsavage68/chat_600STEPS_1e8rate_SFT](https://huggingface.co/tsavage68/chat_600STEPS_1e8rate_SFT) on an unknown dataset.

It achieves the following results on the evaluation set (the reward metrics are defined below the list):
|
- Loss: 0.6899 |
|
- Rewards/chosen: -0.0048 |
|
- Rewards/rejected: -0.0138 |
|
- Rewards/accuracies: 0.4527 |
|
- Rewards/margins: 0.0090 |
|
- Logps/rejected: -18.8295 |
|
- Logps/chosen: -16.7641 |
|
- Logits/rejected: -0.5988 |
|
- Logits/chosen: -0.5987 |
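
For reference, the reward columns above are TRL's implicit DPO rewards: the β-scaled log-probability ratio between the trained policy and the frozen reference model. The "05beta" in the model name suggests β = 0.5, though that is an inference from the name, not a confirmed value:

$$ r(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right) $$

Rewards/margins is the mean difference between Rewards/chosen and Rewards/rejected, and Rewards/accuracies is the fraction of evaluation pairs in which the chosen completion receives the higher reward.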
|
|
|
## Model description |
|
|
|
More information needed |
|
|
|
## Intended uses & limitations |
|
|
|
More information needed |
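
No usage example was supplied by the Trainer. The sketch below is a minimal, unverified way to load the model for inference, assuming it is a standard causal language model and that the repository id matches the model name above; the prompt is a placeholder and may need to follow the chat template used during SFT.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, derived from the model name above.
model_id = "tsavage68/chat_1000_STEPS_05beta_1e7rate_CDPOSFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is direct preference optimization?"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```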
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a sketch of the corresponding TRL setup follows the list):
|
- learning_rate: 1e-07 |
|
- train_batch_size: 4 |
|
- eval_batch_size: 1 |
|
- seed: 42 |
|
- gradient_accumulation_steps: 2 |
|
- total_train_batch_size: 8 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_steps: 100 |
|
- training_steps: 1000 |
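
The list above maps onto a TRL DPO run roughly as sketched below. This is a reconstruction, not the original training script: the TRL version is unknown (the sketch uses the older `DPOTrainer(..., beta=...)` signature), the preference dataset is a placeholder, and `beta=0.5` is inferred from the "05beta" in the model name.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/chat_600STEPS_1e8rate_SFT"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)      # policy to optimize
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen DPO reference

# Placeholder: the actual preference dataset is not documented.
dataset = load_dataset("your/preference-dataset", split="train")

args = TrainingArguments(
    output_dir="chat_1000_STEPS_05beta_1e7rate_CDPOSFT",
    learning_rate=1e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # effective train batch size of 8
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.5,  # assumed from "05beta" in the model name
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```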
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |
|
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| |
|
| 0.6929 | 0.0977 | 50 | 0.6947 | -0.0000 | 0.0016 | 0.4066 | -0.0016 | -18.7989 | -16.7547 | -0.5985 | -0.5983 | |
|
| 0.694 | 0.1953 | 100 | 0.6903 | 0.0030 | -0.0047 | 0.4527 | 0.0076 | -18.8113 | -16.7487 | -0.5976 | -0.5975 | |
|
| 0.6922 | 0.2930 | 150 | 0.6941 | -0.0056 | -0.0053 | 0.4044 | -0.0003 | -18.8127 | -16.7659 | -0.5978 | -0.5977 | |
|
| 0.7012 | 0.3906 | 200 | 0.6957 | -0.0099 | -0.0065 | 0.4132 | -0.0034 | -18.8151 | -16.7744 | -0.5982 | -0.5980 | |
|
| 0.6992 | 0.4883 | 250 | 0.6932 | -0.0081 | -0.0099 | 0.4484 | 0.0017 | -18.8217 | -16.7709 | -0.5975 | -0.5974 | |
|
| 0.6872 | 0.5859 | 300 | 0.6918 | -0.0096 | -0.0144 | 0.4440 | 0.0048 | -18.8309 | -16.7738 | -0.5990 | -0.5989 | |
|
| 0.6875 | 0.6836 | 350 | 0.6894 | -0.0116 | -0.0209 | 0.4484 | 0.0093 | -18.8438 | -16.7778 | -0.5985 | -0.5984 | |
|
| 0.6918 | 0.7812 | 400 | 0.6878 | -0.0070 | -0.0200 | 0.4462 | 0.0129 | -18.8419 | -16.7687 | -0.5987 | -0.5985 | |
|
| 0.6868 | 0.8789 | 450 | 0.6897 | -0.0052 | -0.0141 | 0.4396 | 0.0089 | -18.8302 | -16.7651 | -0.5982 | -0.5981 | |
|
| 0.6867 | 0.9766 | 500 | 0.6904 | -0.0080 | -0.0160 | 0.4176 | 0.0080 | -18.8339 | -16.7706 | -0.5988 | -0.5987 | |
|
| 0.6744 | 1.0742 | 550 | 0.6883 | -0.0035 | -0.0157 | 0.4527 | 0.0123 | -18.8334 | -16.7616 | -0.5985 | -0.5984 | |
|
| 0.6791 | 1.1719 | 600 | 0.6897 | -0.0033 | -0.0127 | 0.4484 | 0.0094 | -18.8275 | -16.7612 | -0.5988 | -0.5987 | |
|
| 0.6793 | 1.2695 | 650 | 0.6887 | -0.0077 | -0.0191 | 0.4418 | 0.0114 | -18.8402 | -16.7700 | -0.5985 | -0.5983 | |
|
| 0.6696 | 1.3672 | 700 | 0.6863 | -0.0015 | -0.0176 | 0.4527 | 0.0161 | -18.8372 | -16.7576 | -0.5988 | -0.5986 | |
|
| 0.6689 | 1.4648 | 750 | 0.6873 | -0.0024 | -0.0167 | 0.4593 | 0.0143 | -18.8353 | -16.7594 | -0.5983 | -0.5982 | |
|
| 0.6808 | 1.5625 | 800 | 0.6879 | -0.0050 | -0.0179 | 0.4637 | 0.0129 | -18.8378 | -16.7646 | -0.5992 | -0.5991 | |
|
| 0.6718 | 1.6602 | 850 | 0.6902 | -0.0058 | -0.0139 | 0.4462 | 0.0082 | -18.8299 | -16.7662 | -0.5985 | -0.5984 | |
|
| 0.678 | 1.7578 | 900 | 0.6872 | -0.0008 | -0.0151 | 0.4571 | 0.0144 | -18.8323 | -16.7562 | -0.5989 | -0.5988 | |
|
| 0.6745 | 1.8555 | 950 | 0.6899 | -0.0048 | -0.0138 | 0.4527 | 0.0090 | -18.8295 | -16.7641 | -0.5988 | -0.5987 | |
|
| 0.6759 | 1.9531 | 1000 | 0.6899 | -0.0048 | -0.0138 | 0.4527 | 0.0090 | -18.8295 | -16.7641 | -0.5988 | -0.5987 | |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.40.1 |
|
- PyTorch 2.0.0+cu117
|
- Datasets 2.19.1 |
|
- Tokenizers 0.19.1 |
|
|