# IE_M2_1000steps_1e8rate_03beta_cSFTDPO
This model is a fine-tuned version of [tsavage68/IE_M2_1000steps_1e7rate_SFT](https://huggingface.co/tsavage68/IE_M2_1000steps_1e7rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set (a sanity check of the reward arithmetic follows the list):
- Loss: 0.6337
- Rewards/chosen: -0.0061
- Rewards/rejected: -0.1351
- Rewards/accuracies: 0.4600
- Rewards/margins: 0.1290
- Logps/rejected: -41.4720
- Logps/chosen: -42.2258
- Logits/rejected: -2.9153
- Logits/chosen: -2.8540
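
The `Rewards/*` columns follow the DPO convention: the implicit reward for a completion is the β-scaled log-probability ratio between the policy and the reference model, and the margin is chosen minus rejected. Assuming the standard sigmoid DPO loss `-log σ(margin)` (an assumption; the card does not state the loss variant), the reported margin roughly reproduces the reported loss:

```python
# Sanity-check sketch (assumes the standard sigmoid DPO loss, which this
# card does not state explicitly): the loss is -log(sigmoid(margin)), where
# margin = Rewards/chosen - Rewards/rejected is already beta-scaled.
import math

chosen, rejected = -0.0061, -0.1351        # Rewards/chosen, Rewards/rejected above
margin = chosen - rejected                 # 0.1290, matching Rewards/margins
loss = -math.log(1.0 / (1.0 + math.exp(-margin)))
print(f"{loss:.4f}")                       # ~0.6307
```

The small gap to the reported 0.6337 is expected: the evaluation loss averages `-log σ(margin)` over examples, which differs from `-log σ` of the averaged margin.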
## Model description
More information needed
## Intended uses & limitations
More information needed
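
No intended use is documented, but the checkpoint loads as a standard causal LM. A minimal inference sketch (the prompt is a placeholder; the `[INST]` format is assumed from the Mistral-Instruct base model):

```python
# Minimal inference sketch (not from the model author): loads the checkpoint
# as an ordinary causal LM and generates greedily.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tsavage68/IE_M2_1000steps_1e8rate_03beta_cSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

prompt = "[INST] Your instruction here [/INST]"  # placeholder; the task is undocumented
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```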
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto a trl `DPOTrainer` call):
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
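
For reference, a sketch of how these hyperparameters could be wired into trl's `DPOTrainer`. This is not the author's script: the dataset is unknown, so a one-row dummy preference set stands in; β = 0.3 is inferred from "03beta" in the model name; and the API shown is the trl 0.9.x-era one that pairs with Transformers 4.44.

```python
# Hypothetical reconstruction of the training setup from the listed
# hyperparameters; dataset contents and beta are assumptions.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_repo = "tsavage68/IE_M2_1000steps_1e7rate_SFT"
model = AutoModelForCausalLM.from_pretrained(sft_repo)
tokenizer = AutoTokenizer.from_pretrained(sft_repo)

# Placeholder preference data; the real training set is not documented.
train_dataset = Dataset.from_dict({
    "prompt": ["[INST] example prompt [/INST]"],
    "chosen": ["preferred completion"],
    "rejected": ["dispreferred completion"],
})

args = DPOConfig(
    output_dir="IE_M2_1000steps_1e8rate_03beta_cSFTDPO",
    beta=0.3,                        # assumed from "03beta" in the model name
    learning_rate=1e-8,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # 2 x 2 = total train batch size 4
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = DPOTrainer(
    model=model,                     # ref model is cloned internally when omitted
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```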
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6998 | 0.4 | 50 | 0.6949 | 0.0058 | 0.0085 | 0.2050 | -0.0028 | -40.9934 | -42.1863 | -2.9160 | -2.8547 |
| 0.6925 | 0.8 | 100 | 0.6906 | 0.0017 | -0.0041 | 0.2600 | 0.0059 | -41.0355 | -42.1997 | -2.9159 | -2.8546 |
| 0.6741 | 1.2 | 150 | 0.6750 | -0.0010 | -0.0391 | 0.3750 | 0.0381 | -41.1523 | -42.2090 | -2.9154 | -2.8542 |
| 0.6585 | 1.6 | 200 | 0.6623 | -0.0019 | -0.0668 | 0.4300 | 0.0649 | -41.2446 | -42.2118 | -2.9155 | -2.8542 |
| 0.657 | 2.0 | 250 | 0.6474 | 0.0017 | -0.0959 | 0.4550 | 0.0976 | -41.3415 | -42.1999 | -2.9156 | -2.8543 |
| 0.6613 | 2.4 | 300 | 0.6405 | -0.0071 | -0.1204 | 0.4600 | 0.1133 | -41.4230 | -42.2291 | -2.9154 | -2.8540 |
| 0.6445 | 2.8 | 350 | 0.6394 | -0.0035 | -0.1196 | 0.4550 | 0.1161 | -41.4205 | -42.2173 | -2.9151 | -2.8538 |
| 0.6464 | 3.2 | 400 | 0.6368 | -0.0015 | -0.1235 | 0.4550 | 0.1220 | -41.4335 | -42.2105 | -2.9152 | -2.8540 |
| 0.6408 | 3.6 | 450 | 0.6354 | -0.0022 | -0.1277 | 0.4550 | 0.1255 | -41.4475 | -42.2130 | -2.9155 | -2.8542 |
| 0.6526 | 4.0 | 500 | 0.6336 | -0.0017 | -0.1309 | 0.4600 | 0.1293 | -41.4583 | -42.2112 | -2.9154 | -2.8541 |
| 0.6218 | 4.4 | 550 | 0.6340 | -0.0033 | -0.1314 | 0.4600 | 0.1282 | -41.4599 | -42.2164 | -2.9153 | -2.8539 |
| 0.627 | 4.8 | 600 | 0.6351 | -0.0035 | -0.1294 | 0.4550 | 0.1259 | -41.4532 | -42.2173 | -2.9153 | -2.8540 |
| 0.6447 | 5.2 | 650 | 0.6341 | -0.0023 | -0.1304 | 0.4600 | 0.1281 | -41.4564 | -42.2130 | -2.9155 | -2.8542 |
| 0.6443 | 5.6 | 700 | 0.6331 | -0.0066 | -0.1368 | 0.4600 | 0.1303 | -41.4779 | -42.2274 | -2.9153 | -2.8540 |
| 0.6333 | 6.0 | 750 | 0.6355 | -0.0057 | -0.1308 | 0.4550 | 0.1251 | -41.4578 | -42.2246 | -2.9151 | -2.8538 |
| 0.6042 | 6.4 | 800 | 0.6352 | -0.0005 | -0.1265 | 0.4550 | 0.1259 | -41.4434 | -42.2073 | -2.9152 | -2.8539 |
| 0.6503 | 6.8 | 850 | 0.6338 | -0.0058 | -0.1347 | 0.4600 | 0.1289 | -41.4707 | -42.2247 | -2.9153 | -2.8540 |
| 0.6237 | 7.2 | 900 | 0.6337 | -0.0061 | -0.1351 | 0.4600 | 0.1290 | -41.4720 | -42.2258 | -2.9153 | -2.8540 |
| 0.6269 | 7.6 | 950 | 0.6337 | -0.0061 | -0.1351 | 0.4600 | 0.1290 | -41.4720 | -42.2258 | -2.9153 | -2.8540 |
| 0.6276 | 8.0 | 1000 | 0.6337 | -0.0061 | -0.1351 | 0.4600 | 0.1290 | -41.4720 | -42.2258 | -2.9153 | -2.8540 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.0.0+cu117
- Datasets 3.0.0
- Tokenizers 0.19.1
## Model tree

- Base model: mistralai/Mistral-7B-Instruct-v0.2
- Fine-tuned from: tsavage68/IE_M2_1000steps_1e7rate_SFT