chchen committed on
Commit bce554d
1 Parent(s): d85b286

Model save

Files changed (2)
  1. README.md +20 -16
  2. trainer_log.jsonl +19 -0
README.md CHANGED
@@ -2,10 +2,10 @@
  license: apache-2.0
  library_name: peft
  tags:
- - llama-factory
- - lora
  - trl
  - dpo
+ - llama-factory
+ - lora
  - generated_from_trainer
  base_model: mistralai/Mistral-7B-Instruct-v0.2
  model-index:
@@ -18,19 +18,19 @@ should probably proofread and complete it, then remove this comment. -->
 
  # Mistral-7B-Instruct-v0.2-ORPO
 
- This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the dpo_mix_en dataset.
+ This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.1249
- - Rewards/chosen: -0.1062
- - Rewards/rejected: -0.1197
- - Rewards/accuracies: 0.4400
- - Rewards/margins: 0.0135
- - Logps/rejected: -1.1975
- - Logps/chosen: -1.0620
- - Logits/rejected: -2.6819
- - Logits/chosen: -2.6777
- - Sft Loss: 1.0620
- - Odds Ratio Loss: 0.6295
+ - Loss: 0.8975
+ - Rewards/chosen: -0.0835
+ - Rewards/rejected: -0.1074
+ - Rewards/accuracies: 0.5900
+ - Rewards/margins: 0.0238
+ - Logps/rejected: -1.0737
+ - Logps/chosen: -0.8352
+ - Logits/rejected: -2.8721
+ - Logits/chosen: -2.8461
+ - Sft Loss: 0.8352
+ - Odds Ratio Loss: 0.6231
 
  ## Model description
 
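The metric columns above decompose linearly: in ORPO the eval `Loss` is the SFT negative log-likelihood on the chosen responses plus a weighted odds-ratio penalty, and the reward columns are scaled average log-probs. A minimal sanity-check sketch, assuming the convention `loss = sft_loss + beta * odds_ratio_loss` with `beta = 0.1` (the beta value is an assumption and is not stated in this card):

```python
# Check the reported eval metrics under the assumed ORPO bookkeeping:
#   Loss = Sft Loss + beta * Odds Ratio Loss, and Rewards/* = beta * Logps/*.
# beta = 0.1 is assumed (a common LLaMA-Factory default), not read from the card.
beta = 0.1

sft_loss = 0.8352          # reported Sft Loss (equals -Logps/chosen)
odds_ratio_loss = 0.6231   # reported Odds Ratio Loss
logps_chosen = -0.8352     # reported Logps/chosen (length-averaged log-prob)

print(round(sft_loss + beta * odds_ratio_loss, 4))  # 0.8975 -> reported Loss
print(round(beta * logps_chosen, 4))                # -0.0835 -> Rewards/chosen
```

The same identity holds for every row of the training-results table below (e.g. 0.8690 + 0.1 * 0.6284 = 0.9318), which is a quick way to confirm the log came from an ORPO run rather than plain DPO.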
@@ -51,7 +51,7 @@ More information needed
  The following hyperparameters were used during training:
  - learning_rate: 5e-06
  - train_batch_size: 2
- - eval_batch_size: 4
+ - eval_batch_size: 2
  - seed: 42
  - gradient_accumulation_steps: 8
  - total_train_batch_size: 16
@@ -59,10 +59,14 @@ The following hyperparameters were used during training:
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 0.1
  - num_epochs: 3.0
- - mixed_precision_training: Native AMP
 
  ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
+ | 1.0001 | 0.8891 | 500 | 0.9318 | -0.0869 | -0.1112 | 0.5920 | 0.0243 | -1.1123 | -0.8690 | -2.8936 | -2.8713 | 0.8690 | 0.6284 |
+ | 0.906 | 1.7782 | 1000 | 0.9039 | -0.0841 | -0.1081 | 0.5780 | 0.0240 | -1.0811 | -0.8415 | -2.8783 | -2.8533 | 0.8415 | 0.6243 |
+ | 0.9019 | 2.6673 | 1500 | 0.8975 | -0.0835 | -0.1074 | 0.5900 | 0.0238 | -1.0737 | -0.8352 | -2.8721 | -2.8461 | 0.8352 | 0.6231 |
 
 
  ### Framework versions
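
Because the card declares `library_name: peft` and the `lora` tag, the saved artifact is a LoRA adapter rather than full weights, so inference loads the base model first. A minimal sketch following the standard PEFT pattern; the adapter repo id below is inferred from the committer and model name and may need to be replaced with the actual one:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "chchen/Mistral-7B-Instruct-v0.2-ORPO"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# Mistral-Instruct expects the [INST] ... [/INST] chat format.
prompt = "[INST] Summarize what ORPO training optimizes. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

`model.merge_and_unload()` can fold the adapter into the base weights if a standalone checkpoint is preferred.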
trainer_log.jsonl CHANGED
@@ -151,3 +151,22 @@
  {"current_steps": 1490, "total_steps": 1686, "loss": 0.8593, "accuracy": 0.625, "learning_rate": 1.6490167940538343e-07, "epoch": 2.6494776617026004, "percentage": 88.37, "elapsed_time": "4:10:52", "remaining_time": "0:33:00"}
  {"current_steps": 1500, "total_steps": 1686, "loss": 0.9019, "accuracy": 0.5874999761581421, "learning_rate": 1.4866882516191339e-07, "epoch": 2.6672593909757722, "percentage": 88.97, "elapsed_time": "4:12:33", "remaining_time": "0:31:18"}
  {"current_steps": 1500, "total_steps": 1686, "eval_loss": 0.8975116014480591, "epoch": 2.6672593909757722, "percentage": 88.97, "elapsed_time": "4:15:42", "remaining_time": "0:31:42"}
+ {"current_steps": 1510, "total_steps": 1686, "loss": 0.8823, "accuracy": 0.5874999761581421, "learning_rate": 1.3325243551706057e-07, "epoch": 2.685041120248944, "percentage": 89.56, "elapsed_time": "4:17:20", "remaining_time": "0:29:59"}
+ {"current_steps": 1520, "total_steps": 1686, "loss": 0.8691, "accuracy": 0.5874999761581421, "learning_rate": 1.1865786358165737e-07, "epoch": 2.702822849522116, "percentage": 90.15, "elapsed_time": "4:18:56", "remaining_time": "0:28:16"}
+ {"current_steps": 1530, "total_steps": 1686, "loss": 0.9517, "accuracy": 0.6187499761581421, "learning_rate": 1.0489017710262311e-07, "epoch": 2.720604578795288, "percentage": 90.75, "elapsed_time": "4:20:35", "remaining_time": "0:26:34"}
+ {"current_steps": 1540, "total_steps": 1686, "loss": 0.8916, "accuracy": 0.5625, "learning_rate": 9.195415670326446e-08, "epoch": 2.73838630806846, "percentage": 91.34, "elapsed_time": "4:22:15", "remaining_time": "0:24:51"}
+ {"current_steps": 1550, "total_steps": 1686, "loss": 0.9067, "accuracy": 0.53125, "learning_rate": 7.985429422327384e-08, "epoch": 2.7561680373416317, "percentage": 91.93, "elapsed_time": "4:23:50", "remaining_time": "0:23:09"}
+ {"current_steps": 1560, "total_steps": 1686, "loss": 0.9293, "accuracy": 0.581250011920929, "learning_rate": 6.859479115900818e-08, "epoch": 2.773949766614803, "percentage": 92.53, "elapsed_time": "4:25:28", "remaining_time": "0:21:26"}
+ {"current_steps": 1570, "total_steps": 1686, "loss": 0.9455, "accuracy": 0.5562499761581421, "learning_rate": 5.817955720457902e-08, "epoch": 2.791731495887975, "percentage": 93.12, "elapsed_time": "4:27:06", "remaining_time": "0:19:44"}
+ {"current_steps": 1580, "total_steps": 1686, "loss": 0.9195, "accuracy": 0.612500011920929, "learning_rate": 4.861220889427199e-08, "epoch": 2.809513225161147, "percentage": 93.71, "elapsed_time": "4:28:40", "remaining_time": "0:18:01"}
+ {"current_steps": 1590, "total_steps": 1686, "loss": 0.9115, "accuracy": 0.5625, "learning_rate": 3.9896068346758074e-08, "epoch": 2.827294954434319, "percentage": 94.31, "elapsed_time": "4:30:16", "remaining_time": "0:16:19"}
+ {"current_steps": 1600, "total_steps": 1686, "loss": 0.9131, "accuracy": 0.5249999761581421, "learning_rate": 3.203416211153832e-08, "epoch": 2.8450766837074903, "percentage": 94.9, "elapsed_time": "4:31:56", "remaining_time": "0:14:36"}
+ {"current_steps": 1610, "total_steps": 1686, "loss": 0.929, "accuracy": 0.5687500238418579, "learning_rate": 2.5029220118019393e-08, "epoch": 2.8628584129806622, "percentage": 95.49, "elapsed_time": "4:33:36", "remaining_time": "0:12:54"}
+ {"current_steps": 1620, "total_steps": 1686, "loss": 0.8816, "accuracy": 0.6312500238418579, "learning_rate": 1.8883674727586122e-08, "epoch": 2.880640142253834, "percentage": 96.09, "elapsed_time": "4:35:12", "remaining_time": "0:11:12"}
+ {"current_steps": 1630, "total_steps": 1686, "loss": 0.8593, "accuracy": 0.5375000238418579, "learning_rate": 1.3599659889000639e-08, "epoch": 2.898421871527006, "percentage": 96.68, "elapsed_time": "4:36:52", "remaining_time": "0:09:30"}
+ {"current_steps": 1640, "total_steps": 1686, "loss": 0.8816, "accuracy": 0.4937500059604645, "learning_rate": 9.179010397421528e-09, "epoch": 2.916203600800178, "percentage": 97.27, "elapsed_time": "4:38:37", "remaining_time": "0:07:48"}
+ {"current_steps": 1650, "total_steps": 1686, "loss": 0.8358, "accuracy": 0.5625, "learning_rate": 5.623261257296509e-09, "epoch": 2.93398533007335, "percentage": 97.86, "elapsed_time": "4:40:14", "remaining_time": "0:06:06"}
+ {"current_steps": 1660, "total_steps": 1686, "loss": 0.8679, "accuracy": 0.543749988079071, "learning_rate": 2.933647149357122e-09, "epoch": 2.9517670593465217, "percentage": 98.46, "elapsed_time": "4:41:52", "remaining_time": "0:04:24"}
+ {"current_steps": 1670, "total_steps": 1686, "loss": 0.9301, "accuracy": 0.53125, "learning_rate": 1.1111020018930717e-09, "epoch": 2.969548788619693, "percentage": 99.05, "elapsed_time": "4:43:31", "remaining_time": "0:02:42"}
+ {"current_steps": 1680, "total_steps": 1686, "loss": 0.8775, "accuracy": 0.6187499761581421, "learning_rate": 1.5625866646051813e-10, "epoch": 2.987330517892865, "percentage": 99.64, "elapsed_time": "4:45:08", "remaining_time": "0:01:01"}
+ {"current_steps": 1686, "total_steps": 1686, "epoch": 2.997999555456768, "percentage": 100.0, "elapsed_time": "4:46:10", "remaining_time": "0:00:00"}
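
Each appended record in `trainer_log.jsonl` is a self-contained JSON object: training steps carry `loss` and `accuracy`, while the periodic evaluation rows carry `eval_loss`. A short sketch for recovering the curves, using only the standard library and assuming nothing beyond the filename:

```python
import json

# One JSON record per line; skip any blank lines defensively.
with open("trainer_log.jsonl") as fh:
    records = [json.loads(line) for line in fh if line.strip()]

train = [(r["current_steps"], r["loss"]) for r in records if "loss" in r]
evals = [(r["current_steps"], r["eval_loss"]) for r in records if "eval_loss" in r]

print(train[-1])  # (1680, 0.8775) -- last logged training loss in this excerpt
print(evals[-1])  # (1500, 0.8975116014480591) -- last eval row in this excerpt
```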