PEFT
Safetensors
qwen2
alignment-handbook
trl
dpo
Generated from Trainer
khongtrunght committed on
Commit 844f260
1 Parent(s): e303bc6

Model save

Files changed (4)
  1. README.md +83 -0
  2. all_results.json +9 -0
  3. train_results.json +9 -0
  4. trainer_state.json +1852 -0
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ base_model: slm-research-vn/Qwen2-7B-Instruct-SPPO-Function-call-v2.5
+ library_name: peft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: Qwen2-7B-Instruct-SPPO-Function-call-v2.6
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Qwen2-7B-Instruct-SPPO-Function-call-v2.6
+
+ This model is a fine-tuned version of [slm-research-vn/Qwen2-7B-Instruct-SPPO-Function-call-v2.5](https://huggingface.co/slm-research-vn/Qwen2-7B-Instruct-SPPO-Function-call-v2.5) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2999
+ - Rewards/chosen: 1.6798
+ - Rewards/rejected: -0.4929
+ - Rewards/accuracies: 0.8844
+ - Rewards/margins: 2.1726
+ - Logps/rejected: -276.8312
+ - Logps/chosen: -200.8157
+ - Logits/rejected: -0.6690
+ - Logits/chosen: -0.6635
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-06
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.6437 | 0.0916 | 100 | 0.6128 | 0.3050 | 0.0739 | 0.7254 | 0.2311 | -265.4963 | -228.3116 | -0.7319 | -0.7206 |
+ | 0.5175 | 0.1832 | 200 | 0.4987 | 1.1265 | 0.2914 | 0.8237 | 0.8351 | -261.1460 | -211.8815 | -0.7134 | -0.7068 |
+ | 0.3903 | 0.2749 | 300 | 0.4279 | 1.7297 | 0.4889 | 0.8468 | 1.2408 | -257.1960 | -199.8173 | -0.6700 | -0.6642 |
+ | 0.3712 | 0.3665 | 400 | 0.3781 | 1.7272 | 0.2255 | 0.8468 | 1.5017 | -262.4645 | -199.8672 | -0.6756 | -0.6691 |
+ | 0.3064 | 0.4581 | 500 | 0.3477 | 1.7220 | -0.0183 | 0.8613 | 1.7403 | -267.3389 | -199.9704 | -0.6642 | -0.6488 |
+ | 0.3054 | 0.5497 | 600 | 0.3271 | 1.6469 | -0.1977 | 0.8671 | 1.8447 | -270.9281 | -201.4723 | -0.6576 | -0.6407 |
+ | 0.2919 | 0.6413 | 700 | 0.3144 | 1.7376 | -0.3034 | 0.8642 | 2.0410 | -273.0414 | -199.6590 | -0.6753 | -0.6672 |
+ | 0.314 | 0.7329 | 800 | 0.3056 | 1.7037 | -0.4229 | 0.8671 | 2.1266 | -275.4323 | -200.3379 | -0.6685 | -0.6574 |
+ | 0.3014 | 0.8246 | 900 | 0.3020 | 1.6807 | -0.4632 | 0.8699 | 2.1439 | -276.2374 | -200.7971 | -0.6702 | -0.6641 |
+ | 0.268 | 0.9162 | 1000 | 0.2999 | 1.6798 | -0.4929 | 0.8844 | 2.1726 | -276.8312 | -200.8157 | -0.6690 | -0.6635 |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.44.0
+ - Pytorch 2.3.1+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
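The Rewards/* columns in the card come from the DPO objective: each preference pair contributes a loss of -log σ(reward_chosen − reward_rejected), where the rewards are β-scaled policy-vs-reference log-probability ratios. A minimal sketch of the per-pair loss (the helper name is illustrative, and the card does not state the β used, so it is folded into the rewards here):

```python
import math

def dpo_pair_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Per-pair DPO loss: -log(sigmoid(reward_chosen - reward_rejected))."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A zero margin gives chance-level loss, log 2 ~= 0.6931 -- the value the
# training log starts from at step 1, before any updates take effect.
print(round(dpo_pair_loss(0.0, 0.0), 4))  # 0.6931
```

Note that the reported eval loss (0.2999) is the mean of per-pair losses, not the loss of the mean margin; since the loss is convex in the margin, it sits above -log σ(2.1726) ≈ 0.11.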
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 0.9995419147961521,
+ "total_flos": 0.0,
+ "train_loss": 0.37405444834616947,
+ "train_runtime": 8738.9435,
+ "train_samples": 34924,
+ "train_samples_per_second": 3.996,
+ "train_steps_per_second": 0.125
+ }
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 0.9995419147961521,
+ "total_flos": 0.0,
+ "train_loss": 0.37405444834616947,
+ "train_runtime": 8738.9435,
+ "train_samples": 34924,
+ "train_samples_per_second": 3.996,
+ "train_steps_per_second": 0.125
+ }
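The throughput figures in these results are internally consistent: train_samples_per_second is train_samples divided by train_runtime, and likewise for optimizer steps (global_step is 1091 in trainer_state.json). A quick check:

```python
train_samples = 34924
train_runtime_s = 8738.9435
optimizer_steps = 1091  # global_step from trainer_state.json

print(round(train_samples / train_runtime_s, 3))    # 3.996 samples/sec
print(round(optimizer_steps / train_runtime_s, 3))  # 0.125 steps/sec
```

The step count also matches total_train_batch_size = 32 (1 per device × 8 GPUs × 4 accumulation steps): 34924 / 32 ≈ 1091 steps for one epoch.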
trainer_state.json ADDED
@@ -0,0 +1,1852 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.9995419147961521,
+ "eval_steps": 100,
+ "global_step": 1091,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.0009161704076958314,
+ "grad_norm": 2.538351535797119,
+ "learning_rate": 9.09090909090909e-09,
+ "logits/chosen": -0.612647533416748,
+ "logits/rejected": -0.43005383014678955,
+ "logps/chosen": -269.1338195800781,
+ "logps/rejected": -265.996826171875,
+ "loss": 0.6931,
+ "rewards/accuracies": 0.0,
+ "rewards/chosen": 0.0,
+ "rewards/margins": 0.0,
+ "rewards/rejected": 0.0,
+ "step": 1
+ },
+ {
+ "epoch": 0.009161704076958314,
+ "grad_norm": 3.99981427192688,
+ "learning_rate": 9.09090909090909e-08,
+ "logits/chosen": -0.7947888374328613,
+ "logits/rejected": -0.8272799849510193,
+ "logps/chosen": -172.9255828857422,
+ "logps/rejected": -192.15106201171875,
+ "loss": 0.6942,
+ "rewards/accuracies": 0.5,
+ "rewards/chosen": -0.012904312461614609,
+ "rewards/margins": -0.015133652836084366,
+ "rewards/rejected": 0.002229340374469757,
+ "step": 10
+ },
+ {
+ "epoch": 0.01832340815391663,
+ "grad_norm": 1.3273310661315918,
+ "learning_rate": 1.818181818181818e-07,
+ "logits/chosen": -0.7671724557876587,
+ "logits/rejected": -0.7643235921859741,
+ "logps/chosen": -177.4793243408203,
+ "logps/rejected": -201.07656860351562,
+ "loss": 0.6933,
+ "rewards/accuracies": 0.5,
+ "rewards/chosen": -0.006979703903198242,
+ "rewards/margins": -0.00390571728348732,
+ "rewards/rejected": -0.003073985455557704,
+ "step": 20
+ },
+ {
+ "epoch": 0.027485112230874943,
+ "grad_norm": 2.06701922416687,
+ "learning_rate": 2.727272727272727e-07,
+ "logits/chosen": -0.7477768659591675,
+ "logits/rejected": -0.770847737789154,
+ "logps/chosen": -219.7743377685547,
+ "logps/rejected": -253.83798217773438,
+ "loss": 0.6933,
+ "rewards/accuracies": 0.574999988079071,
+ "rewards/chosen": -0.007790957577526569,
+ "rewards/margins": 0.0053612953051924706,
+ "rewards/rejected": -0.013152251951396465,
+ "step": 30
+ },
+ {
+ "epoch": 0.03664681630783326,
+ "grad_norm": 2.176619052886963,
+ "learning_rate": 3.636363636363636e-07,
+ "logits/chosen": -0.6840143203735352,
+ "logits/rejected": -0.6969764828681946,
+ "logps/chosen": -159.89796447753906,
+ "logps/rejected": -218.1041717529297,
+ "loss": 0.6902,
+ "rewards/accuracies": 0.375,
+ "rewards/chosen": 0.0005666827782988548,
+ "rewards/margins": 0.012707608751952648,
+ "rewards/rejected": -0.012140927836298943,
+ "step": 40
+ },
+ {
+ "epoch": 0.04580852038479157,
+ "grad_norm": 2.182387590408325,
+ "learning_rate": 4.545454545454545e-07,
+ "logits/chosen": -0.5698288083076477,
+ "logits/rejected": -0.7136921286582947,
+ "logps/chosen": -203.5851287841797,
+ "logps/rejected": -231.3826446533203,
+ "loss": 0.6878,
+ "rewards/accuracies": 0.550000011920929,
+ "rewards/chosen": 0.01093050092458725,
+ "rewards/margins": 0.021884005516767502,
+ "rewards/rejected": -0.010953502729535103,
+ "step": 50
+ },
+ {
+ "epoch": 0.054970224461749886,
+ "grad_norm": 1.757957935333252,
+ "learning_rate": 5.454545454545454e-07,
+ "logits/chosen": -0.7015949487686157,
+ "logits/rejected": -0.7727667689323425,
+ "logps/chosen": -233.5804901123047,
+ "logps/rejected": -278.2530212402344,
+ "loss": 0.6876,
+ "rewards/accuracies": 0.44999998807907104,
+ "rewards/chosen": 0.021485041826963425,
+ "rewards/margins": 0.0006370929768308997,
+ "rewards/rejected": 0.020847950130701065,
+ "step": 60
+ },
+ {
+ "epoch": 0.0641319285387082,
+ "grad_norm": 2.0394017696380615,
+ "learning_rate": 6.363636363636363e-07,
+ "logits/chosen": -0.8689748644828796,
+ "logits/rejected": -0.808820366859436,
+ "logps/chosen": -147.0337371826172,
+ "logps/rejected": -209.3056182861328,
+ "loss": 0.6815,
+ "rewards/accuracies": 0.6000000238418579,
+ "rewards/chosen": 0.05839823558926582,
+ "rewards/margins": 0.03576263412833214,
+ "rewards/rejected": 0.022635603323578835,
+ "step": 70
+ },
+ {
+ "epoch": 0.07329363261566652,
+ "grad_norm": 3.2475733757019043,
+ "learning_rate": 7.272727272727272e-07,
+ "logits/chosen": -0.7972738742828369,
+ "logits/rejected": -0.8733490109443665,
+ "logps/chosen": -202.31976318359375,
+ "logps/rejected": -230.676513671875,
+ "loss": 0.6706,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": 0.0555371418595314,
+ "rewards/margins": 0.04019620642066002,
+ "rewards/rejected": 0.015340929850935936,
+ "step": 80
+ },
+ {
+ "epoch": 0.08245533669262482,
+ "grad_norm": 2.1004042625427246,
+ "learning_rate": 8.181818181818182e-07,
+ "logits/chosen": -0.7756220698356628,
+ "logits/rejected": -0.7883769869804382,
+ "logps/chosen": -181.07911682128906,
+ "logps/rejected": -253.35226440429688,
+ "loss": 0.657,
+ "rewards/accuracies": 0.574999988079071,
+ "rewards/chosen": 0.09145340323448181,
+ "rewards/margins": 0.05497417598962784,
+ "rewards/rejected": 0.03647923097014427,
+ "step": 90
+ },
+ {
+ "epoch": 0.09161704076958314,
+ "grad_norm": 1.711387276649475,
+ "learning_rate": 9.09090909090909e-07,
+ "logits/chosen": -0.6988880634307861,
+ "logits/rejected": -0.695970356464386,
+ "logps/chosen": -210.9344940185547,
+ "logps/rejected": -292.63885498046875,
+ "loss": 0.6437,
+ "rewards/accuracies": 0.7250000238418579,
+ "rewards/chosen": 0.2389262169599533,
+ "rewards/margins": 0.13863424956798553,
+ "rewards/rejected": 0.10029196739196777,
+ "step": 100
+ },
+ {
+ "epoch": 0.09161704076958314,
+ "eval_logits/chosen": -0.7206078171730042,
+ "eval_logits/rejected": -0.7319283485412598,
+ "eval_logps/chosen": -228.31155395507812,
+ "eval_logps/rejected": -265.4963073730469,
+ "eval_loss": 0.6128209829330444,
+ "eval_rewards/accuracies": 0.7254335284233093,
+ "eval_rewards/chosen": 0.30496734380722046,
+ "eval_rewards/margins": 0.23110172152519226,
+ "eval_rewards/rejected": 0.0738656222820282,
+ "eval_runtime": 264.1865,
+ "eval_samples_per_second": 10.459,
+ "eval_steps_per_second": 1.31,
+ "step": 100
+ },
+ {
+ "epoch": 0.10077874484654145,
+ "grad_norm": 2.3657712936401367,
+ "learning_rate": 1e-06,
+ "logits/chosen": -0.9458168745040894,
+ "logits/rejected": -0.912136435508728,
+ "logps/chosen": -266.46087646484375,
+ "logps/rejected": -255.1190643310547,
+ "loss": 0.6226,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": 0.24092476069927216,
+ "rewards/margins": 0.15844842791557312,
+ "rewards/rejected": 0.08247633278369904,
+ "step": 110
+ },
+ {
+ "epoch": 0.10994044892349977,
+ "grad_norm": 1.905785083770752,
+ "learning_rate": 9.997436315234263e-07,
+ "logits/chosen": -0.7388381958007812,
+ "logits/rejected": -0.7873945236206055,
+ "logps/chosen": -167.81883239746094,
+ "logps/rejected": -176.68295288085938,
+ "loss": 0.6136,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 0.3122307062149048,
+ "rewards/margins": 0.20933406054973602,
+ "rewards/rejected": 0.10289661586284637,
+ "step": 120
+ },
+ {
+ "epoch": 0.11910215300045808,
+ "grad_norm": 1.5484569072723389,
+ "learning_rate": 9.989747889928883e-07,
+ "logits/chosen": -0.7524275183677673,
+ "logits/rejected": -0.8292320370674133,
+ "logps/chosen": -197.0650177001953,
+ "logps/rejected": -237.57839965820312,
+ "loss": 0.5938,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 0.5554567575454712,
+ "rewards/margins": 0.4092523455619812,
+ "rewards/rejected": 0.14620442688465118,
+ "step": 130
+ },
+ {
+ "epoch": 0.1282638570774164,
+ "grad_norm": 1.4368153810501099,
+ "learning_rate": 9.976942608363393e-07,
+ "logits/chosen": -0.632690966129303,
+ "logits/rejected": -0.7571207284927368,
+ "logps/chosen": -173.34666442871094,
+ "logps/rejected": -211.22317504882812,
+ "loss": 0.5709,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 0.58498615026474,
+ "rewards/margins": 0.40937310457229614,
+ "rewards/rejected": 0.17561307549476624,
+ "step": 140
+ },
+ {
+ "epoch": 0.1374255611543747,
+ "grad_norm": 1.5536640882492065,
+ "learning_rate": 9.9590336020199e-07,
+ "logits/chosen": -0.6405803561210632,
+ "logits/rejected": -0.7340038418769836,
+ "logps/chosen": -182.8199462890625,
+ "logps/rejected": -243.0964813232422,
+ "loss": 0.5731,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 0.6845705509185791,
+ "rewards/margins": 0.4665129780769348,
+ "rewards/rejected": 0.2180575579404831,
+ "step": 150
+ },
+ {
+ "epoch": 0.14658726523133303,
+ "grad_norm": 1.1569141149520874,
+ "learning_rate": 9.936039236117095e-07,
+ "logits/chosen": -0.8644550442695618,
+ "logits/rejected": -0.8035632967948914,
+ "logps/chosen": -200.20794677734375,
+ "logps/rejected": -239.16915893554688,
+ "loss": 0.544,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": 0.826056957244873,
+ "rewards/margins": 0.5113142132759094,
+ "rewards/rejected": 0.3147428035736084,
+ "step": 160
+ },
+ {
+ "epoch": 0.15574896930829135,
+ "grad_norm": 1.5735410451889038,
+ "learning_rate": 9.907983090777206e-07,
+ "logits/chosen": -0.7325465083122253,
+ "logits/rejected": -0.7175777554512024,
+ "logps/chosen": -195.422607421875,
+ "logps/rejected": -211.19418334960938,
+ "loss": 0.5275,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 0.7453492879867554,
+ "rewards/margins": 0.558306872844696,
+ "rewards/rejected": 0.1870425045490265,
+ "step": 170
+ },
+ {
+ "epoch": 0.16491067338524965,
+ "grad_norm": 1.4354184865951538,
+ "learning_rate": 9.874893936845187e-07,
+ "logits/chosen": -0.6600767374038696,
+ "logits/rejected": -0.7184512615203857,
+ "logps/chosen": -214.3909149169922,
+ "logps/rejected": -291.69512939453125,
+ "loss": 0.5309,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 0.8096879124641418,
+ "rewards/margins": 0.5819981098175049,
+ "rewards/rejected": 0.2276897132396698,
+ "step": 180
+ },
+ {
+ "epoch": 0.17407237746220797,
+ "grad_norm": 1.9378727674484253,
+ "learning_rate": 9.836805706384983e-07,
+ "logits/chosen": -0.796898603439331,
+ "logits/rejected": -0.7903488874435425,
+ "logps/chosen": -151.75074768066406,
+ "logps/rejected": -193.89720153808594,
+ "loss": 0.4979,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 0.9862753748893738,
+ "rewards/margins": 0.6403971910476685,
+ "rewards/rejected": 0.3458781838417053,
+ "step": 190
+ },
+ {
+ "epoch": 0.1832340815391663,
+ "grad_norm": 1.368870735168457,
+ "learning_rate": 9.793757457883061e-07,
+ "logits/chosen": -0.7233944535255432,
+ "logits/rejected": -0.7641991972923279,
+ "logps/chosen": -131.52737426757812,
+ "logps/rejected": -177.9940185546875,
+ "loss": 0.5175,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 0.935263454914093,
+ "rewards/margins": 0.7129745483398438,
+ "rewards/rejected": 0.2222888469696045,
+ "step": 200
+ },
+ {
+ "epoch": 0.1832340815391663,
+ "eval_logits/chosen": -0.7067926526069641,
+ "eval_logits/rejected": -0.7133929133415222,
+ "eval_logps/chosen": -211.8815460205078,
+ "eval_logps/rejected": -261.1459655761719,
+ "eval_loss": 0.4986709654331207,
+ "eval_rewards/accuracies": 0.823699414730072,
+ "eval_rewards/chosen": 1.1264668703079224,
+ "eval_rewards/margins": 0.8350852131843567,
+ "eval_rewards/rejected": 0.29138168692588806,
+ "eval_runtime": 253.2388,
+ "eval_samples_per_second": 10.911,
+ "eval_steps_per_second": 1.366,
+ "step": 200
+ },
+ {
+ "epoch": 0.1923957856161246,
+ "grad_norm": 1.181227207183838,
+ "learning_rate": 9.745793336194975e-07,
+ "logits/chosen": -0.744937539100647,
+ "logits/rejected": -0.753220796585083,
+ "logps/chosen": -157.19473266601562,
+ "logps/rejected": -238.6236114501953,
+ "loss": 0.485,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 0.9487007260322571,
+ "rewards/margins": 0.5822451710700989,
+ "rewards/rejected": 0.366455614566803,
+ "step": 210
+ },
+ {
+ "epoch": 0.2015574896930829,
+ "grad_norm": 2.322333812713623,
+ "learning_rate": 9.69296252727595e-07,
+ "logits/chosen": -0.7686578035354614,
+ "logits/rejected": -0.7208544015884399,
+ "logps/chosen": -161.76914978027344,
+ "logps/rejected": -203.6949005126953,
+ "loss": 0.486,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": 1.1700208187103271,
+ "rewards/margins": 0.8085041046142578,
+ "rewards/rejected": 0.36151671409606934,
+ "step": 220
+ },
+ {
+ "epoch": 0.21071919377004122,
+ "grad_norm": 3.2851109504699707,
+ "learning_rate": 9.63531920774199e-07,
+ "logits/chosen": -0.8623464703559875,
+ "logits/rejected": -0.864261269569397,
+ "logps/chosen": -132.91622924804688,
+ "logps/rejected": -190.5310516357422,
+ "loss": 0.4889,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 1.2793647050857544,
+ "rewards/margins": 0.8540099859237671,
+ "rewards/rejected": 0.4253547787666321,
+ "step": 230
+ },
+ {
+ "epoch": 0.21988089784699955,
+ "grad_norm": 2.092069387435913,
+ "learning_rate": 9.572922489313142e-07,
+ "logits/chosen": -0.8346965909004211,
+ "logits/rejected": -0.8602925539016724,
+ "logps/chosen": -177.9022216796875,
+ "logps/rejected": -209.01220703125,
+ "loss": 0.4294,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": 1.2863993644714355,
+ "rewards/margins": 0.5576584935188293,
+ "rewards/rejected": 0.7287408709526062,
+ "step": 240
+ },
+ {
+ "epoch": 0.22904260192395787,
+ "grad_norm": 1.2932628393173218,
+ "learning_rate": 9.505836358195993e-07,
+ "logits/chosen": -0.7524776458740234,
+ "logits/rejected": -0.8281770944595337,
+ "logps/chosen": -144.74905395507812,
+ "logps/rejected": -226.6658172607422,
+ "loss": 0.4293,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 1.3644707202911377,
+ "rewards/margins": 0.8554983139038086,
+ "rewards/rejected": 0.5089724659919739,
+ "step": 250
+ },
+ {
+ "epoch": 0.23820430600091616,
+ "grad_norm": 1.3950133323669434,
+ "learning_rate": 9.434129609467483e-07,
+ "logits/chosen": -0.6876672506332397,
+ "logits/rejected": -0.6876662969589233,
+ "logps/chosen": -263.18585205078125,
+ "logps/rejected": -264.4542541503906,
+ "loss": 0.4518,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 1.4175243377685547,
+ "rewards/margins": 0.8856536746025085,
+ "rewards/rejected": 0.5318707227706909,
+ "step": 260
+ },
+ {
+ "epoch": 0.24736601007787448,
+ "grad_norm": 1.4819687604904175,
+ "learning_rate": 9.357875776527333e-07,
+ "logits/chosen": -0.6820667386054993,
+ "logits/rejected": -0.6322587132453918,
+ "logps/chosen": -173.28085327148438,
+ "logps/rejected": -196.96194458007812,
+ "loss": 0.4433,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.1709938049316406,
+ "rewards/margins": 0.7361399531364441,
+ "rewards/rejected": 0.4348538815975189,
+ "step": 270
+ },
+ {
+ "epoch": 0.2565277141548328,
+ "grad_norm": 1.2729604244232178,
+ "learning_rate": 9.27715305569148e-07,
+ "logits/chosen": -0.6576655507087708,
+ "logits/rejected": -0.6413074731826782,
+ "logps/chosen": -159.5121307373047,
+ "logps/rejected": -207.47195434570312,
+ "loss": 0.4029,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 1.8569538593292236,
+ "rewards/margins": 1.4031217098236084,
+ "rewards/rejected": 0.45383185148239136,
+ "step": 280
+ },
+ {
+ "epoch": 0.2656894182317911,
+ "grad_norm": 1.4309152364730835,
+ "learning_rate": 9.192044226003788e-07,
+ "logits/chosen": -0.7165778875350952,
+ "logits/rejected": -0.7207854390144348,
+ "logps/chosen": -171.72616577148438,
+ "logps/rejected": -212.6331024169922,
+ "loss": 0.4505,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 1.8912508487701416,
+ "rewards/margins": 1.2686805725097656,
+ "rewards/rejected": 0.6225701570510864,
+ "step": 290
+ },
+ {
+ "epoch": 0.2748511223087494,
+ "grad_norm": 1.5633419752120972,
+ "learning_rate": 9.102636564348294e-07,
+ "logits/chosen": -0.5978932976722717,
+ "logits/rejected": -0.7183943390846252,
+ "logps/chosen": -173.94984436035156,
+ "logps/rejected": -202.29295349121094,
+ "loss": 0.3903,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 1.945762276649475,
+ "rewards/margins": 1.3720638751983643,
+ "rewards/rejected": 0.5736981630325317,
+ "step": 300
+ },
+ {
+ "epoch": 0.2748511223087494,
+ "eval_logits/chosen": -0.6642194390296936,
+ "eval_logits/rejected": -0.6700440049171448,
+ "eval_logps/chosen": -199.8172607421875,
+ "eval_logps/rejected": -257.196044921875,
+ "eval_loss": 0.4279369115829468,
+ "eval_rewards/accuracies": 0.8468208312988281,
+ "eval_rewards/chosen": 1.7296818494796753,
+ "eval_rewards/margins": 1.240803837776184,
+ "eval_rewards/rejected": 0.48887789249420166,
+ "eval_runtime": 253.1385,
+ "eval_samples_per_second": 10.915,
+ "eval_steps_per_second": 1.367,
+ "step": 300
+ },
+ {
+ "epoch": 0.28401282638570774,
+ "grad_norm": 1.3390766382217407,
+ "learning_rate": 9.009021755949051e-07,
+ "logits/chosen": -0.6982103586196899,
+ "logits/rejected": -0.7174701690673828,
+ "logps/chosen": -159.46409606933594,
+ "logps/rejected": -160.19491577148438,
+ "loss": 0.4083,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 1.7644752264022827,
+ "rewards/margins": 1.0205423831939697,
+ "rewards/rejected": 0.7439330220222473,
+ "step": 310
+ },
+ {
+ "epoch": 0.29317453046266606,
+ "grad_norm": 1.3620954751968384,
+ "learning_rate": 8.911295800349314e-07,
+ "logits/chosen": -0.6473032832145691,
+ "logits/rejected": -0.6676048040390015,
+ "logps/chosen": -232.72787475585938,
+ "logps/rejected": -252.5911407470703,
+ "loss": 0.4178,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.7409747838974,
+ "rewards/margins": 1.134825348854065,
+ "rewards/rejected": 0.6061495542526245,
+ "step": 320
+ },
+ {
+ "epoch": 0.3023362345396244,
+ "grad_norm": 0.9761648774147034,
+ "learning_rate": 8.809558912966519e-07,
+ "logits/chosen": -0.6019878387451172,
+ "logits/rejected": -0.6780513525009155,
+ "logps/chosen": -134.56356811523438,
+ "logps/rejected": -185.90939331054688,
+ "loss": 0.354,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 2.3775978088378906,
+ "rewards/margins": 1.7786915302276611,
+ "rewards/rejected": 0.5989062786102295,
+ "step": 330
+ },
+ {
+ "epoch": 0.3114979386165827,
+ "grad_norm": 1.6643030643463135,
+ "learning_rate": 8.703915422323984e-07,
+ "logits/chosen": -0.5226669907569885,
+ "logits/rejected": -0.5020606517791748,
+ "logps/chosen": -184.17132568359375,
+ "logps/rejected": -203.7013397216797,
+ "loss": 0.4022,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.763214111328125,
+ "rewards/margins": 1.1016991138458252,
+ "rewards/rejected": 0.661514937877655,
+ "step": 340
+ },
+ {
+ "epoch": 0.320659642693541,
+ "grad_norm": 1.045599102973938,
+ "learning_rate": 8.594473663064734e-07,
+ "logits/chosen": -0.7285621762275696,
+ "logits/rejected": -0.7740557193756104,
+ "logps/chosen": -133.10691833496094,
+ "logps/rejected": -191.98646545410156,
+ "loss": 0.3784,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.8301376104354858,
+ "rewards/margins": 1.3312700986862183,
+ "rewards/rejected": 0.49886733293533325,
+ "step": 350
+ },
+ {
+ "epoch": 0.3298213467704993,
+ "grad_norm": 2.110759973526001,
+ "learning_rate": 8.481345864857146e-07,
+ "logits/chosen": -0.5418592095375061,
+ "logits/rejected": -0.588280975818634,
+ "logps/chosen": -179.9706573486328,
+ "logps/rejected": -242.3267822265625,
+ "loss": 0.401,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 1.6830288171768188,
+ "rewards/margins": 1.2477144002914429,
+ "rewards/rejected": 0.43531447649002075,
+ "step": 360
+ },
+ {
+ "epoch": 0.3389830508474576,
+ "grad_norm": 1.335124135017395,
+ "learning_rate": 8.36464803730636e-07,
+ "logits/chosen": -0.8127607107162476,
+ "logits/rejected": -0.8397665023803711,
+ "logps/chosen": -143.43109130859375,
+ "logps/rejected": -185.8011932373047,
+ "loss": 0.3572,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.7833560705184937,
+ "rewards/margins": 1.1723896265029907,
+ "rewards/rejected": 0.6109665632247925,
+ "step": 370
+ },
+ {
+ "epoch": 0.34814475492441593,
+ "grad_norm": 1.2357772588729858,
+ "learning_rate": 8.244499850989451e-07,
+ "logits/chosen": -0.7481725811958313,
+ "logits/rejected": -0.756425678730011,
+ "logps/chosen": -117.90687561035156,
+ "logps/rejected": -198.514892578125,
+ "loss": 0.3691,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.9100993871688843,
+ "rewards/margins": 1.3691816329956055,
+ "rewards/rejected": 0.5409177541732788,
+ "step": 380
+ },
+ {
+ "epoch": 0.35730645900137425,
+ "grad_norm": 1.2337630987167358,
+ "learning_rate": 8.121024514736377e-07,
+ "logits/chosen": -0.5506582260131836,
+ "logits/rejected": -0.6542531251907349,
+ "logps/chosen": -116.30818176269531,
+ "logps/rejected": -186.07508850097656,
+ "loss": 0.3132,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 2.1005172729492188,
+ "rewards/margins": 1.9795573949813843,
+ "rewards/rejected": 0.12095997482538223,
+ "step": 390
+ },
+ {
+ "epoch": 0.3664681630783326,
+ "grad_norm": 1.261998176574707,
+ "learning_rate": 7.994348649282532e-07,
+ "logits/chosen": -0.6467943787574768,
+ "logits/rejected": -0.6284207701683044,
+ "logps/chosen": -167.55819702148438,
+ "logps/rejected": -237.6490936279297,
+ "loss": 0.3712,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 1.9129873514175415,
+ "rewards/margins": 1.4685484170913696,
+ "rewards/rejected": 0.4444388747215271,
+ "step": 400
+ },
+ {
+ "epoch": 0.3664681630783326,
+ "eval_logits/chosen": -0.6691383123397827,
+ "eval_logits/rejected": -0.6756234765052795,
+ "eval_logps/chosen": -199.86724853515625,
+ "eval_logps/rejected": -262.46453857421875,
+ "eval_loss": 0.37812188267707825,
+ "eval_rewards/accuracies": 0.8468208312988281,
+ "eval_rewards/chosen": 1.7271815538406372,
+ "eval_rewards/margins": 1.5017303228378296,
+ "eval_rewards/rejected": 0.22545117139816284,
+ "eval_runtime": 253.3553,
+ "eval_samples_per_second": 10.906,
+ "eval_steps_per_second": 1.366,
+ "step": 400
+ },
+ {
+ "epoch": 0.3756298671552909,
+ "grad_norm": 1.2864240407943726,
+ "learning_rate": 7.8646021574225e-07,
+ "logits/chosen": -0.5721285343170166,
+ "logits/rejected": -0.5845418572425842,
+ "logps/chosen": -163.110595703125,
+ "logps/rejected": -217.90377807617188,
+ "loss": 0.3708,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 2.0738914012908936,
+ "rewards/margins": 1.6767698526382446,
+ "rewards/rejected": 0.397121787071228,
+ "step": 410
+ },
+ {
+ "epoch": 0.3847915712322492,
+ "grad_norm": 1.066874623298645,
+ "learning_rate": 7.731918090798113e-07,
+ "logits/chosen": -0.6550859212875366,
+ "logits/rejected": -0.6960932016372681,
+ "logps/chosen": -151.92108154296875,
+ "logps/rejected": -187.91384887695312,
+ "loss": 0.3284,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.930013656616211,
+ "rewards/margins": 1.5344407558441162,
+ "rewards/rejected": 0.39557284116744995,
+ "step": 420
+ },
+ {
+ "epoch": 0.39395327530920754,
722
+ "grad_norm": 1.4810634851455688,
723
+ "learning_rate": 7.596432513457482e-07,
724
+ "logits/chosen": -0.7274349927902222,
725
+ "logits/rejected": -0.7099400162696838,
726
+ "logps/chosen": -145.75698852539062,
727
+ "logps/rejected": -185.64678955078125,
728
+ "loss": 0.3359,
729
+ "rewards/accuracies": 0.8999999761581421,
730
+ "rewards/chosen": 1.9275859594345093,
731
+ "rewards/margins": 1.4946165084838867,
732
+ "rewards/rejected": 0.43296942114830017,
733
+ "step": 430
734
+ },
735
+ {
736
+ "epoch": 0.4031149793861658,
737
+ "grad_norm": 1.650376319885254,
738
+ "learning_rate": 7.458284362324842e-07,
739
+ "logits/chosen": -0.5438752174377441,
740
+ "logits/rejected": -0.6026032567024231,
741
+ "logps/chosen": -132.947021484375,
742
+ "logps/rejected": -219.47055053710938,
743
+ "loss": 0.3276,
744
+ "rewards/accuracies": 0.8500000238418579,
745
+ "rewards/chosen": 1.8638055324554443,
746
+ "rewards/margins": 2.066131353378296,
747
+ "rewards/rejected": -0.20232591032981873,
748
+ "step": 440
749
+ },
750
+ {
751
+ "epoch": 0.4122766834631241,
752
+ "grad_norm": 0.992289125919342,
753
+ "learning_rate": 7.317615304724387e-07,
754
+ "logits/chosen": -0.6501020193099976,
755
+ "logits/rejected": -0.680503249168396,
756
+ "logps/chosen": -157.24925231933594,
757
+ "logps/rejected": -180.8325958251953,
758
+ "loss": 0.3249,
759
+ "rewards/accuracies": 0.949999988079071,
760
+ "rewards/chosen": 1.7174543142318726,
761
+ "rewards/margins": 1.594322681427002,
762
+ "rewards/rejected": 0.12313172966241837,
763
+ "step": 450
764
+ },
765
+ {
766
+ "epoch": 0.42143838754008245,
767
+ "grad_norm": 1.2098772525787354,
768
+ "learning_rate": 7.174569593104108e-07,
769
+ "logits/chosen": -0.7602720260620117,
770
+ "logits/rejected": -0.7695900201797485,
771
+ "logps/chosen": -174.65504455566406,
772
+ "logps/rejected": -225.5537567138672,
773
+ "loss": 0.3396,
774
+ "rewards/accuracies": 0.925000011920929,
775
+ "rewards/chosen": 1.8735862970352173,
776
+ "rewards/margins": 1.4029169082641602,
777
+ "rewards/rejected": 0.47066912055015564,
778
+ "step": 460
779
+ },
780
+ {
781
+ "epoch": 0.43060009161704077,
782
+ "grad_norm": 1.896801471710205,
783
+ "learning_rate": 7.029293917108677e-07,
784
+ "logits/chosen": -0.6379343271255493,
785
+ "logits/rejected": -0.629081130027771,
786
+ "logps/chosen": -264.4695739746094,
787
+ "logps/rejected": -247.133056640625,
788
+ "loss": 0.3216,
789
+ "rewards/accuracies": 0.8500000238418579,
790
+ "rewards/chosen": 1.6404482126235962,
791
+ "rewards/margins": 1.7144851684570312,
792
+ "rewards/rejected": -0.07403700053691864,
793
+ "step": 470
794
+ },
795
+ {
796
+ "epoch": 0.4397617956939991,
797
+ "grad_norm": 2.521409273147583,
798
+ "learning_rate": 6.881937253153051e-07,
799
+ "logits/chosen": -0.7469202876091003,
800
+ "logits/rejected": -0.7621224522590637,
801
+ "logps/chosen": -165.92147827148438,
802
+ "logps/rejected": -223.8385772705078,
803
+ "loss": 0.3246,
804
+ "rewards/accuracies": 0.8500000238418579,
805
+ "rewards/chosen": 2.0734469890594482,
806
+ "rewards/margins": 2.012188673019409,
807
+ "rewards/rejected": 0.061258465051651,
808
+ "step": 480
809
+ },
810
+ {
811
+ "epoch": 0.4489234997709574,
812
+ "grad_norm": 1.2489752769470215,
813
+ "learning_rate": 6.732650711651031e-07,
814
+ "logits/chosen": -0.5696443319320679,
815
+ "logits/rejected": -0.618468165397644,
816
+ "logps/chosen": -189.3000030517578,
817
+ "logps/rejected": -246.3811798095703,
818
+ "loss": 0.2909,
819
+ "rewards/accuracies": 0.925000011920929,
820
+ "rewards/chosen": 1.7483714818954468,
821
+ "rewards/margins": 2.0019850730895996,
822
+ "rewards/rejected": -0.2536138892173767,
823
+ "step": 490
824
+ },
825
+ {
826
+ "epoch": 0.45808520384791573,
827
+ "grad_norm": 1.3916680812835693,
828
+ "learning_rate": 6.581587382055491e-07,
829
+ "logits/chosen": -0.761835515499115,
830
+ "logits/rejected": -0.7702199220657349,
831
+ "logps/chosen": -151.83370971679688,
832
+ "logps/rejected": -229.975341796875,
833
+ "loss": 0.3064,
834
+ "rewards/accuracies": 0.8999999761581421,
835
+ "rewards/chosen": 1.8061788082122803,
836
+ "rewards/margins": 1.7005802392959595,
837
+ "rewards/rejected": 0.10559873282909393,
838
+ "step": 500
839
+ },
840
+ {
841
+ "epoch": 0.45808520384791573,
842
+ "eval_logits/chosen": -0.6487900018692017,
843
+ "eval_logits/rejected": -0.6642398834228516,
844
+ "eval_logps/chosen": -199.9703826904297,
845
+ "eval_logps/rejected": -267.3388671875,
846
+ "eval_loss": 0.347669780254364,
847
+ "eval_rewards/accuracies": 0.8612716794013977,
848
+ "eval_rewards/chosen": 1.7220263481140137,
849
+ "eval_rewards/margins": 1.7402905225753784,
850
+ "eval_rewards/rejected": -0.01826408877968788,
851
+ "eval_runtime": 253.4005,
852
+ "eval_samples_per_second": 10.904,
853
+ "eval_steps_per_second": 1.365,
854
+ "step": 500
855
+ },
856
+ {
857
+ "epoch": 0.467246907924874,
858
+ "grad_norm": 1.3511228561401367,
859
+ "learning_rate": 6.428902175869126e-07,
860
+ "logits/chosen": -0.680508553981781,
861
+ "logits/rejected": -0.6774734258651733,
862
+ "logps/chosen": -177.41610717773438,
863
+ "logps/rejected": -221.32138061523438,
864
+ "loss": 0.3258,
865
+ "rewards/accuracies": 0.8999999761581421,
866
+ "rewards/chosen": 2.022366762161255,
867
+ "rewards/margins": 2.05668044090271,
868
+ "rewards/rejected": -0.03431398794054985,
869
+ "step": 510
870
+ },
871
+ {
872
+ "epoch": 0.4764086120018323,
873
+ "grad_norm": 0.9841328263282776,
874
+ "learning_rate": 6.274751667786761e-07,
875
+ "logits/chosen": -0.6339150667190552,
876
+ "logits/rejected": -0.5782631635665894,
877
+ "logps/chosen": -230.2734832763672,
878
+ "logps/rejected": -311.7682800292969,
879
+ "loss": 0.3229,
880
+ "rewards/accuracies": 0.8999999761581421,
881
+ "rewards/chosen": 1.7927030324935913,
882
+ "rewards/margins": 1.7812907695770264,
883
+ "rewards/rejected": 0.011412340216338634,
884
+ "step": 520
885
+ },
886
+ {
887
+ "epoch": 0.48557031607879064,
888
+ "grad_norm": 2.7643489837646484,
889
+ "learning_rate": 6.119293935132075e-07,
890
+ "logits/chosen": -0.6203776597976685,
891
+ "logits/rejected": -0.6682835817337036,
892
+ "logps/chosen": -151.23324584960938,
893
+ "logps/rejected": -187.77114868164062,
894
+ "loss": 0.3034,
895
+ "rewards/accuracies": 0.8999999761581421,
896
+ "rewards/chosen": 1.8519824743270874,
897
+ "rewards/margins": 2.0718624591827393,
898
+ "rewards/rejected": -0.21987971663475037,
899
+ "step": 530
900
+ },
901
+ {
902
+ "epoch": 0.49473202015574896,
903
+ "grad_norm": 1.3088020086288452,
904
+ "learning_rate": 5.962688395753437e-07,
905
+ "logits/chosen": -0.8648042678833008,
906
+ "logits/rejected": -0.9114105105400085,
907
+ "logps/chosen": -137.6197052001953,
908
+ "logps/rejected": -205.31900024414062,
909
+ "loss": 0.2938,
910
+ "rewards/accuracies": 0.949999988079071,
911
+ "rewards/chosen": 1.5214287042617798,
912
+ "rewards/margins": 1.6869144439697266,
913
+ "rewards/rejected": -0.16548602283000946,
914
+ "step": 540
915
+ },
916
+ {
917
+ "epoch": 0.5038937242327073,
918
+ "grad_norm": 1.9471220970153809,
919
+ "learning_rate": 5.80509564454506e-07,
920
+ "logits/chosen": -0.7066371440887451,
921
+ "logits/rejected": -0.6964636445045471,
922
+ "logps/chosen": -106.696533203125,
923
+ "logps/rejected": -200.95729064941406,
924
+ "loss": 0.311,
925
+ "rewards/accuracies": 0.824999988079071,
926
+ "rewards/chosen": 1.6337581872940063,
927
+ "rewards/margins": 1.6309497356414795,
928
+ "rewards/rejected": 0.0028083801735192537,
929
+ "step": 550
930
+ },
931
+ {
932
+ "epoch": 0.5130554283096656,
933
+ "grad_norm": 1.3545409440994263,
934
+ "learning_rate": 5.646677288761132e-07,
935
+ "logits/chosen": -0.653856098651886,
936
+ "logits/rejected": -0.7129830121994019,
937
+ "logps/chosen": -153.89035034179688,
938
+ "logps/rejected": -214.0993194580078,
939
+ "loss": 0.3112,
940
+ "rewards/accuracies": 0.949999988079071,
941
+ "rewards/chosen": 1.5801527500152588,
942
+ "rewards/margins": 1.731884241104126,
943
+ "rewards/rejected": -0.1517314463853836,
944
+ "step": 560
945
+ },
946
+ {
947
+ "epoch": 0.5222171323866239,
948
+ "grad_norm": 2.128307819366455,
949
+ "learning_rate": 5.487595782291784e-07,
950
+ "logits/chosen": -0.6990654468536377,
951
+ "logits/rejected": -0.7342058420181274,
952
+ "logps/chosen": -178.05380249023438,
953
+ "logps/rejected": -225.2575225830078,
954
+ "loss": 0.289,
955
+ "rewards/accuracies": 0.824999988079071,
956
+ "rewards/chosen": 1.7597728967666626,
957
+ "rewards/margins": 1.7312393188476562,
958
+ "rewards/rejected": 0.02853367291390896,
959
+ "step": 570
960
+ },
961
+ {
962
+ "epoch": 0.5313788364635822,
963
+ "grad_norm": 1.2450644969940186,
964
+ "learning_rate": 5.328014259070878e-07,
965
+ "logits/chosen": -0.619064211845398,
966
+ "logits/rejected": -0.6336346864700317,
967
+ "logps/chosen": -188.67752075195312,
968
+ "logps/rejected": -231.69503784179688,
969
+ "loss": 0.3388,
970
+ "rewards/accuracies": 0.925000011920929,
971
+ "rewards/chosen": 1.4915143251419067,
972
+ "rewards/margins": 1.5344158411026,
973
+ "rewards/rejected": -0.042901456356048584,
974
+ "step": 580
975
+ },
976
+ {
977
+ "epoch": 0.5405405405405406,
978
+ "grad_norm": 1.2011312246322632,
979
+ "learning_rate": 5.168096365786402e-07,
980
+ "logits/chosen": -0.7081841230392456,
981
+ "logits/rejected": -0.7206417918205261,
982
+ "logps/chosen": -163.11752319335938,
983
+ "logps/rejected": -235.26065063476562,
984
+ "loss": 0.311,
985
+ "rewards/accuracies": 0.9750000238418579,
986
+ "rewards/chosen": 1.9438568353652954,
987
+ "rewards/margins": 2.347649574279785,
988
+ "rewards/rejected": -0.4037927985191345,
989
+ "step": 590
990
+ },
991
+ {
992
+ "epoch": 0.5497022446174988,
993
+ "grad_norm": 1.7624154090881348,
994
+ "learning_rate": 5.008006094065069e-07,
995
+ "logits/chosen": -0.721166729927063,
996
+ "logits/rejected": -0.7850608229637146,
997
+ "logps/chosen": -165.5583953857422,
998
+ "logps/rejected": -213.85855102539062,
999
+ "loss": 0.3054,
1000
+ "rewards/accuracies": 0.875,
1001
+ "rewards/chosen": 1.5962111949920654,
1002
+ "rewards/margins": 1.4221036434173584,
1003
+ "rewards/rejected": 0.174107626080513,
1004
+ "step": 600
1005
+ },
1006
+ {
1007
+ "epoch": 0.5497022446174988,
1008
+ "eval_logits/chosen": -0.6406525373458862,
1009
+ "eval_logits/rejected": -0.6576036214828491,
1010
+ "eval_logps/chosen": -201.4723358154297,
1011
+ "eval_logps/rejected": -270.9281311035156,
1012
+ "eval_loss": 0.3270590603351593,
1013
+ "eval_rewards/accuracies": 0.8670520186424255,
1014
+ "eval_rewards/chosen": 1.6469277143478394,
1015
+ "eval_rewards/margins": 1.8446547985076904,
1016
+ "eval_rewards/rejected": -0.1977270245552063,
1017
+ "eval_runtime": 253.7625,
1018
+ "eval_samples_per_second": 10.888,
1019
+ "eval_steps_per_second": 1.363,
1020
+ "step": 600
1021
+ },
1022
+ {
1023
+ "epoch": 0.5588639486944572,
1024
+ "grad_norm": 1.902051329612732,
1025
+ "learning_rate": 4.847907612303182e-07,
1026
+ "logits/chosen": -0.7130570411682129,
1027
+ "logits/rejected": -0.7378814816474915,
1028
+ "logps/chosen": -188.9144287109375,
1029
+ "logps/rejected": -259.7423400878906,
1030
+ "loss": 0.3009,
1031
+ "rewards/accuracies": 0.824999988079071,
1032
+ "rewards/chosen": 1.5585817098617554,
1033
+ "rewards/margins": 1.3624755144119263,
1034
+ "rewards/rejected": 0.19610631465911865,
1035
+ "step": 610
1036
+ },
1037
+ {
1038
+ "epoch": 0.5680256527714155,
1039
+ "grad_norm": 1.07755446434021,
1040
+ "learning_rate": 4.687965097316223e-07,
1041
+ "logits/chosen": -0.5912365317344666,
1042
+ "logits/rejected": -0.7409440875053406,
1043
+ "logps/chosen": -126.41873931884766,
1044
+ "logps/rejected": -238.76437377929688,
1045
+ "loss": 0.275,
1046
+ "rewards/accuracies": 0.875,
1047
+ "rewards/chosen": 1.855369210243225,
1048
+ "rewards/margins": 2.4938321113586426,
1049
+ "rewards/rejected": -0.638463020324707,
1050
+ "step": 620
1051
+ },
1052
+ {
1053
+ "epoch": 0.5771873568483737,
1054
+ "grad_norm": 1.3470137119293213,
1055
+ "learning_rate": 4.5283425659798175e-07,
1056
+ "logits/chosen": -0.8164669275283813,
1057
+ "logits/rejected": -0.8073943853378296,
1058
+ "logps/chosen": -201.78501892089844,
1059
+ "logps/rejected": -291.32183837890625,
1060
+ "loss": 0.3179,
1061
+ "rewards/accuracies": 0.875,
1062
+ "rewards/chosen": 1.852560043334961,
1063
+ "rewards/margins": 1.8420225381851196,
1064
+ "rewards/rejected": 0.010537643916904926,
1065
+ "step": 630
1066
+ },
1067
+ {
1068
+ "epoch": 0.5863490609253321,
1069
+ "grad_norm": 1.2435747385025024,
1070
+ "learning_rate": 4.3692037070347123e-07,
1071
+ "logits/chosen": -0.6459102630615234,
1072
+ "logits/rejected": -0.6654535531997681,
1073
+ "logps/chosen": -139.29067993164062,
1074
+ "logps/rejected": -212.83740234375,
1075
+ "loss": 0.3016,
1076
+ "rewards/accuracies": 0.8500000238418579,
1077
+ "rewards/chosen": 1.6537023782730103,
1078
+ "rewards/margins": 2.188789129257202,
1079
+ "rewards/rejected": -0.5350866913795471,
1080
+ "step": 640
1081
+ },
1082
+ {
1083
+ "epoch": 0.5955107650022904,
1084
+ "grad_norm": 1.0110821723937988,
1085
+ "learning_rate": 4.21071171322823e-07,
1086
+ "logits/chosen": -0.6093601584434509,
1087
+ "logits/rejected": -0.5892384648323059,
1088
+ "logps/chosen": -275.14105224609375,
1089
+ "logps/rejected": -322.5516662597656,
1090
+ "loss": 0.3134,
1091
+ "rewards/accuracies": 0.824999988079071,
1092
+ "rewards/chosen": 1.5108485221862793,
1093
+ "rewards/margins": 1.933189034461975,
1094
+ "rewards/rejected": -0.4223404824733734,
1095
+ "step": 650
1096
+ },
1097
+ {
1098
+ "epoch": 0.6046724690792488,
1099
+ "grad_norm": 0.9678570628166199,
1100
+ "learning_rate": 4.0530291139643755e-07,
1101
+ "logits/chosen": -0.8226197957992554,
1102
+ "logits/rejected": -0.8122960925102234,
1103
+ "logps/chosen": -143.47821044921875,
1104
+ "logps/rejected": -207.33187866210938,
1105
+ "loss": 0.2829,
1106
+ "rewards/accuracies": 0.949999988079071,
1107
+ "rewards/chosen": 2.0050506591796875,
1108
+ "rewards/margins": 2.1615443229675293,
1109
+ "rewards/rejected": -0.1564939320087433,
1110
+ "step": 660
1111
+ },
1112
+ {
1113
+ "epoch": 0.613834173156207,
1114
+ "grad_norm": 1.5877900123596191,
1115
+ "learning_rate": 3.8963176086341727e-07,
1116
+ "logits/chosen": -0.6404609084129333,
1117
+ "logits/rejected": -0.7193113565444946,
1118
+ "logps/chosen": -163.66485595703125,
1119
+ "logps/rejected": -219.49063110351562,
1120
+ "loss": 0.2704,
1121
+ "rewards/accuracies": 0.925000011920929,
1122
+ "rewards/chosen": 1.613854169845581,
1123
+ "rewards/margins": 2.1832098960876465,
1124
+ "rewards/rejected": -0.5693557858467102,
1125
+ "step": 670
1126
+ },
1127
+ {
1128
+ "epoch": 0.6229958772331654,
1129
+ "grad_norm": 1.301003098487854,
1130
+ "learning_rate": 3.7407379007971506e-07,
1131
+ "logits/chosen": -0.6765211820602417,
1132
+ "logits/rejected": -0.6541970372200012,
1133
+ "logps/chosen": -199.01051330566406,
1134
+ "logps/rejected": -279.47454833984375,
1135
+ "loss": 0.2896,
1136
+ "rewards/accuracies": 0.949999988079071,
1137
+ "rewards/chosen": 1.5939905643463135,
1138
+ "rewards/margins": 2.2719438076019287,
1139
+ "rewards/rejected": -0.6779531240463257,
1140
+ "step": 680
1141
+ },
1142
+ {
1143
+ "epoch": 0.6321575813101237,
1144
+ "grad_norm": 2.2288615703582764,
1145
+ "learning_rate": 3.586449533384048e-07,
1146
+ "logits/chosen": -0.6064814925193787,
1147
+ "logits/rejected": -0.6105560660362244,
1148
+ "logps/chosen": -137.80624389648438,
1149
+ "logps/rejected": -186.99191284179688,
1150
+ "loss": 0.3041,
1151
+ "rewards/accuracies": 0.875,
1152
+ "rewards/chosen": 1.5289627313613892,
1153
+ "rewards/margins": 1.8546040058135986,
1154
+ "rewards/rejected": -0.32564133405685425,
1155
+ "step": 690
1156
+ },
1157
+ {
1158
+ "epoch": 0.641319285387082,
1159
+ "grad_norm": 5.320913314819336,
1160
+ "learning_rate": 3.433610725089692e-07,
1161
+ "logits/chosen": -0.7031580209732056,
1162
+ "logits/rejected": -0.6630051136016846,
1163
+ "logps/chosen": -185.18197631835938,
1164
+ "logps/rejected": -280.9058532714844,
1165
+ "loss": 0.2919,
1166
+ "rewards/accuracies": 0.8999999761581421,
1167
+ "rewards/chosen": 1.8430522680282593,
1168
+ "rewards/margins": 1.9076077938079834,
1169
+ "rewards/rejected": -0.06455531716346741,
1170
+ "step": 700
1171
+ },
1172
+ {
1173
+ "epoch": 0.641319285387082,
1174
+ "eval_logits/chosen": -0.6672143936157227,
1175
+ "eval_logits/rejected": -0.6753049492835999,
1176
+ "eval_logps/chosen": -199.65896606445312,
1177
+ "eval_logps/rejected": -273.0414123535156,
1178
+ "eval_loss": 0.3144252896308899,
1179
+ "eval_rewards/accuracies": 0.8641618490219116,
1180
+ "eval_rewards/chosen": 1.7375967502593994,
1181
+ "eval_rewards/margins": 2.0409882068634033,
1182
+ "eval_rewards/rejected": -0.30339136719703674,
1183
+ "eval_runtime": 253.1657,
1184
+ "eval_samples_per_second": 10.914,
1185
+ "eval_steps_per_second": 1.367,
1186
+ "step": 700
1187
+ },
1188
+ {
1189
+ "epoch": 0.6504809894640403,
1190
+ "grad_norm": 1.6359617710113525,
1191
+ "learning_rate": 3.2823782081238555e-07,
1192
+ "logits/chosen": -0.7354801893234253,
1193
+ "logits/rejected": -0.7596527338027954,
1194
+ "logps/chosen": -145.60165405273438,
1195
+ "logps/rejected": -207.4108123779297,
1196
+ "loss": 0.3059,
1197
+ "rewards/accuracies": 0.875,
1198
+ "rewards/chosen": 1.8920338153839111,
1199
+ "rewards/margins": 1.903387427330017,
1200
+ "rewards/rejected": -0.011353528127074242,
1201
+ "step": 710
1202
+ },
1203
+ {
1204
+ "epoch": 0.6596426935409986,
1205
+ "grad_norm": 1.6316208839416504,
1206
+ "learning_rate": 3.132907067486471e-07,
1207
+ "logits/chosen": -0.6749696135520935,
1208
+ "logits/rejected": -0.7063171863555908,
1209
+ "logps/chosen": -158.86273193359375,
1210
+ "logps/rejected": -211.850341796875,
1211
+ "loss": 0.2837,
1212
+ "rewards/accuracies": 0.875,
1213
+ "rewards/chosen": 1.6359964609146118,
1214
+ "rewards/margins": 1.919615387916565,
1215
+ "rewards/rejected": -0.2836189270019531,
1216
+ "step": 720
1217
+ },
1218
+ {
1219
+ "epoch": 0.668804397617957,
1220
+ "grad_norm": 2.493192672729492,
1221
+ "learning_rate": 2.985350581932005e-07,
1222
+ "logits/chosen": -0.7640320062637329,
1223
+ "logits/rejected": -0.7546281814575195,
1224
+ "logps/chosen": -185.2474822998047,
1225
+ "logps/rejected": -231.47119140625,
1226
+ "loss": 0.3068,
1227
+ "rewards/accuracies": 0.8999999761581421,
1228
+ "rewards/chosen": 2.423882007598877,
1229
+ "rewards/margins": 2.5569610595703125,
1230
+ "rewards/rejected": -0.13307929039001465,
1231
+ "step": 730
1232
+ },
1233
+ {
1234
+ "epoch": 0.6779661016949152,
1235
+ "grad_norm": 1.0160990953445435,
1236
+ "learning_rate": 2.839860066786103e-07,
1237
+ "logits/chosen": -0.7123746871948242,
1238
+ "logits/rejected": -0.7270351648330688,
1239
+ "logps/chosen": -131.58505249023438,
1240
+ "logps/rejected": -189.05264282226562,
1241
+ "loss": 0.2778,
1242
+ "rewards/accuracies": 0.8999999761581421,
1243
+ "rewards/chosen": 1.8099651336669922,
1244
+ "rewards/margins": 2.3398964405059814,
1245
+ "rewards/rejected": -0.5299314260482788,
1246
+ "step": 740
1247
+ },
1248
+ {
1249
+ "epoch": 0.6871278057718736,
1250
+ "grad_norm": 1.4675796031951904,
1251
+ "learning_rate": 2.6965847187756553e-07,
1252
+ "logits/chosen": -0.7339153289794922,
1253
+ "logits/rejected": -0.6953274011611938,
1254
+ "logps/chosen": -144.21389770507812,
1255
+ "logps/rejected": -171.4956512451172,
1256
+ "loss": 0.2746,
1257
+ "rewards/accuracies": 0.925000011920929,
1258
+ "rewards/chosen": 2.1637752056121826,
1259
+ "rewards/margins": 2.3990566730499268,
1260
+ "rewards/rejected": -0.2352815419435501,
1261
+ "step": 750
1262
+ },
1263
+ {
1264
+ "epoch": 0.6962895098488319,
1265
+ "grad_norm": 2.0078845024108887,
1266
+ "learning_rate": 2.5556714630314613e-07,
1267
+ "logits/chosen": -0.783401370048523,
1268
+ "logits/rejected": -0.854455828666687,
1269
+ "logps/chosen": -105.17523193359375,
1270
+ "logps/rejected": -193.60110473632812,
1271
+ "loss": 0.2727,
1272
+ "rewards/accuracies": 0.8999999761581421,
1273
+ "rewards/chosen": 2.012260913848877,
1274
+ "rewards/margins": 2.8885557651519775,
1275
+ "rewards/rejected": -0.8762944340705872,
1276
+ "step": 760
1277
+ },
1278
+ {
1279
+ "epoch": 0.7054512139257902,
1280
+ "grad_norm": 1.2648169994354248,
1281
+ "learning_rate": 2.417264802420343e-07,
1282
+ "logits/chosen": -0.6513649225234985,
1283
+ "logits/rejected": -0.6898313760757446,
1284
+ "logps/chosen": -141.83282470703125,
1285
+ "logps/rejected": -238.6311492919922,
1286
+ "loss": 0.3154,
1287
+ "rewards/accuracies": 0.8500000238418579,
1288
+ "rewards/chosen": 1.7510058879852295,
1289
+ "rewards/margins": 2.0821216106414795,
1290
+ "rewards/rejected": -0.33111587166786194,
1291
+ "step": 770
1292
+ },
1293
+ {
1294
+ "epoch": 0.7146129180027485,
1295
+ "grad_norm": 2.6904611587524414,
1296
+ "learning_rate": 2.2815066693612117e-07,
1297
+ "logits/chosen": -0.7128955721855164,
1298
+ "logits/rejected": -0.7633499503135681,
1299
+ "logps/chosen": -155.6310577392578,
1300
+ "logps/rejected": -210.39285278320312,
1301
+ "loss": 0.2793,
1302
+ "rewards/accuracies": 0.949999988079071,
1303
+ "rewards/chosen": 1.7025461196899414,
1304
+ "rewards/margins": 2.1338300704956055,
1305
+ "rewards/rejected": -0.4312838613986969,
1306
+ "step": 780
1307
+ },
1308
+ {
1309
+ "epoch": 0.7237746220797068,
1310
+ "grad_norm": 2.1170003414154053,
1311
+ "learning_rate": 2.1485362802770862e-07,
1312
+ "logits/chosen": -0.6805752515792847,
1313
+ "logits/rejected": -0.7278204560279846,
1314
+ "logps/chosen": -207.82858276367188,
1315
+ "logps/rejected": -320.7551574707031,
1316
+ "loss": 0.2457,
1317
+ "rewards/accuracies": 0.8500000238418579,
1318
+ "rewards/chosen": 1.8095613718032837,
1319
+ "rewards/margins": 2.516162157058716,
1320
+ "rewards/rejected": -0.7066007852554321,
1321
+ "step": 790
1322
+ },
1323
+ {
1324
+ "epoch": 0.7329363261566652,
1325
+ "grad_norm": 1.6859688758850098,
1326
+ "learning_rate": 2.018489992832283e-07,
1327
+ "logits/chosen": -0.7043607831001282,
1328
+ "logits/rejected": -0.6326649785041809,
1329
+ "logps/chosen": -189.71005249023438,
1330
+ "logps/rejected": -247.50723266601562,
1331
+ "loss": 0.314,
1332
+ "rewards/accuracies": 0.925000011920929,
1333
+ "rewards/chosen": 1.972821593284607,
1334
+ "rewards/margins": 2.5004470348358154,
1335
+ "rewards/rejected": -0.5276254415512085,
1336
+ "step": 800
1337
+ },
1338
+ {
1339
+ "epoch": 0.7329363261566652,
1340
+ "eval_logits/chosen": -0.6573572754859924,
1341
+ "eval_logits/rejected": -0.6684801578521729,
1342
+ "eval_logps/chosen": -200.33786010742188,
1343
+ "eval_logps/rejected": -275.4322814941406,
1344
+ "eval_loss": 0.3056153357028961,
1345
+ "eval_rewards/accuracies": 0.8670520186424255,
1346
+ "eval_rewards/chosen": 1.703651785850525,
1347
+ "eval_rewards/margins": 2.126586437225342,
1348
+ "eval_rewards/rejected": -0.4229348599910736,
1349
+ "eval_runtime": 253.4624,
1350
+ "eval_samples_per_second": 10.901,
1351
+ "eval_steps_per_second": 1.365,
1352
+ "step": 800
1353
+ },
1354
+ {
1355
+ "epoch": 0.7420980302336234,
1356
+ "grad_norm": 1.46702241897583,
1357
+ "learning_rate": 1.891501166101187e-07,
1358
+ "logits/chosen": -0.8299296498298645,
1359
+ "logits/rejected": -0.8194143176078796,
1360
+ "logps/chosen": -139.70791625976562,
1361
+ "logps/rejected": -179.2274932861328,
1362
+ "loss": 0.2988,
1363
+ "rewards/accuracies": 0.8999999761581421,
1364
+ "rewards/chosen": 1.740025281906128,
1365
+ "rewards/margins": 2.0235095024108887,
1366
+ "rewards/rejected": -0.28348422050476074,
1367
+ "step": 810
1368
+ },
1369
+ {
1370
+ "epoch": 0.7512597343105818,
1371
+ "grad_norm": 2.791546583175659,
1372
+ "learning_rate": 1.767700023812e-07,
1373
+ "logits/chosen": -0.7000871896743774,
1374
+ "logits/rejected": -0.7028741240501404,
1375
+ "logps/chosen": -174.82821655273438,
1376
+ "logps/rejected": -264.07470703125,
1377
+ "loss": 0.272,
1378
+ "rewards/accuracies": 0.8500000238418579,
1379
+ "rewards/chosen": 2.0946412086486816,
1380
+ "rewards/margins": 2.783123016357422,
1381
+ "rewards/rejected": -0.6884818077087402,
1382
+ "step": 820
1383
+ },
1384
+ {
1385
+ "epoch": 0.7604214383875401,
1386
+ "grad_norm": 1.3349977731704712,
1387
+ "learning_rate": 1.6472135208057125e-07,
1388
+ "logits/chosen": -0.6306108832359314,
1389
+ "logits/rejected": -0.698475182056427,
1390
+ "logps/chosen": -155.9135284423828,
1391
+ "logps/rejected": -198.2060546875,
1392
+ "loss": 0.3032,
1393
+ "rewards/accuracies": 0.925000011920929,
1394
+ "rewards/chosen": 1.55352783203125,
1395
+ "rewards/margins": 2.126960515975952,
1396
+ "rewards/rejected": -0.5734325647354126,
1397
+ "step": 830
1398
+ },
1399
+ {
1400
+ "epoch": 0.7695831424644984,
1401
+ "grad_norm": 2.1777162551879883,
1402
+ "learning_rate": 1.530165212847217e-07,
1403
+ "logits/chosen": -0.7774958610534668,
1404
+ "logits/rejected": -0.7312562465667725,
1405
+ "logps/chosen": -141.79075622558594,
1406
+ "logps/rejected": -190.49310302734375,
1407
+ "loss": 0.2807,
1408
+ "rewards/accuracies": 0.8999999761581421,
1409
+ "rewards/chosen": 1.8885326385498047,
1410
+ "rewards/margins": 2.229785442352295,
1411
+ "rewards/rejected": -0.3412528932094574,
1412
+ "step": 840
1413
+ },
1414
+ {
1415
+ "epoch": 0.7787448465414567,
1416
+ "grad_norm": 1.2950890064239502,
1417
+ "learning_rate": 1.4166751299221003e-07,
1418
+ "logits/chosen": -0.6935632228851318,
1419
+ "logits/rejected": -0.681289553642273,
1420
+ "logps/chosen": -166.78561401367188,
1421
+ "logps/rejected": -229.1016387939453,
1422
+ "loss": 0.2802,
1423
+ "rewards/accuracies": 0.875,
1424
+ "rewards/chosen": 1.8389393091201782,
1425
+ "rewards/margins": 3.066588878631592,
1426
+ "rewards/rejected": -1.2276496887207031,
1427
+ "step": 850
1428
+ },
1429
+ {
1430
+ "epoch": 0.7879065506184151,
1431
+ "grad_norm": 1.1921981573104858,
1432
+ "learning_rate": 1.306859653149025e-07,
1433
+ "logits/chosen": -0.7318634390830994,
1434
+ "logits/rejected": -0.7256627082824707,
1435
+ "logps/chosen": -171.42092895507812,
1436
+ "logps/rejected": -247.11831665039062,
1437
+ "loss": 0.2934,
1438
+ "rewards/accuracies": 0.949999988079071,
1439
+ "rewards/chosen": 1.7239230871200562,
1440
+ "rewards/margins": 2.5621562004089355,
1441
+ "rewards/rejected": -0.8382335901260376,
1442
+ "step": 860
1443
+ },
1444
+ {
1445
+ "epoch": 0.7970682546953733,
1446
+ "grad_norm": 1.4422613382339478,
1447
+ "learning_rate": 1.2008313954339305e-07,
1448
+ "logits/chosen": -0.586450457572937,
1449
+ "logits/rejected": -0.5815967321395874,
1450
+ "logps/chosen": -215.48007202148438,
1451
+ "logps/rejected": -242.58511352539062,
1452
+ "loss": 0.2549,
1453
+ "rewards/accuracies": 0.800000011920929,
1454
+ "rewards/chosen": 1.6724342107772827,
1455
+ "rewards/margins": 1.982318639755249,
1456
+ "rewards/rejected": -0.3098844885826111,
1457
+ "step": 870
1458
+ },
1459
+ {
1460
+ "epoch": 0.8062299587723316,
1461
+ "grad_norm": 1.141564130783081,
1462
+ "learning_rate": 1.098699085988432e-07,
1463
+ "logits/chosen": -0.7615233659744263,
1464
+ "logits/rejected": -0.8405082821846008,
1465
+ "logps/chosen": -163.58689880371094,
1466
+ "logps/rejected": -256.6318054199219,
1467
+ "loss": 0.2998,
1468
+ "rewards/accuracies": 0.925000011920929,
1469
+ "rewards/chosen": 1.7078421115875244,
1470
+ "rewards/margins": 1.9117094278335571,
1471
+ "rewards/rejected": -0.20386750996112823,
1472
+ "step": 880
1473
+ },
1474
+ {
1475
+ "epoch": 0.81539166284929,
1476
+ "grad_norm": 1.329354166984558,
1477
+ "learning_rate": 1.0005674588308566e-07,
1478
+ "logits/chosen": -0.7380274534225464,
1479
+ "logits/rejected": -0.75429767370224,
1480
+ "logps/chosen": -136.90902709960938,
1481
+ "logps/rejected": -224.83151245117188,
1482
+ "loss": 0.2231,
1483
+ "rewards/accuracies": 0.8999999761581421,
1484
+ "rewards/chosen": 1.7314188480377197,
1485
+ "rewards/margins": 2.723328113555908,
1486
+ "rewards/rejected": -0.9919096231460571,
+ "step": 890
+ },
+ {
+ "epoch": 0.8245533669262483,
+ "grad_norm": 1.717936396598816,
+ "learning_rate": 9.065371453842358e-08,
+ "logits/chosen": -0.670835018157959,
+ "logits/rejected": -0.6879727244377136,
+ "logps/chosen": -138.87889099121094,
+ "logps/rejected": -184.62257385253906,
+ "loss": 0.3014,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": 2.0553038120269775,
+ "rewards/margins": 2.568469524383545,
+ "rewards/rejected": -0.5131659507751465,
+ "step": 900
+ },
+ {
+ "epoch": 0.8245533669262483,
+ "eval_logits/chosen": -0.6641213893890381,
+ "eval_logits/rejected": -0.670166015625,
+ "eval_logps/chosen": -200.79710388183594,
+ "eval_logps/rejected": -276.2374267578125,
+ "eval_loss": 0.3019951581954956,
+ "eval_rewards/accuracies": 0.8699421882629395,
+ "eval_rewards/chosen": 1.6806890964508057,
+ "eval_rewards/margins": 2.1438791751861572,
+ "eval_rewards/rejected": -0.4631901979446411,
+ "eval_runtime": 253.6623,
+ "eval_samples_per_second": 10.892,
+ "eval_steps_per_second": 1.364,
+ "step": 900
+ },
+ {
+ "epoch": 0.8337150710032066,
+ "grad_norm": 1.479974627494812,
+ "learning_rate": 8.167045712814108e-08,
+ "logits/chosen": -0.6214216351509094,
+ "logits/rejected": -0.641666054725647,
+ "logps/chosen": -177.58828735351562,
+ "logps/rejected": -273.30755615234375,
+ "loss": 0.2533,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.7214215993881226,
+ "rewards/margins": 1.9738502502441406,
+ "rewards/rejected": -0.2524286210536957,
+ "step": 910
+ },
+ {
+ "epoch": 0.8428767750801649,
+ "grad_norm": 2.062375783920288,
+ "learning_rate": 7.311618574830569e-08,
+ "logits/chosen": -0.5943703651428223,
+ "logits/rejected": -0.6128894686698914,
+ "logps/chosen": -163.90029907226562,
+ "logps/rejected": -248.54415893554688,
+ "loss": 0.2868,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 2.0122509002685547,
+ "rewards/margins": 2.265479803085327,
+ "rewards/rejected": -0.25322893261909485,
+ "step": 920
+ },
+ {
+ "epoch": 0.8520384791571233,
+ "grad_norm": 1.7314789295196533,
+ "learning_rate": 6.499967258100514e-08,
+ "logits/chosen": -0.6944621801376343,
+ "logits/rejected": -0.7938731908798218,
+ "logps/chosen": -163.20828247070312,
+ "logps/rejected": -237.55078125,
+ "loss": 0.2645,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 1.7433500289916992,
+ "rewards/margins": 2.302280902862549,
+ "rewards/rejected": -0.5589307546615601,
+ "step": 930
+ },
+ {
+ "epoch": 0.8612001832340815,
+ "grad_norm": 0.9841431379318237,
+ "learning_rate": 5.732924089870245e-08,
+ "logits/chosen": -0.46664732694625854,
+ "logits/rejected": -0.6183963418006897,
+ "logps/chosen": -208.5711669921875,
+ "logps/rejected": -259.8545837402344,
+ "loss": 0.2725,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.87375009059906,
+ "rewards/margins": 2.2145848274230957,
+ "rewards/rejected": -0.34083452820777893,
+ "step": 940
+ },
+ {
+ "epoch": 0.8703618873110398,
+ "grad_norm": 1.0470377206802368,
+ "learning_rate": 5.011275652893782e-08,
+ "logits/chosen": -0.6211365461349487,
+ "logits/rejected": -0.6796506643295288,
+ "logps/chosen": -152.32177734375,
+ "logps/rejected": -218.6947479248047,
+ "loss": 0.2395,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.6267896890640259,
+ "rewards/margins": 2.076831102371216,
+ "rewards/rejected": -0.45004144310951233,
+ "step": 950
+ },
+ {
+ "epoch": 0.8795235913879982,
+ "grad_norm": 1.149141550064087,
+ "learning_rate": 4.3357619788127634e-08,
+ "logits/chosen": -0.6848306655883789,
+ "logits/rejected": -0.7928074598312378,
+ "logps/chosen": -200.9459228515625,
+ "logps/rejected": -254.53952026367188,
+ "loss": 0.2671,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 1.663435697555542,
+ "rewards/margins": 1.9824724197387695,
+ "rewards/rejected": -0.3190363943576813,
+ "step": 960
+ },
+ {
+ "epoch": 0.8886852954649564,
+ "grad_norm": 1.5327177047729492,
+ "learning_rate": 3.707075789273306e-08,
+ "logits/chosen": -0.6709850430488586,
+ "logits/rejected": -0.6786261796951294,
+ "logps/chosen": -146.47488403320312,
+ "logps/rejected": -247.5612335205078,
+ "loss": 0.2843,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": 1.8335565328598022,
+ "rewards/margins": 2.448019504547119,
+ "rewards/rejected": -0.6144627332687378,
+ "step": 970
+ },
+ {
+ "epoch": 0.8978469995419148,
+ "grad_norm": 1.4160970449447632,
+ "learning_rate": 3.125861785558015e-08,
+ "logits/chosen": -0.7669690251350403,
+ "logits/rejected": -0.8644639849662781,
+ "logps/chosen": -169.50274658203125,
+ "logps/rejected": -304.95098876953125,
+ "loss": 0.2834,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 1.8083531856536865,
+ "rewards/margins": 2.9228787422180176,
+ "rewards/rejected": -1.114525556564331,
+ "step": 980
+ },
+ {
+ "epoch": 0.9070087036188731,
+ "grad_norm": 2.2089922428131104,
+ "learning_rate": 2.592715987461702e-08,
+ "logits/chosen": -0.7596947550773621,
+ "logits/rejected": -0.7801377177238464,
+ "logps/chosen": -217.3747100830078,
+ "logps/rejected": -258.47796630859375,
+ "loss": 0.3283,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.5211076736450195,
+ "rewards/margins": 1.6776663064956665,
+ "rewards/rejected": -0.15655846893787384,
+ "step": 990
+ },
+ {
+ "epoch": 0.9161704076958315,
+ "grad_norm": 1.1835498809814453,
+ "learning_rate": 2.108185122088546e-08,
+ "logits/chosen": -0.7457928657531738,
+ "logits/rejected": -0.6937334537506104,
+ "logps/chosen": -169.40447998046875,
+ "logps/rejected": -247.85061645507812,
+ "loss": 0.268,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.8612730503082275,
+ "rewards/margins": 2.29799747467041,
+ "rewards/rejected": -0.43672457337379456,
+ "step": 1000
+ },
+ {
+ "epoch": 0.9161704076958315,
+ "eval_logits/chosen": -0.6635109186172485,
+ "eval_logits/rejected": -0.6690148711204529,
+ "eval_logps/chosen": -200.81570434570312,
+ "eval_logps/rejected": -276.8312072753906,
+ "eval_loss": 0.29993191361427307,
+ "eval_rewards/accuracies": 0.884393036365509,
+ "eval_rewards/chosen": 1.6797596216201782,
+ "eval_rewards/margins": 2.1726391315460205,
+ "eval_rewards/rejected": -0.4928795397281647,
+ "eval_runtime": 253.6729,
+ "eval_samples_per_second": 10.892,
+ "eval_steps_per_second": 1.364,
+ "step": 1000
+ },
+ {
+ "epoch": 0.9253321117727897,
+ "grad_norm": 1.9540088176727295,
+ "learning_rate": 1.672766063197789e-08,
+ "logits/chosen": -0.6848675608634949,
+ "logits/rejected": -0.6749913692474365,
+ "logps/chosen": -182.75892639160156,
+ "logps/rejected": -234.47232055664062,
+ "loss": 0.2621,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.729029655456543,
+ "rewards/margins": 2.2015597820281982,
+ "rewards/rejected": -0.47253018617630005,
+ "step": 1010
+ },
+ {
+ "epoch": 0.934493815849748,
+ "grad_norm": 1.5091657638549805,
+ "learning_rate": 1.286905321672621e-08,
+ "logits/chosen": -0.6353659629821777,
+ "logits/rejected": -0.6381145715713501,
+ "logps/chosen": -118.5170669555664,
+ "logps/rejected": -205.03689575195312,
+ "loss": 0.2602,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 1.9368598461151123,
+ "rewards/margins": 2.6814284324645996,
+ "rewards/rejected": -0.744568407535553,
+ "step": 1020
+ },
+ {
+ "epoch": 0.9436555199267064,
+ "grad_norm": 1.7073416709899902,
+ "learning_rate": 9.509985876349491e-09,
+ "logits/chosen": -0.61668461561203,
+ "logits/rejected": -0.6262907385826111,
+ "logps/chosen": -143.79531860351562,
+ "logps/rejected": -229.5824737548828,
+ "loss": 0.2564,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 1.7922176122665405,
+ "rewards/margins": 2.186096429824829,
+ "rewards/rejected": -0.3938787579536438,
+ "step": 1030
+ },
+ {
+ "epoch": 0.9528172240036646,
+ "grad_norm": 1.6124757528305054,
+ "learning_rate": 6.6539032467546885e-09,
+ "logits/chosen": -0.790496289730072,
+ "logits/rejected": -0.7677043080329895,
+ "logps/chosen": -256.6092834472656,
+ "logps/rejected": -325.1084289550781,
+ "loss": 0.262,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": 1.504821538925171,
+ "rewards/margins": 2.0350840091705322,
+ "rewards/rejected": -0.5302623510360718,
+ "step": 1040
+ },
+ {
+ "epoch": 0.961978928080623,
+ "grad_norm": 1.0290203094482422,
+ "learning_rate": 4.303734166152706e-09,
+ "logits/chosen": -0.7079821825027466,
+ "logits/rejected": -0.6711887121200562,
+ "logps/chosen": -177.19007873535156,
+ "logps/rejected": -210.00613403320312,
+ "loss": 0.3015,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.794146180152893,
+ "rewards/margins": 2.358898639678955,
+ "rewards/rejected": -0.564752459526062,
+ "step": 1050
+ },
+ {
+ "epoch": 0.9711406321575813,
+ "grad_norm": 1.0389478206634521,
+ "learning_rate": 2.4618886716110676e-09,
+ "logits/chosen": -0.6768301725387573,
+ "logits/rejected": -0.6700726747512817,
+ "logps/chosen": -167.974609375,
+ "logps/rejected": -247.9666290283203,
+ "loss": 0.2928,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 1.5335534811019897,
+ "rewards/margins": 1.9949595928192139,
+ "rewards/rejected": -0.4614059329032898,
+ "step": 1060
+ },
+ {
+ "epoch": 0.9803023362345397,
+ "grad_norm": 0.8039052486419678,
+ "learning_rate": 1.1302555276238579e-09,
+ "logits/chosen": -0.6699908375740051,
+ "logits/rejected": -0.7130982279777527,
+ "logps/chosen": -109.6313247680664,
+ "logps/rejected": -177.20127868652344,
+ "loss": 0.232,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 1.9235477447509766,
+ "rewards/margins": 2.2871975898742676,
+ "rewards/rejected": -0.3636501729488373,
+ "step": 1070
+ },
+ {
+ "epoch": 0.9894640403114979,
+ "grad_norm": 2.4051105976104736,
+ "learning_rate": 3.102002892329536e-10,
+ "logits/chosen": -0.6233320832252502,
+ "logits/rejected": -0.6594210267066956,
+ "logps/chosen": -142.9510040283203,
+ "logps/rejected": -230.3099365234375,
+ "loss": 0.2467,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 1.8195747137069702,
+ "rewards/margins": 2.5644664764404297,
+ "rewards/rejected": -0.7448917031288147,
+ "step": 1080
+ },
+ {
+ "epoch": 0.9986257443884563,
+ "grad_norm": 1.9035027027130127,
+ "learning_rate": 2.5639016871248366e-12,
+ "logits/chosen": -0.608766496181488,
+ "logits/rejected": -0.5473885536193848,
+ "logps/chosen": -232.18392944335938,
+ "logps/rejected": -249.3759002685547,
+ "loss": 0.2929,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.7724415063858032,
+ "rewards/margins": 2.1429972648620605,
+ "rewards/rejected": -0.3705558478832245,
+ "step": 1090
+ },
+ {
+ "epoch": 0.9995419147961521,
+ "step": 1091,
+ "total_flos": 0.0,
+ "train_loss": 0.37405444834616947,
+ "train_runtime": 8738.9435,
+ "train_samples_per_second": 3.996,
+ "train_steps_per_second": 0.125
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 1091,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 1,
+ "trial_name": null,
+ "trial_params": null
+ }
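The file above follows the standard Trainer state layout: a `log_history` list of per-step records (training rows carry `"loss"`, evaluation rows carry `"eval_loss"`) plus run-level fields such as `max_steps`. As a minimal sketch of how these metrics can be read back, assuming the file has been downloaded locally as `trainer_state.json` (the path name is illustrative):

```python
import json


def final_eval_metrics(path):
    """Return the last evaluation record from a Trainer state file.

    Evaluation rows are the log_history entries that carry "eval_loss";
    per-step training rows carry "loss" instead and are skipped.
    """
    with open(path) as f:
        state = json.load(f)
    evals = [rec for rec in state["log_history"] if "eval_loss" in rec]
    return evals[-1] if evals else None
```

For the state in this commit, this would return the step-1000 evaluation record, whose `eval_loss` and reward margins match the summary figures reported in the README.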