mjbuehler committed on
Commit 3979ede
1 Parent(s): 2a07250

Model save
README.md ADDED
@@ -0,0 +1,60 @@
+ ---
+ library_name: transformers
+ base_model: lamm-mit/mistral-7B-v0.3-Instruct-CPT_SFT
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: mistral-7B-v0.3-Instruct-CPT_SFT-DPO
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # mistral-7B-v0.3-Instruct-CPT_SFT-DPO
+
+ This model is a fine-tuned version of [lamm-mit/mistral-7B-v0.3-Instruct-CPT_SFT](https://huggingface.co/lamm-mit/mistral-7B-v0.3-Instruct-CPT_SFT).
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-07
+ - train_batch_size: 4
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.4.0+cu121
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
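The card above documents a DPO run, and the training log below stores beta-scaled reward margins per batch. As a point of reference, the per-example DPO objective can be sketched in plain Python (a minimal sketch; `dpo_loss` and its argument names are illustrative, not part of this repository, and the logged batch `loss` averages per-example losses, so it will not equal the loss of the mean margin):

```python
import math

def dpo_loss(chosen_reward: float, rejected_reward: float) -> float:
    """Per-example DPO loss: -log(sigmoid(margin)), where each reward is the
    beta-scaled log-prob difference between the policy and the reference model."""
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(x)) computed stably as log1p(exp(-x))
    return math.log1p(math.exp(-margin))

# At initialization both rewards are 0, so the loss is log(2) ~= 0.6931,
# matching the first entry of the training log below.
print(round(dpo_loss(0.0, 0.0), 4))  # 0.6931
```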
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 1.0,
+ "total_flos": 0.0,
+ "train_loss": 0.22162555615179336,
+ "train_runtime": 6190.6716,
+ "train_samples": 70516,
+ "train_samples_per_second": 11.391,
+ "train_steps_per_second": 0.178
+ }
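The aggregate numbers above are mutually consistent with the effective batch size stated in the model card; a quick sanity check (figures copied from the JSON; treating the step count as ceil(samples / batch) assumes the dataloader does not drop the last partial batch):

```python
train_samples = 70516
train_runtime = 6190.6716    # seconds
effective_batch = 4 * 8 * 2  # per-device batch * num_devices * grad accumulation steps

samples_per_second = train_samples / train_runtime
steps = -(-train_samples // effective_batch)  # ceiling division -> optimizer steps per epoch
steps_per_second = steps / train_runtime

print(round(samples_per_second, 3))  # 11.391
print(steps)                         # 1102
print(round(steps_per_second, 3))    # 0.178
```

All three derived values match the logged `train_samples_per_second`, `global_step`, and `train_steps_per_second`.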
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "transformers_version": "4.44.2"
+ }
runs/Aug27_16-07-36_192-222-52-143/events.out.tfevents.1724775259.192-222-52-143.65899.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:13db3c631e29011812acdba756aa872d2ec7a5fe495f2b2a1c65f4df5e34e06d
- size 82439
+ oid sha256:186bdb98e7525a0f3fbaa3e1d618c98f3d8a5f8eaef522a3a490a959f39cc8c3
+ size 82793
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 1.0,
+ "total_flos": 0.0,
+ "train_loss": 0.22162555615179336,
+ "train_runtime": 6190.6716,
+ "train_samples": 70516,
+ "train_samples_per_second": 11.391,
+ "train_steps_per_second": 0.178
+ }
trainer_state.json ADDED
@@ -0,0 +1,1707 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 1.0,
+ "eval_steps": 500,
+ "global_step": 1102,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.0009074410163339383,
+ "grad_norm": 142.2236804737182,
+ "learning_rate": 4.504504504504504e-09,
+ "logits/chosen": -2.992155075073242,
+ "logits/rejected": -2.8812735080718994,
+ "logps/chosen": -496.1777038574219,
+ "logps/rejected": -286.02813720703125,
+ "loss": 0.6931,
+ "rewards/accuracies": 0.0,
+ "rewards/chosen": 0.0,
+ "rewards/margins": 0.0,
+ "rewards/rejected": 0.0,
+ "step": 1
+ },
+ {
+ "epoch": 0.009074410163339383,
+ "grad_norm": 117.29702644691126,
+ "learning_rate": 4.504504504504504e-08,
+ "logits/chosen": -3.0335752964019775,
+ "logits/rejected": -2.898245334625244,
+ "logps/chosen": -439.4527587890625,
+ "logps/rejected": -257.0932312011719,
+ "loss": 0.6922,
+ "rewards/accuracies": 0.4027777910232544,
+ "rewards/chosen": -0.0025693608913570642,
+ "rewards/margins": -0.003381188027560711,
+ "rewards/rejected": 0.000811827601864934,
+ "step": 10
+ },
+ {
+ "epoch": 0.018148820326678767,
+ "grad_norm": 121.55880301059094,
+ "learning_rate": 9.009009009009008e-08,
+ "logits/chosen": -2.9626657962799072,
+ "logits/rejected": -2.811065435409546,
+ "logps/chosen": -350.26495361328125,
+ "logps/rejected": -255.7550811767578,
+ "loss": 0.6818,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": 0.023313423618674278,
+ "rewards/margins": 0.02311089262366295,
+ "rewards/rejected": 0.00020253304683137685,
+ "step": 20
+ },
+ {
+ "epoch": 0.02722323049001815,
+ "grad_norm": 100.93699671377806,
+ "learning_rate": 1.3513513513513515e-07,
+ "logits/chosen": -2.9238393306732178,
+ "logits/rejected": -2.7606401443481445,
+ "logps/chosen": -429.740966796875,
+ "logps/rejected": -255.2905731201172,
+ "loss": 0.6275,
+ "rewards/accuracies": 0.7875000238418579,
+ "rewards/chosen": 0.16224327683448792,
+ "rewards/margins": 0.1733538806438446,
+ "rewards/rejected": -0.01111060194671154,
+ "step": 30
+ },
+ {
+ "epoch": 0.036297640653357534,
+ "grad_norm": 59.986377877914826,
+ "learning_rate": 1.8018018018018017e-07,
+ "logits/chosen": -3.002030372619629,
+ "logits/rejected": -2.842498779296875,
+ "logps/chosen": -454.6031188964844,
+ "logps/rejected": -251.68063354492188,
+ "loss": 0.5225,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": 0.473835289478302,
+ "rewards/margins": 0.5063358545303345,
+ "rewards/rejected": -0.03250055015087128,
+ "step": 40
+ },
+ {
+ "epoch": 0.045372050816696916,
+ "grad_norm": 50.035828434181596,
+ "learning_rate": 2.2522522522522522e-07,
+ "logits/chosen": -2.9417529106140137,
+ "logits/rejected": -2.826834201812744,
+ "logps/chosen": -385.17877197265625,
+ "logps/rejected": -235.72042846679688,
+ "loss": 0.4419,
+ "rewards/accuracies": 0.7875000238418579,
+ "rewards/chosen": 0.9201729893684387,
+ "rewards/margins": 1.0462477207183838,
+ "rewards/rejected": -0.12607479095458984,
+ "step": 50
+ },
+ {
+ "epoch": 0.0544464609800363,
+ "grad_norm": 49.18557390967671,
+ "learning_rate": 2.702702702702703e-07,
+ "logits/chosen": -2.95581316947937,
+ "logits/rejected": -2.827500581741333,
+ "logps/chosen": -356.5251770019531,
+ "logps/rejected": -240.679931640625,
+ "loss": 0.3881,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": 1.303996205329895,
+ "rewards/margins": 1.6420990228652954,
+ "rewards/rejected": -0.33810287714004517,
+ "step": 60
+ },
+ {
+ "epoch": 0.06352087114337568,
+ "grad_norm": 39.424363627526176,
+ "learning_rate": 3.153153153153153e-07,
+ "logits/chosen": -3.004495620727539,
+ "logits/rejected": -2.840688467025757,
+ "logps/chosen": -372.34857177734375,
+ "logps/rejected": -231.1216278076172,
+ "loss": 0.3465,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.3726818561553955,
+ "rewards/margins": 1.8177658319473267,
+ "rewards/rejected": -0.44508394598960876,
+ "step": 70
+ },
+ {
+ "epoch": 0.07259528130671507,
+ "grad_norm": 45.15703470841338,
+ "learning_rate": 3.6036036036036033e-07,
+ "logits/chosen": -2.994868516921997,
+ "logits/rejected": -2.897393226623535,
+ "logps/chosen": -365.3287353515625,
+ "logps/rejected": -249.77969360351562,
+ "loss": 0.3355,
+ "rewards/accuracies": 0.862500011920929,
+ "rewards/chosen": 1.2107493877410889,
+ "rewards/margins": 1.8611555099487305,
+ "rewards/rejected": -0.6504060626029968,
+ "step": 80
+ },
+ {
+ "epoch": 0.08166969147005444,
+ "grad_norm": 38.281035325477184,
+ "learning_rate": 4.054054054054054e-07,
+ "logits/chosen": -3.043982982635498,
+ "logits/rejected": -2.9395782947540283,
+ "logps/chosen": -421.8617248535156,
+ "logps/rejected": -285.7194519042969,
+ "loss": 0.2828,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 1.4924074411392212,
+ "rewards/margins": 2.530264377593994,
+ "rewards/rejected": -1.0378568172454834,
+ "step": 90
+ },
+ {
+ "epoch": 0.09074410163339383,
+ "grad_norm": 31.39224573581566,
+ "learning_rate": 4.5045045045045043e-07,
+ "logits/chosen": -3.025484561920166,
+ "logits/rejected": -2.930591583251953,
+ "logps/chosen": -398.21221923828125,
+ "logps/rejected": -283.81158447265625,
+ "loss": 0.2717,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": 1.310978651046753,
+ "rewards/margins": 2.4949724674224854,
+ "rewards/rejected": -1.1839935779571533,
+ "step": 100
+ },
+ {
+ "epoch": 0.0998185117967332,
+ "grad_norm": 36.97628979551037,
+ "learning_rate": 4.954954954954955e-07,
+ "logits/chosen": -3.070251941680908,
+ "logits/rejected": -2.9774134159088135,
+ "logps/chosen": -334.50146484375,
+ "logps/rejected": -254.9846649169922,
+ "loss": 0.2879,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": 1.2126104831695557,
+ "rewards/margins": 2.4223546981811523,
+ "rewards/rejected": -1.2097440958023071,
+ "step": 110
+ },
+ {
+ "epoch": 0.1088929219600726,
+ "grad_norm": 36.40105021641922,
+ "learning_rate": 4.99898253844669e-07,
+ "logits/chosen": -3.031291961669922,
+ "logits/rejected": -2.954357862472534,
+ "logps/chosen": -379.811767578125,
+ "logps/rejected": -316.3371276855469,
+ "loss": 0.2486,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": 1.340606689453125,
+ "rewards/margins": 3.5223803520202637,
+ "rewards/rejected": -2.1817739009857178,
+ "step": 120
+ },
+ {
+ "epoch": 0.11796733212341198,
+ "grad_norm": 33.76463840674312,
+ "learning_rate": 4.995466450646198e-07,
+ "logits/chosen": -3.1293764114379883,
+ "logits/rejected": -3.0540575981140137,
+ "logps/chosen": -377.92376708984375,
+ "logps/rejected": -272.8460388183594,
+ "loss": 0.2578,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.2404711246490479,
+ "rewards/margins": 3.5550155639648438,
+ "rewards/rejected": -2.314544677734375,
+ "step": 130
+ },
+ {
+ "epoch": 0.12704174228675136,
+ "grad_norm": 39.029774483087,
+ "learning_rate": 4.989442707764628e-07,
+ "logits/chosen": -3.071303129196167,
+ "logits/rejected": -3.0478246212005615,
+ "logps/chosen": -376.86480712890625,
+ "logps/rejected": -292.97735595703125,
+ "loss": 0.2579,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": 1.3565551042556763,
+ "rewards/margins": 3.8431458473205566,
+ "rewards/rejected": -2.4865901470184326,
+ "step": 140
+ },
+ {
+ "epoch": 0.13611615245009073,
+ "grad_norm": 43.986124060871525,
+ "learning_rate": 4.980917362966688e-07,
+ "logits/chosen": -3.1637825965881348,
+ "logits/rejected": -3.027613401412964,
+ "logps/chosen": -415.8460388183594,
+ "logps/rejected": -311.55364990234375,
+ "loss": 0.2438,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.2525231838226318,
+ "rewards/margins": 4.118167400360107,
+ "rewards/rejected": -2.8656444549560547,
+ "step": 150
+ },
+ {
+ "epoch": 0.14519056261343014,
+ "grad_norm": 46.28933752084163,
+ "learning_rate": 4.969898983237597e-07,
+ "logits/chosen": -3.1585474014282227,
+ "logits/rejected": -3.050255298614502,
+ "logps/chosen": -353.08795166015625,
+ "logps/rejected": -283.6133117675781,
+ "loss": 0.2386,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 1.1768524646759033,
+ "rewards/margins": 3.947617769241333,
+ "rewards/rejected": -2.7707653045654297,
+ "step": 160
+ },
+ {
+ "epoch": 0.1542649727767695,
+ "grad_norm": 40.8112800014664,
+ "learning_rate": 4.95639864077426e-07,
+ "logits/chosen": -3.096083164215088,
+ "logits/rejected": -3.021690845489502,
+ "logps/chosen": -412.97119140625,
+ "logps/rejected": -300.5186462402344,
+ "loss": 0.2598,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 1.2744516134262085,
+ "rewards/margins": 4.562188148498535,
+ "rewards/rejected": -3.287736177444458,
+ "step": 170
+ },
+ {
+ "epoch": 0.16333938294010888,
+ "grad_norm": 39.2976511256155,
+ "learning_rate": 4.940429901858992e-07,
+ "logits/chosen": -3.0431342124938965,
+ "logits/rejected": -2.972602605819702,
+ "logps/chosen": -365.78326416015625,
+ "logps/rejected": -281.6066589355469,
+ "loss": 0.2303,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": 1.166215181350708,
+ "rewards/margins": 4.303629398345947,
+ "rewards/rejected": -3.1374142169952393,
+ "step": 180
+ },
+ {
+ "epoch": 0.1724137931034483,
+ "grad_norm": 34.32156443441858,
+ "learning_rate": 4.922008813226972e-07,
+ "logits/chosen": -3.0900955200195312,
+ "logits/rejected": -2.9535610675811768,
+ "logps/chosen": -395.902099609375,
+ "logps/rejected": -305.49871826171875,
+ "loss": 0.2338,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.2969857454299927,
+ "rewards/margins": 4.443634986877441,
+ "rewards/rejected": -3.14664888381958,
+ "step": 190
+ },
+ {
+ "epoch": 0.18148820326678766,
+ "grad_norm": 44.14419053367824,
+ "learning_rate": 4.901153885941126e-07,
+ "logits/chosen": -3.0704243183135986,
+ "logits/rejected": -2.931751251220703,
+ "logps/chosen": -411.55560302734375,
+ "logps/rejected": -368.4609375,
+ "loss": 0.2297,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 1.5352587699890137,
+ "rewards/margins": 5.012287616729736,
+ "rewards/rejected": -3.4770290851593018,
+ "step": 200
+ },
+ {
+ "epoch": 0.19056261343012704,
+ "grad_norm": 30.76712341757137,
+ "learning_rate": 4.877886076790663e-07,
+ "logits/chosen": -2.944895029067993,
+ "logits/rejected": -2.8378586769104004,
+ "logps/chosen": -375.783203125,
+ "logps/rejected": -300.9497375488281,
+ "loss": 0.2302,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": 1.2123253345489502,
+ "rewards/margins": 4.578190803527832,
+ "rewards/rejected": -3.3658649921417236,
+ "step": 210
+ },
+ {
+ "epoch": 0.1996370235934664,
+ "grad_norm": 37.33134180621657,
+ "learning_rate": 4.852228767231913e-07,
+ "logits/chosen": -3.0588059425354004,
+ "logits/rejected": -2.8839163780212402,
+ "logps/chosen": -387.7483215332031,
+ "logps/rejected": -300.7808532714844,
+ "loss": 0.2294,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 1.5753682851791382,
+ "rewards/margins": 5.524721145629883,
+ "rewards/rejected": -3.949352741241455,
+ "step": 220
+ },
+ {
+ "epoch": 0.20871143375680581,
+ "grad_norm": 38.73538764825298,
+ "learning_rate": 4.824207739892674e-07,
+ "logits/chosen": -3.059415102005005,
+ "logits/rejected": -2.958604335784912,
+ "logps/chosen": -406.916015625,
+ "logps/rejected": -364.64337158203125,
+ "loss": 0.212,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": 1.1598107814788818,
+ "rewards/margins": 4.741861343383789,
+ "rewards/rejected": -3.5820508003234863,
+ "step": 230
+ },
+ {
+ "epoch": 0.2177858439201452,
+ "grad_norm": 32.6395207117509,
+ "learning_rate": 4.793851152663654e-07,
+ "logits/chosen": -2.9935600757598877,
+ "logits/rejected": -2.83933687210083,
+ "logps/chosen": -371.41241455078125,
+ "logps/rejected": -300.81817626953125,
+ "loss": 0.2398,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 0.9606507420539856,
+ "rewards/margins": 4.657492160797119,
+ "rewards/rejected": -3.6968414783477783,
+ "step": 240
+ },
+ {
+ "epoch": 0.22686025408348456,
+ "grad_norm": 42.815736529820505,
+ "learning_rate": 4.7611895104030507e-07,
+ "logits/chosen": -2.9664015769958496,
+ "logits/rejected": -2.933074474334717,
+ "logps/chosen": -398.3942565917969,
+ "logps/rejected": -339.1114501953125,
+ "loss": 0.2072,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 1.0510776042938232,
+ "rewards/margins": 5.130535125732422,
+ "rewards/rejected": -4.079457759857178,
+ "step": 250
+ },
+ {
+ "epoch": 0.23593466424682397,
+ "grad_norm": 48.49290374090693,
+ "learning_rate": 4.726255634282693e-07,
+ "logits/chosen": -2.9970316886901855,
+ "logits/rejected": -2.9537835121154785,
+ "logps/chosen": -392.30181884765625,
+ "logps/rejected": -379.0828857421875,
+ "loss": 0.2264,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 0.7763628959655762,
+ "rewards/margins": 4.426127910614014,
+ "rewards/rejected": -3.6497650146484375,
+ "step": 260
+ },
+ {
+ "epoch": 0.24500907441016334,
+ "grad_norm": 32.77780026693269,
+ "learning_rate": 4.689084628806562e-07,
+ "logits/chosen": -2.927907943725586,
+ "logits/rejected": -2.8476579189300537,
+ "logps/chosen": -345.444091796875,
+ "logps/rejected": -308.51922607421875,
+ "loss": 0.2017,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 0.9452263116836548,
+ "rewards/margins": 5.563403129577637,
+ "rewards/rejected": -4.61817741394043,
+ "step": 270
+ },
+ {
+ "epoch": 0.2540834845735027,
+ "grad_norm": 43.514698543958396,
+ "learning_rate": 4.6497138465348296e-07,
+ "logits/chosen": -3.086697578430176,
+ "logits/rejected": -2.8975157737731934,
+ "logps/chosen": -374.7484436035156,
+ "logps/rejected": -321.0020446777344,
+ "loss": 0.2246,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": 0.891644299030304,
+ "rewards/margins": 5.307863712310791,
+ "rewards/rejected": -4.416219234466553,
+ "step": 280
+ },
+ {
+ "epoch": 0.2631578947368421,
+ "grad_norm": 52.94657770019182,
+ "learning_rate": 4.608182850548852e-07,
+ "logits/chosen": -3.0034172534942627,
+ "logits/rejected": -2.9029269218444824,
+ "logps/chosen": -367.73492431640625,
+ "logps/rejected": -322.0057373046875,
+ "loss": 0.1981,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 1.3871870040893555,
+ "rewards/margins": 5.972929954528809,
+ "rewards/rejected": -4.585742950439453,
+ "step": 290
+ },
+ {
+ "epoch": 0.27223230490018147,
+ "grad_norm": 177.99829370761782,
+ "learning_rate": 4.564533374694852e-07,
+ "logits/chosen": -3.0038394927978516,
+ "logits/rejected": -2.9105029106140137,
+ "logps/chosen": -433.2913513183594,
+ "logps/rejected": -328.7992858886719,
+ "loss": 0.1692,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 1.1000345945358276,
+ "rewards/margins": 5.619580268859863,
+ "rewards/rejected": -4.519545555114746,
+ "step": 300
+ },
+ {
+ "epoch": 0.2813067150635209,
+ "grad_norm": 50.49886074130672,
+ "learning_rate": 4.518809281646232e-07,
+ "logits/chosen": -2.985572338104248,
+ "logits/rejected": -2.889586925506592,
+ "logps/chosen": -393.39849853515625,
+ "logps/rejected": -343.38818359375,
+ "loss": 0.1725,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.2861218452453613,
+ "rewards/margins": 5.814927577972412,
+ "rewards/rejected": -4.528805732727051,
+ "step": 310
+ },
+ {
+ "epoch": 0.29038112522686027,
+ "grad_norm": 55.02234775232488,
+ "learning_rate": 4.4710565188266623e-07,
+ "logits/chosen": -2.9454689025878906,
+ "logits/rejected": -2.8238115310668945,
+ "logps/chosen": -402.4577331542969,
+ "logps/rejected": -325.34228515625,
+ "loss": 0.1948,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 1.1505086421966553,
+ "rewards/margins": 6.387723445892334,
+ "rewards/rejected": -5.237215042114258,
+ "step": 320
+ },
+ {
+ "epoch": 0.29945553539019965,
+ "grad_norm": 39.01256666197835,
+ "learning_rate": 4.4213230722382343e-07,
+ "logits/chosen": -2.9265170097351074,
+ "logits/rejected": -2.8057961463928223,
+ "logps/chosen": -395.21783447265625,
+ "logps/rejected": -351.7288513183594,
+ "loss": 0.1713,
+ "rewards/accuracies": 0.9624999761581421,
+ "rewards/chosen": 1.236005187034607,
+ "rewards/margins": 6.866035461425781,
+ "rewards/rejected": -5.630031108856201,
+ "step": 330
+ },
+ {
+ "epoch": 0.308529945553539,
+ "grad_norm": 47.58493724390636,
+ "learning_rate": 4.3696589182410805e-07,
+ "logits/chosen": -2.938333749771118,
+ "logits/rejected": -2.813967704772949,
+ "logps/chosen": -355.73797607421875,
+ "logps/rejected": -356.24774169921875,
+ "loss": 0.2,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 1.1646188497543335,
+ "rewards/margins": 5.591378211975098,
+ "rewards/rejected": -4.426760196685791,
+ "step": 340
+ },
+ {
+ "epoch": 0.3176043557168784,
+ "grad_norm": 39.65265947598852,
+ "learning_rate": 4.3161159733329143e-07,
+ "logits/chosen": -2.99360728263855,
+ "logits/rejected": -2.904939651489258,
+ "logps/chosen": -418.7228088378906,
+ "logps/rejected": -416.0142517089844,
+ "loss": 0.2203,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 0.9050772786140442,
+ "rewards/margins": 5.665884494781494,
+ "rewards/rejected": -4.760807514190674,
+ "step": 350
+ },
+ {
+ "epoch": 0.32667876588021777,
+ "grad_norm": 43.14961975290785,
+ "learning_rate": 4.2607480419789587e-07,
+ "logits/chosen": -2.92106556892395,
+ "logits/rejected": -2.852785587310791,
+ "logps/chosen": -333.32525634765625,
+ "logps/rejected": -300.5793151855469,
+ "loss": 0.1981,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 0.9021322131156921,
+ "rewards/margins": 5.309260368347168,
+ "rewards/rejected": -4.40712833404541,
+ "step": 360
+ },
+ {
+ "epoch": 0.33575317604355714,
+ "grad_norm": 27.974944443111728,
+ "learning_rate": 4.2036107625446783e-07,
+ "logits/chosen": -2.98514986038208,
+ "logits/rejected": -2.78173828125,
+ "logps/chosen": -397.83782958984375,
+ "logps/rejected": -335.9725646972656,
+ "loss": 0.1841,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 1.3740975856781006,
+ "rewards/margins": 7.041192531585693,
+ "rewards/rejected": -5.667096138000488,
+ "step": 370
+ },
+ {
+ "epoch": 0.3448275862068966,
+ "grad_norm": 35.04567436633488,
+ "learning_rate": 4.1447615513856635e-07,
+ "logits/chosen": -2.9535293579101562,
+ "logits/rejected": -2.8194968700408936,
+ "logps/chosen": -407.129150390625,
+ "logps/rejected": -377.0103759765625,
+ "loss": 0.1889,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.0056817531585693,
+ "rewards/margins": 5.7427191734313965,
+ "rewards/rejected": -4.737037181854248,
+ "step": 380
+ },
+ {
+ "epoch": 0.35390199637023595,
+ "grad_norm": 45.256080246210836,
+ "learning_rate": 4.084259545150832e-07,
+ "logits/chosen": -2.968611240386963,
+ "logits/rejected": -2.848597288131714,
+ "logps/chosen": -392.96746826171875,
+ "logps/rejected": -327.70751953125,
+ "loss": 0.2033,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": 0.9895534515380859,
+ "rewards/margins": 6.1571173667907715,
+ "rewards/rejected": -5.167563438415527,
+ "step": 390
+ },
+ {
+ "epoch": 0.3629764065335753,
+ "grad_norm": 41.85296728501026,
+ "learning_rate": 4.022165541356941e-07,
+ "logits/chosen": -2.9857497215270996,
+ "logits/rejected": -2.7626802921295166,
+ "logps/chosen": -396.25079345703125,
+ "logps/rejected": -345.28436279296875,
+ "loss": 0.1963,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 1.164312481880188,
+ "rewards/margins": 7.196902275085449,
+ "rewards/rejected": -6.032589912414551,
+ "step": 400
+ },
+ {
+ "epoch": 0.3720508166969147,
+ "grad_norm": 62.753257963550354,
+ "learning_rate": 3.9585419372941163e-07,
+ "logits/chosen": -2.9352316856384277,
+ "logits/rejected": -2.838834762573242,
+ "logps/chosen": -389.45428466796875,
+ "logps/rejected": -342.6952209472656,
+ "loss": 0.2307,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": 0.777059018611908,
+ "rewards/margins": 6.305887222290039,
+ "rewards/rejected": -5.5288286209106445,
+ "step": 410
+ },
+ {
+ "epoch": 0.3811252268602541,
+ "grad_norm": 48.98248771165015,
+ "learning_rate": 3.893452667323793e-07,
+ "logits/chosen": -3.0265731811523438,
+ "logits/rejected": -2.9187612533569336,
+ "logps/chosen": -394.258544921875,
+ "logps/rejected": -386.22991943359375,
+ "loss": 0.1854,
+ "rewards/accuracies": 0.9375,
+ "rewards/chosen": 1.1776565313339233,
+ "rewards/margins": 7.554742336273193,
+ "rewards/rejected": -6.3770856857299805,
+ "step": 420
+ },
+ {
+ "epoch": 0.39019963702359345,
+ "grad_norm": 39.229272199302606,
+ "learning_rate": 3.826963138632079e-07,
+ "logits/chosen": -2.914794683456421,
+ "logits/rejected": -2.7973687648773193,
+ "logps/chosen": -372.42962646484375,
+ "logps/rejected": -334.1484375,
+ "loss": 0.2196,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": 0.8507086634635925,
+ "rewards/margins": 6.879024505615234,
+ "rewards/rejected": -6.028316020965576,
+ "step": 430
+ },
+ {
+ "epoch": 0.3992740471869328,
+ "grad_norm": 37.98126277398323,
+ "learning_rate": 3.759140165503101e-07,
+ "logits/chosen": -3.016681671142578,
+ "logits/rejected": -2.8523800373077393,
+ "logps/chosen": -371.7410583496094,
+ "logps/rejected": -334.4661560058594,
+ "loss": 0.2039,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 1.0404958724975586,
+ "rewards/margins": 7.069964408874512,
+ "rewards/rejected": -6.029468536376953,
+ "step": 440
+ },
+ {
+ "epoch": 0.40834845735027225,
+ "grad_norm": 32.62957193935001,
+ "learning_rate": 3.6900519021783783e-07,
+ "logits/chosen": -3.0138421058654785,
+ "logits/rejected": -2.8914031982421875,
+ "logps/chosen": -382.3927917480469,
+ "logps/rejected": -346.4029541015625,
+ "loss": 0.1882,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": 0.9160375595092773,
+ "rewards/margins": 6.041922569274902,
+ "rewards/rejected": -5.125885486602783,
+ "step": 450
+ },
+ {
+ "epoch": 0.41742286751361163,
+ "grad_norm": 49.245544047890604,
+ "learning_rate": 3.619767774369694e-07,
+ "logits/chosen": -3.032761812210083,
+ "logits/rejected": -2.8894588947296143,
+ "logps/chosen": -375.8642578125,
+ "logps/rejected": -359.11322021484375,
+ "loss": 0.2369,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": 0.7896968126296997,
+ "rewards/margins": 5.545886039733887,
+ "rewards/rejected": -4.756189346313477,
+ "step": 460
+ },
+ {
+ "epoch": 0.426497277676951,
+ "grad_norm": 40.38378734951698,
+ "learning_rate": 3.548358409494291e-07,
+ "logits/chosen": -3.0017242431640625,
+ "logits/rejected": -2.8712053298950195,
+ "logps/chosen": -403.6952209472656,
+ "logps/rejected": -327.45025634765625,
+ "loss": 0.2113,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 0.7638002634048462,
+ "rewards/margins": 5.509848594665527,
+ "rewards/rejected": -4.746048450469971,
+ "step": 470
+ },
+ {
+ "epoch": 0.4355716878402904,
+ "grad_norm": 42.20523053538018,
+ "learning_rate": 3.475895565702479e-07,
+ "logits/chosen": -3.0346570014953613,
+ "logits/rejected": -2.8783650398254395,
+ "logps/chosen": -402.5455322265625,
+ "logps/rejected": -357.03704833984375,
+ "loss": 0.1601,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 0.7746015787124634,
+ "rewards/margins": 6.277700424194336,
+ "rewards/rejected": -5.503098487854004,
+ "step": 480
+ },
+ {
+ "epoch": 0.44464609800362975,
+ "grad_norm": 44.76927870480236,
+ "learning_rate": 3.402452059769006e-07,
+ "logits/chosen": -2.947755813598633,
+ "logits/rejected": -2.7606940269470215,
+ "logps/chosen": -388.6606140136719,
+ "logps/rejected": -330.046142578125,
+ "loss": 0.1948,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 1.0170588493347168,
+ "rewards/margins": 6.382786273956299,
+ "rewards/rejected": -5.365727424621582,
+ "step": 490
+ },
+ {
+ "epoch": 0.4537205081669691,
+ "grad_norm": 52.753061991347224,
+ "learning_rate": 3.3281016939206175e-07,
+ "logits/chosen": -2.9896976947784424,
+ "logits/rejected": -2.8140571117401123,
+ "logps/chosen": -393.71551513671875,
+ "logps/rejected": -367.58056640625,
+ "loss": 0.2108,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": 0.8480059504508972,
+ "rewards/margins": 6.418947696685791,
+ "rewards/rejected": -5.570941925048828,
+ "step": 500
+ },
+ {
+ "epoch": 0.4627949183303085,
+ "grad_norm": 33.81897197547977,
+ "learning_rate": 3.2529191816733575e-07,
+ "logits/chosen": -2.9537408351898193,
+ "logits/rejected": -2.8202428817749023,
+ "logps/chosen": -399.65386962890625,
+ "logps/rejected": -376.10516357421875
784
+ "loss": 0.1827,
785
+ "rewards/accuracies": 0.9375,
786
+ "rewards/chosen": 0.8464071154594421,
787
+ "rewards/margins": 6.456235408782959,
788
+ "rewards/rejected": -5.609827995300293,
789
+ "step": 510
790
+ },
791
+ {
792
+ "epoch": 0.47186932849364793,
793
+ "grad_norm": 54.599453283610345,
794
+ "learning_rate": 3.1769800727541315e-07,
795
+ "logits/chosen": -2.878075361251831,
796
+ "logits/rejected": -2.6963212490081787,
797
+ "logps/chosen": -387.65777587890625,
798
+ "logps/rejected": -343.40325927734375,
799
+ "loss": 0.1787,
800
+ "rewards/accuracies": 0.925000011920929,
801
+ "rewards/chosen": 0.7126725912094116,
802
+ "rewards/margins": 7.416208744049072,
803
+ "rewards/rejected": -6.703535556793213,
804
+ "step": 520
805
+ },
806
+ {
807
+ "epoch": 0.4809437386569873,
808
+ "grad_norm": 38.57546103390298,
809
+ "learning_rate": 3.1003606771819666e-07,
810
+ "logits/chosen": -2.9312281608581543,
811
+ "logits/rejected": -2.7162327766418457,
812
+ "logps/chosen": -394.450927734375,
813
+ "logps/rejected": -363.7554931640625,
814
+ "loss": 0.189,
815
+ "rewards/accuracies": 0.925000011920929,
816
+ "rewards/chosen": 0.8061065673828125,
817
+ "rewards/margins": 6.801262855529785,
818
+ "rewards/rejected": -5.995156288146973,
819
+ "step": 530
820
+ },
821
+ {
822
+ "epoch": 0.4900181488203267,
823
+ "grad_norm": 40.112845944693504,
824
+ "learning_rate": 3.023137988585276e-07,
825
+ "logits/chosen": -2.88523006439209,
826
+ "logits/rejected": -2.7553329467773438,
827
+ "logps/chosen": -389.08917236328125,
828
+ "logps/rejected": -401.0629577636719,
829
+ "loss": 0.2119,
830
+ "rewards/accuracies": 0.9125000238418579,
831
+ "rewards/chosen": 0.9484742879867554,
832
+ "rewards/margins": 6.505688667297363,
833
+ "rewards/rejected": -5.557214260101318,
834
+ "step": 540
835
+ },
836
+ {
837
+ "epoch": 0.49909255898366606,
838
+ "grad_norm": 58.41683522614029,
839
+ "learning_rate": 2.945389606832165e-07,
840
+ "logits/chosen": -2.8954288959503174,
841
+ "logits/rejected": -2.743122100830078,
842
+ "logps/chosen": -406.09307861328125,
843
+ "logps/rejected": -352.92401123046875,
844
+ "loss": 0.2069,
845
+ "rewards/accuracies": 0.887499988079071,
846
+ "rewards/chosen": 1.2253668308258057,
847
+ "rewards/margins": 8.006866455078125,
848
+ "rewards/rejected": -6.78149938583374,
849
+ "step": 550
850
+ },
851
+ {
852
+ "epoch": 0.5081669691470054,
853
+ "grad_norm": 45.17941049122177,
854
+ "learning_rate": 2.8671936600515445e-07,
855
+ "logits/chosen": -2.9202160835266113,
856
+ "logits/rejected": -2.773153066635132,
857
+ "logps/chosen": -373.97967529296875,
858
+ "logps/rejected": -369.88433837890625,
859
+ "loss": 0.1702,
860
+ "rewards/accuracies": 0.949999988079071,
861
+ "rewards/chosen": 0.7372530698776245,
862
+ "rewards/margins": 6.256679534912109,
863
+ "rewards/rejected": -5.5194268226623535,
864
+ "step": 560
865
+ },
866
+ {
867
+ "epoch": 0.5172413793103449,
868
+ "grad_norm": 45.0884998444585,
869
+ "learning_rate": 2.788628726123399e-07,
870
+ "logits/chosen": -2.9033942222595215,
871
+ "logits/rejected": -2.814613103866577,
872
+ "logps/chosen": -372.53851318359375,
873
+ "logps/rejected": -321.98760986328125,
874
+ "loss": 0.2037,
875
+ "rewards/accuracies": 0.9375,
876
+ "rewards/chosen": 0.7214611768722534,
877
+ "rewards/margins": 6.599020957946777,
878
+ "rewards/rejected": -5.877559661865234,
879
+ "step": 570
880
+ },
881
+ {
882
+ "epoch": 0.5263157894736842,
883
+ "grad_norm": 42.70236030373264,
884
+ "learning_rate": 2.7097737537171095e-07,
885
+ "logits/chosen": -2.9954402446746826,
886
+ "logits/rejected": -2.794814109802246,
887
+ "logps/chosen": -385.25457763671875,
888
+ "logps/rejected": -384.85858154296875,
889
+ "loss": 0.1929,
890
+ "rewards/accuracies": 0.9375,
891
+ "rewards/chosen": 0.578790009021759,
892
+ "rewards/margins": 6.581429481506348,
893
+ "rewards/rejected": -6.0026397705078125,
894
+ "step": 580
895
+ },
896
+ {
897
+ "epoch": 0.5353901996370236,
898
+ "grad_norm": 29.334160089625406,
899
+ "learning_rate": 2.6307079829571685e-07,
900
+ "logits/chosen": -2.958986282348633,
901
+ "logits/rejected": -2.819699764251709,
902
+ "logps/chosen": -398.150146484375,
903
+ "logps/rejected": -386.773193359375,
904
+ "loss": 0.1857,
905
+ "rewards/accuracies": 0.887499988079071,
906
+ "rewards/chosen": 0.7929005026817322,
907
+ "rewards/margins": 6.71899938583374,
908
+ "rewards/rejected": -5.9260993003845215,
909
+ "step": 590
910
+ },
911
+ {
912
+ "epoch": 0.5444646098003629,
913
+ "grad_norm": 29.399030894562397,
914
+ "learning_rate": 2.551510865796032e-07,
915
+ "logits/chosen": -2.888155698776245,
916
+ "logits/rejected": -2.7868504524230957,
917
+ "logps/chosen": -325.9752502441406,
918
+ "logps/rejected": -362.0836486816406,
919
+ "loss": 0.1832,
920
+ "rewards/accuracies": 0.8999999761581421,
921
+ "rewards/chosen": 0.4539141058921814,
922
+ "rewards/margins": 7.052548885345459,
923
+ "rewards/rejected": -6.598635196685791,
924
+ "step": 600
925
+ },
926
+ {
927
+ "epoch": 0.5535390199637024,
928
+ "grad_norm": 36.75793480010378,
929
+ "learning_rate": 2.472261986174088e-07,
930
+ "logits/chosen": -2.8816776275634766,
931
+ "logits/rejected": -2.7344906330108643,
932
+ "logps/chosen": -430.36065673828125,
933
+ "logps/rejected": -403.3946838378906,
934
+ "loss": 0.1992,
935
+ "rewards/accuracies": 0.8999999761581421,
936
+ "rewards/chosen": 0.22524046897888184,
937
+ "rewards/margins": 5.51934814453125,
938
+ "rewards/rejected": -5.294107913970947,
939
+ "step": 610
940
+ },
941
+ {
942
+ "epoch": 0.5626134301270418,
943
+ "grad_norm": 40.826798483602005,
944
+ "learning_rate": 2.393040980047015e-07,
945
+ "logits/chosen": -2.967729091644287,
946
+ "logits/rejected": -2.8321826457977295,
947
+ "logps/chosen": -422.4341735839844,
948
+ "logps/rejected": -382.2395324707031,
949
+ "loss": 0.1743,
950
+ "rewards/accuracies": 0.9125000238418579,
951
+ "rewards/chosen": 0.7554537057876587,
952
+ "rewards/margins": 7.423792839050293,
953
+ "rewards/rejected": -6.668339729309082,
954
+ "step": 620
955
+ },
956
+ {
957
+ "epoch": 0.5716878402903811,
958
+ "grad_norm": 38.907182524465654,
959
+ "learning_rate": 2.3139274553608494e-07,
960
+ "logits/chosen": -2.9248204231262207,
961
+ "logits/rejected": -2.7649989128112793,
962
+ "logps/chosen": -406.0899963378906,
963
+ "logps/rejected": -357.7886047363281,
964
+ "loss": 0.1822,
965
+ "rewards/accuracies": 0.925000011920929,
966
+ "rewards/chosen": 0.6926982402801514,
967
+ "rewards/margins": 6.806327819824219,
968
+ "rewards/rejected": -6.1136298179626465,
969
+ "step": 630
970
+ },
971
+ {
972
+ "epoch": 0.5807622504537205,
973
+ "grad_norm": 64.2592666433393,
974
+ "learning_rate": 2.2350009120552156e-07,
975
+ "logits/chosen": -2.9787538051605225,
976
+ "logits/rejected": -2.829073905944824,
977
+ "logps/chosen": -407.52520751953125,
978
+ "logps/rejected": -409.28289794921875,
979
+ "loss": 0.2093,
980
+ "rewards/accuracies": 0.887499988079071,
981
+ "rewards/chosen": 0.845790684223175,
982
+ "rewards/margins": 7.095252990722656,
983
+ "rewards/rejected": -6.249462604522705,
984
+ "step": 640
985
+ },
986
+ {
987
+ "epoch": 0.5898366606170599,
988
+ "grad_norm": 45.704317168510784,
989
+ "learning_rate": 2.1563406621750825e-07,
990
+ "logits/chosen": -2.8653993606567383,
991
+ "logits/rejected": -2.707611322402954,
992
+ "logps/chosen": -368.9764709472656,
993
+ "logps/rejected": -356.0212707519531,
994
+ "loss": 0.1977,
995
+ "rewards/accuracies": 0.862500011920929,
996
+ "rewards/chosen": 0.7670904397964478,
997
+ "rewards/margins": 6.501168727874756,
998
+ "rewards/rejected": -5.734078407287598,
999
+ "step": 650
1000
+ },
1001
+ {
1002
+ "epoch": 0.5989110707803993,
1003
+ "grad_norm": 25.897772050384255,
1004
+ "learning_rate": 2.0780257501713346e-07,
1005
+ "logits/chosen": -2.9106478691101074,
1006
+ "logits/rejected": -2.778205394744873,
1007
+ "logps/chosen": -423.90802001953125,
1008
+ "logps/rejected": -412.2939453125,
1009
+ "loss": 0.1887,
1010
+ "rewards/accuracies": 0.887499988079071,
1011
+ "rewards/chosen": 0.707015872001648,
1012
+ "rewards/margins": 7.062180995941162,
1013
+ "rewards/rejected": -6.355164527893066,
1014
+ "step": 660
1015
+ },
1016
+ {
1017
+ "epoch": 0.6079854809437386,
1018
+ "grad_norm": 34.675374116606974,
1019
+ "learning_rate": 2.000134873470243e-07,
1020
+ "logits/chosen": -2.8369908332824707,
1021
+ "logits/rejected": -2.7423768043518066,
1022
+ "logps/chosen": -342.940185546875,
1023
+ "logps/rejected": -344.69903564453125,
1024
+ "loss": 0.178,
1025
+ "rewards/accuracies": 0.9125000238418579,
1026
+ "rewards/chosen": 0.7170498371124268,
1027
+ "rewards/margins": 6.7858076095581055,
1028
+ "rewards/rejected": -6.068758964538574,
1029
+ "step": 670
1030
+ },
1031
+ {
1032
+ "epoch": 0.617059891107078,
1033
+ "grad_norm": 26.932488865103295,
1034
+ "learning_rate": 1.922746303391655e-07,
1035
+ "logits/chosen": -2.8944993019104004,
1036
+ "logits/rejected": -2.760188102722168,
1037
+ "logps/chosen": -394.26812744140625,
1038
+ "logps/rejected": -363.68389892578125,
1039
+ "loss": 0.1664,
1040
+ "rewards/accuracies": 0.925000011920929,
1041
+ "rewards/chosen": 1.4090805053710938,
1042
+ "rewards/margins": 7.532193660736084,
1043
+ "rewards/rejected": -6.123114585876465,
1044
+ "step": 680
1045
+ },
1046
+ {
1047
+ "epoch": 0.6261343012704175,
1048
+ "grad_norm": 37.470507715790056,
1049
+ "learning_rate": 1.8459378064953754e-07,
1050
+ "logits/chosen": -2.9657504558563232,
1051
+ "logits/rejected": -2.8593227863311768,
1052
+ "logps/chosen": -416.7271423339844,
1053
+ "logps/rejected": -370.2811584472656,
1054
+ "loss": 0.2084,
1055
+ "rewards/accuracies": 0.875,
1056
+ "rewards/chosen": 0.9790989756584167,
1057
+ "rewards/margins": 6.995152950286865,
1058
+ "rewards/rejected": -6.016053676605225,
1059
+ "step": 690
1060
+ },
1061
+ {
1062
+ "epoch": 0.6352087114337568,
1063
+ "grad_norm": 37.273869581880994,
1064
+ "learning_rate": 1.7697865664347694e-07,
1065
+ "logits/chosen": -2.9320342540740967,
1066
+ "logits/rejected": -2.7820043563842773,
1067
+ "logps/chosen": -388.0702819824219,
1068
+ "logps/rejected": -331.7247009277344,
1069
+ "loss": 0.183,
1070
+ "rewards/accuracies": 0.9125000238418579,
1071
+ "rewards/chosen": 0.8440800905227661,
1072
+ "rewards/margins": 6.018949031829834,
1073
+ "rewards/rejected": -5.174870014190674,
1074
+ "step": 700
1075
+ },
1076
+ {
1077
+ "epoch": 0.6442831215970962,
1078
+ "grad_norm": 44.63318462189936,
1079
+ "learning_rate": 1.6943691063961213e-07,
1080
+ "logits/chosen": -2.9753224849700928,
1081
+ "logits/rejected": -2.7769880294799805,
1082
+ "logps/chosen": -434.841796875,
1083
+ "logps/rejected": -362.3944396972656,
1084
+ "loss": 0.2011,
1085
+ "rewards/accuracies": 0.887499988079071,
1086
+ "rewards/chosen": 1.0161527395248413,
1087
+ "rewards/margins": 7.2209320068359375,
1088
+ "rewards/rejected": -6.20477819442749,
1089
+ "step": 710
1090
+ },
1091
+ {
1092
+ "epoch": 0.6533575317604355,
1093
+ "grad_norm": 44.28088050095069,
1094
+ "learning_rate": 1.6197612122016846e-07,
1095
+ "logits/chosen": -2.9147086143493652,
1096
+ "logits/rejected": -2.8043932914733887,
1097
+ "logps/chosen": -398.91796875,
1098
+ "logps/rejected": -389.1512145996094,
1099
+ "loss": 0.1713,
1100
+ "rewards/accuracies": 0.8500000238418579,
1101
+ "rewards/chosen": 0.8414427042007446,
1102
+ "rewards/margins": 6.729714870452881,
1103
+ "rewards/rejected": -5.888272285461426,
1104
+ "step": 720
1105
+ },
1106
+ {
1107
+ "epoch": 0.662431941923775,
1108
+ "grad_norm": 47.2641364219176,
1109
+ "learning_rate": 1.5460378561536985e-07,
1110
+ "logits/chosen": -2.917802333831787,
1111
+ "logits/rejected": -2.7316904067993164,
1112
+ "logps/chosen": -369.023193359375,
1113
+ "logps/rejected": -319.9668273925781,
1114
+ "loss": 0.1807,
1115
+ "rewards/accuracies": 0.8999999761581421,
1116
+ "rewards/chosen": 0.9150484204292297,
1117
+ "rewards/margins": 6.587407112121582,
1118
+ "rewards/rejected": -5.672359466552734,
1119
+ "step": 730
1120
+ },
1121
+ {
1122
+ "epoch": 0.6715063520871143,
1123
+ "grad_norm": 30.01994165255659,
1124
+ "learning_rate": 1.473273121695898e-07,
1125
+ "logits/chosen": -2.909447193145752,
1126
+ "logits/rejected": -2.808865547180176,
1127
+ "logps/chosen": -408.38360595703125,
1128
+ "logps/rejected": -387.4463806152344,
1129
+ "loss": 0.185,
1130
+ "rewards/accuracies": 0.887499988079071,
1131
+ "rewards/chosen": 0.7770580649375916,
1132
+ "rewards/margins": 6.37229061126709,
1133
+ "rewards/rejected": -5.5952324867248535,
1134
+ "step": 740
1135
+ },
1136
+ {
1137
+ "epoch": 0.6805807622504537,
1138
+ "grad_norm": 47.21435045984216,
1139
+ "learning_rate": 1.4015401289682214e-07,
1140
+ "logits/chosen": -2.8479952812194824,
1141
+ "logits/rejected": -2.741609811782837,
1142
+ "logps/chosen": -336.0369567871094,
1143
+ "logps/rejected": -328.47589111328125,
1144
+ "loss": 0.2359,
1145
+ "rewards/accuracies": 0.8999999761581421,
1146
+ "rewards/chosen": 0.46019020676612854,
1147
+ "rewards/margins": 5.526650905609131,
1148
+ "rewards/rejected": -5.066461086273193,
1149
+ "step": 750
1150
+ },
1151
+ {
1152
+ "epoch": 0.6896551724137931,
1153
+ "grad_norm": 36.804170758784224,
1154
+ "learning_rate": 1.3309109613295335e-07,
1155
+ "logits/chosen": -2.9334492683410645,
1156
+ "logits/rejected": -2.7972865104675293,
1157
+ "logps/chosen": -420.43133544921875,
1158
+ "logps/rejected": -374.5361328125,
1159
+ "loss": 0.1815,
1160
+ "rewards/accuracies": 0.8500000238418579,
1161
+ "rewards/chosen": 0.5759689211845398,
1162
+ "rewards/margins": 6.218279838562012,
1163
+ "rewards/rejected": -5.642312049865723,
1164
+ "step": 760
1165
+ },
1166
+ {
1167
+ "epoch": 0.6987295825771325,
1168
+ "grad_norm": 35.669878276021684,
1169
+ "learning_rate": 1.2614565929221848e-07,
1170
+ "logits/chosen": -2.9297587871551514,
1171
+ "logits/rejected": -2.762052059173584,
1172
+ "logps/chosen": -372.5475769042969,
1173
+ "logps/rejected": -365.61163330078125,
1174
+ "loss": 0.1943,
1175
+ "rewards/accuracies": 0.8999999761581421,
1176
+ "rewards/chosen": 0.7651561498641968,
1177
+ "rewards/margins": 6.711656093597412,
1178
+ "rewards/rejected": -5.946499824523926,
1179
+ "step": 770
1180
+ },
1181
+ {
1182
+ "epoch": 0.7078039927404719,
1183
+ "grad_norm": 37.95229234009131,
1184
+ "learning_rate": 1.1932468173512137e-07,
1185
+ "logits/chosen": -2.9614641666412354,
1186
+ "logits/rejected": -2.714155435562134,
1187
+ "logps/chosen": -418.35504150390625,
1188
+ "logps/rejected": -348.20721435546875,
1189
+ "loss": 0.1698,
1190
+ "rewards/accuracies": 0.8999999761581421,
1191
+ "rewards/chosen": 1.0913097858428955,
1192
+ "rewards/margins": 7.305689811706543,
1193
+ "rewards/rejected": -6.214380741119385,
1194
+ "step": 780
1195
+ },
1196
+ {
1197
+ "epoch": 0.7168784029038112,
1198
+ "grad_norm": 36.57594683086614,
1199
+ "learning_rate": 1.1263501775498438e-07,
1200
+ "logits/chosen": -2.938605785369873,
1201
+ "logits/rejected": -2.8134965896606445,
1202
+ "logps/chosen": -362.9466552734375,
1203
+ "logps/rejected": -375.305908203125,
1204
+ "loss": 0.2012,
1205
+ "rewards/accuracies": 0.887499988079071,
1206
+ "rewards/chosen": 0.29936400055885315,
1207
+ "rewards/margins": 5.319377899169922,
1208
+ "rewards/rejected": -5.020013332366943,
1209
+ "step": 790
1210
+ },
1211
+ {
1212
+ "epoch": 0.7259528130671506,
1213
+ "grad_norm": 44.23152439879075,
1214
+ "learning_rate": 1.0608338969017682e-07,
1215
+ "logits/chosen": -2.996309757232666,
1216
+ "logits/rejected": -2.8086256980895996,
1217
+ "logps/chosen": -454.28607177734375,
1218
+ "logps/rejected": -414.04010009765625,
1219
+ "loss": 0.2077,
1220
+ "rewards/accuracies": 0.875,
1221
+ "rewards/chosen": 1.2388927936553955,
1222
+ "rewards/margins": 7.735617160797119,
1223
+ "rewards/rejected": -6.496724605560303,
1224
+ "step": 800
1225
+ },
1226
+ {
1227
+ "epoch": 0.73502722323049,
1228
+ "grad_norm": 49.451677093132325,
1229
+ "learning_rate": 9.96763811689425e-08,
1230
+ "logits/chosen": -2.904308795928955,
1231
+ "logits/rejected": -2.783384323120117,
1232
+ "logps/chosen": -391.5575256347656,
1233
+ "logps/rejected": -389.44952392578125,
1234
+ "loss": 0.2014,
1235
+ "rewards/accuracies": 0.8999999761581421,
1236
+ "rewards/chosen": 0.8881933093070984,
1237
+ "rewards/margins": 6.98279333114624,
1238
+ "rewards/rejected": -6.094600677490234,
1239
+ "step": 810
1240
+ },
1241
+ {
1242
+ "epoch": 0.7441016333938294,
1243
+ "grad_norm": 52.0962797399699,
1244
+ "learning_rate": 9.3420430493615e-08,
1245
+ "logits/chosen": -2.8749382495880127,
1246
+ "logits/rejected": -2.759850025177002,
1247
+ "logps/chosen": -366.92822265625,
1248
+ "logps/rejected": -347.29150390625,
1249
+ "loss": 0.1903,
1250
+ "rewards/accuracies": 0.8999999761581421,
1251
+ "rewards/chosen": 0.6582810878753662,
1252
+ "rewards/margins": 6.6857147216796875,
1253
+ "rewards/rejected": -6.027434349060059,
1254
+ "step": 820
1255
+ },
1256
+ {
1257
+ "epoch": 0.7531760435571688,
1258
+ "grad_norm": 33.61793404339224,
1259
+ "learning_rate": 8.732182417086903e-08,
1260
+ "logits/chosen": -2.954723834991455,
1261
+ "logits/rejected": -2.773974895477295,
1262
+ "logps/chosen": -398.5955810546875,
1263
+ "logps/rejected": -388.16168212890625,
1264
+ "loss": 0.1736,
1265
+ "rewards/accuracies": 0.9624999761581421,
1266
+ "rewards/chosen": 1.4529194831848145,
1267
+ "rewards/margins": 8.60301399230957,
1268
+ "rewards/rejected": -7.150094032287598,
1269
+ "step": 830
1270
+ },
1271
+ {
1272
+ "epoch": 0.7622504537205081,
1273
+ "grad_norm": 35.017851342156135,
1274
+ "learning_rate": 8.138669059450778e-08,
1275
+ "logits/chosen": -2.9093117713928223,
1276
+ "logits/rejected": -2.808568000793457,
1277
+ "logps/chosen": -387.08154296875,
1278
+ "logps/rejected": -375.1770324707031,
1279
+ "loss": 0.1741,
1280
+ "rewards/accuracies": 0.9125000238418579,
1281
+ "rewards/chosen": 0.7035520076751709,
1282
+ "rewards/margins": 6.644250392913818,
1283
+ "rewards/rejected": -5.940698623657227,
1284
+ "step": 840
1285
+ },
1286
+ {
1287
+ "epoch": 0.7713248638838476,
1288
+ "grad_norm": 50.95837810281916,
1289
+ "learning_rate": 7.562099388713702e-08,
1290
+ "logits/chosen": -2.957639455795288,
1291
+ "logits/rejected": -2.8320231437683105,
1292
+ "logps/chosen": -385.71533203125,
1293
+ "logps/rejected": -370.684326171875,
1294
+ "loss": 0.1801,
1295
+ "rewards/accuracies": 0.925000011920929,
1296
+ "rewards/chosen": 1.0452622175216675,
1297
+ "rewards/margins": 6.53484582901001,
1298
+ "rewards/rejected": -5.489583492279053,
1299
+ "step": 850
1300
+ },
1301
+ {
1302
+ "epoch": 0.7803992740471869,
1303
+ "grad_norm": 38.76152254168693,
1304
+ "learning_rate": 7.003052790691089e-08,
1305
+ "logits/chosen": -2.9534757137298584,
1306
+ "logits/rejected": -2.808516502380371,
1307
+ "logps/chosen": -386.3631896972656,
1308
+ "logps/rejected": -357.408203125,
1309
+ "loss": 0.1747,
1310
+ "rewards/accuracies": 0.875,
1311
+ "rewards/chosen": 0.7921295762062073,
1312
+ "rewards/margins": 6.969007968902588,
1313
+ "rewards/rejected": -6.176877498626709,
1314
+ "step": 860
1315
+ },
1316
+ {
1317
+ "epoch": 0.7894736842105263,
1318
+ "grad_norm": 48.43342560419616,
1319
+ "learning_rate": 6.462091042537576e-08,
1320
+ "logits/chosen": -2.9628138542175293,
1321
+ "logits/rejected": -2.819770336151123,
1322
+ "logps/chosen": -469.53765869140625,
1323
+ "logps/rejected": -417.4677734375,
1324
+ "loss": 0.2053,
1325
+ "rewards/accuracies": 0.887499988079071,
1326
+ "rewards/chosen": 1.4326789379119873,
1327
+ "rewards/margins": 8.428723335266113,
1328
+ "rewards/rejected": -6.996045112609863,
1329
+ "step": 870
1330
+ },
1331
+ {
1332
+ "epoch": 0.7985480943738656,
1333
+ "grad_norm": 59.70924121351975,
1334
+ "learning_rate": 5.9397577482259043e-08,
1335
+ "logits/chosen": -2.8752875328063965,
1336
+ "logits/rejected": -2.7745578289031982,
1337
+ "logps/chosen": -355.57318115234375,
1338
+ "logps/rejected": -377.0833740234375,
1339
+ "loss": 0.2041,
1340
+ "rewards/accuracies": 0.887499988079071,
1341
+ "rewards/chosen": 0.831215500831604,
1342
+ "rewards/margins": 7.086012363433838,
1343
+ "rewards/rejected": -6.254797458648682,
1344
+ "step": 880
1345
+ },
1346
+ {
1347
+ "epoch": 0.8076225045372051,
1348
+ "grad_norm": 48.84810595799757,
1349
+ "learning_rate": 5.436577792287841e-08,
1350
+ "logits/chosen": -2.9051766395568848,
1351
+ "logits/rejected": -2.7932395935058594,
1352
+ "logps/chosen": -353.7893981933594,
1353
+ "logps/rejected": -335.53155517578125,
1354
+ "loss": 0.1852,
1355
+ "rewards/accuracies": 0.9375,
1356
+ "rewards/chosen": 1.2077093124389648,
1357
+ "rewards/margins": 7.8495988845825195,
1358
+ "rewards/rejected": -6.641890048980713,
1359
+ "step": 890
1360
+ },
1361
+ {
1362
+ "epoch": 0.8166969147005445,
1363
+ "grad_norm": 42.88339321895752,
1364
+ "learning_rate": 4.953056812365958e-08,
1365
+ "logits/chosen": -2.968008518218994,
1366
+ "logits/rejected": -2.7981395721435547,
1367
+ "logps/chosen": -372.144775390625,
1368
+ "logps/rejected": -380.69683837890625,
1369
+ "loss": 0.1864,
1370
+ "rewards/accuracies": 0.875,
1371
+ "rewards/chosen": 0.842302680015564,
1372
+ "rewards/margins": 6.652394771575928,
1373
+ "rewards/rejected": -5.810092449188232,
1374
+ "step": 900
1375
+ },
1376
+ {
1377
+ "epoch": 0.8257713248638838,
1378
+ "grad_norm": 28.951235980213713,
1379
+ "learning_rate": 4.489680691106279e-08,
1380
+ "logits/chosen": -2.989492654800415,
1381
+ "logits/rejected": -2.8102896213531494,
1382
+ "logps/chosen": -456.3057556152344,
1383
+ "logps/rejected": -383.52679443359375,
1384
+ "loss": 0.1658,
1385
+ "rewards/accuracies": 0.949999988079071,
1386
+ "rewards/chosen": 0.962626576423645,
1387
+ "rewards/margins": 6.753287315368652,
1388
+ "rewards/rejected": -5.790660381317139,
1389
+ "step": 910
1390
+ },
1391
+ {
1392
+ "epoch": 0.8348457350272233,
1393
+ "grad_norm": 30.634191739944754,
1394
+ "learning_rate": 4.046915067902443e-08,
1395
+ "logits/chosen": -2.9627013206481934,
1396
+ "logits/rejected": -2.7708044052124023,
1397
+ "logps/chosen": -386.7125549316406,
1398
+ "logps/rejected": -367.0064697265625,
1399
+ "loss": 0.1785,
1400
+ "rewards/accuracies": 0.9624999761581421,
1401
+ "rewards/chosen": 1.275061011314392,
1402
+ "rewards/margins": 8.114995956420898,
1403
+ "rewards/rejected": -6.839936256408691,
1404
+ "step": 920
1405
+ },
1406
+ {
1407
+ "epoch": 0.8439201451905626,
1408
+ "grad_norm": 40.72997706997498,
1409
+ "learning_rate": 3.625204870981974e-08,
1410
+ "logits/chosen": -2.981132984161377,
1411
+ "logits/rejected": -2.8593552112579346,
1412
+ "logps/chosen": -377.1697998046875,
1413
+ "logps/rejected": -368.0603942871094,
1414
+ "loss": 0.1753,
1415
+ "rewards/accuracies": 0.925000011920929,
1416
+ "rewards/chosen": 0.7182249426841736,
1417
+ "rewards/margins": 6.968735694885254,
1418
+ "rewards/rejected": -6.250511169433594,
1419
+ "step": 930
1420
+ },
1421
+ {
1422
+ "epoch": 0.852994555353902,
1423
+ "grad_norm": 41.94194593926165,
1424
+ "learning_rate": 3.2249738703049175e-08,
1425
+ "logits/chosen": -2.9394679069519043,
1426
+ "logits/rejected": -2.8016715049743652,
1427
+ "logps/chosen": -411.6636657714844,
1428
+ "logps/rejected": -401.5919494628906,
1429
+ "loss": 0.1752,
1430
+ "rewards/accuracies": 0.9375,
1431
+ "rewards/chosen": 0.9104796648025513,
1432
+ "rewards/margins": 6.801443576812744,
1433
+ "rewards/rejected": -5.890963554382324,
1434
+ "step": 940
1435
+ },
1436
+ {
1437
+ "epoch": 0.8620689655172413,
1438
+ "grad_norm": 41.16087847448295,
1439
+ "learning_rate": 2.8466242517240142e-08,
1440
+ "logits/chosen": -2.865461826324463,
1441
+ "logits/rejected": -2.7517268657684326,
1442
+ "logps/chosen": -380.6204833984375,
1443
+ "logps/rejected": -385.02252197265625,
1444
+ "loss": 0.1663,
1445
+ "rewards/accuracies": 0.9624999761581421,
1446
+ "rewards/chosen": 0.9618560671806335,
1447
+ "rewards/margins": 7.1874799728393555,
1448
+ "rewards/rejected": -6.225625038146973,
1449
+ "step": 950
1450
+ },
1451
+ {
1452
+ "epoch": 0.8711433756805808,
1453
+ "grad_norm": 42.56619788961049,
1454
+ "learning_rate": 2.4905362128344652e-08,
1455
+ "logits/chosen": -2.952007293701172,
1456
+ "logits/rejected": -2.8182990550994873,
1457
+ "logps/chosen": -385.042236328125,
1458
+ "logps/rejected": -372.07244873046875,
1459
+ "loss": 0.1849,
1460
+ "rewards/accuracies": 0.887499988079071,
1461
+ "rewards/chosen": 0.5808476209640503,
1462
+ "rewards/margins": 6.048865795135498,
1463
+ "rewards/rejected": -5.468017578125,
1464
+ "step": 960
1465
+ },
1466
+ {
1467
+ "epoch": 0.8802177858439202,
1468
+ "grad_norm": 42.694992390184375,
1469
+ "learning_rate": 2.1570675809193554e-08,
1470
+ "logits/chosen": -2.9329612255096436,
1471
+ "logits/rejected": -2.7521860599517822,
1472
+ "logps/chosen": -355.7178649902344,
1473
+ "logps/rejected": -344.80706787109375,
1474
+ "loss": 0.165,
1475
+ "rewards/accuracies": 0.887499988079071,
1476
+ "rewards/chosen": 0.8769713640213013,
1477
+ "rewards/margins": 7.218625068664551,
1478
+ "rewards/rejected": -6.341653823852539,
1479
+ "step": 970
1480
+ },
1481
+ {
1482
+ "epoch": 0.8892921960072595,
1483
+ "grad_norm": 44.47460215236176,
1484
+ "learning_rate": 1.846553453374586e-08,
1485
+ "logits/chosen": -2.974410057067871,
1486
+ "logits/rejected": -2.8346712589263916,
1487
+ "logps/chosen": -332.41546630859375,
1488
+ "logps/rejected": -347.7070007324219,
1489
+ "loss": 0.1838,
1490
+ "rewards/accuracies": 0.875,
1491
+ "rewards/chosen": 0.37545478343963623,
1492
+ "rewards/margins": 5.861527442932129,
1493
+ "rewards/rejected": -5.486072540283203,
1494
+ "step": 980
1495
+ },
1496
+ {
1497
+ "epoch": 0.8983666061705989,
1498
+ "grad_norm": 40.813616862543405,
1499
+ "learning_rate": 1.559305860974805e-08,
1500
+ "logits/chosen": -2.964592456817627,
1501
+ "logits/rejected": -2.7961647510528564,
1502
+ "logps/chosen": -376.69952392578125,
1503
+ "logps/rejected": -346.335205078125,
1504
+ "loss": 0.1654,
1505
+ "rewards/accuracies": 0.949999988079071,
1506
+ "rewards/chosen": 1.2973288297653198,
1507
+ "rewards/margins": 7.887129306793213,
1508
+ "rewards/rejected": -6.589800834655762,
1509
+ "step": 990
1510
+ },
1511
+ {
1512
+ "epoch": 0.9074410163339383,
1513
+ "grad_norm": 49.7394201603181,
1514
+ "learning_rate": 1.2956134543185449e-08,
1515
+ "logits/chosen": -2.921257257461548,
1516
+ "logits/rejected": -2.7407443523406982,
1517
+ "logps/chosen": -385.1451110839844,
1518
+ "logps/rejected": -322.24420166015625,
1519
+ "loss": 0.1997,
1520
+ "rewards/accuracies": 0.8999999761581421,
1521
+ "rewards/chosen": 0.8060976266860962,
1522
+ "rewards/margins": 6.704709053039551,
1523
+ "rewards/rejected": -5.898611545562744,
1524
+ "step": 1000
1525
+ },
1526
+ {
1527
+ "epoch": 0.9165154264972777,
1528
+ "grad_norm": 37.49009348161237,
1529
+ "learning_rate": 1.0557412137677884e-08,
1530
+ "logits/chosen": -2.918593406677246,
1531
+ "logits/rejected": -2.7678284645080566,
1532
+ "logps/chosen": -395.64837646484375,
1533
+ "logps/rejected": -375.7630615234375,
1534
+ "loss": 0.1726,
1535
+ "rewards/accuracies": 0.887499988079071,
1536
+ "rewards/chosen": 0.6721734404563904,
1537
+ "rewards/margins": 6.370687007904053,
1538
+ "rewards/rejected": -5.698513031005859,
1539
+ "step": 1010
1540
+ },
1541
+ {
1542
+ "epoch": 0.925589836660617,
1543
+ "grad_norm": 38.05733801062978,
1544
+ "learning_rate": 8.399301831733403e-09,
1545
+ "logits/chosen": -2.9395432472229004,
1546
+ "logits/rejected": -2.7823617458343506,
1547
+ "logps/chosen": -366.7617492675781,
1548
+ "logps/rejected": -380.7712707519531,
1549
+ "loss": 0.1866,
1550
+ "rewards/accuracies": 0.8999999761581421,
1551
+ "rewards/chosen": 0.8301785588264465,
1552
+ "rewards/margins": 7.507985591888428,
1553
+ "rewards/rejected": -6.677806854248047,
1554
+ "step": 1020
1555
+ },
1556
+ {
1557
+ "epoch": 0.9346642468239564,
1558
+ "grad_norm": 48.56610093039856,
1559
+ "learning_rate": 6.483972276536576e-09,
+ "logits/chosen": -2.9480996131896973,
+ "logits/rejected": -2.7890021800994873,
+ "logps/chosen": -426.88836669921875,
+ "logps/rejected": -351.18865966796875,
+ "loss": 0.1814,
+ "rewards/accuracies": 0.9375,
+ "rewards/chosen": 0.70756995677948,
+ "rewards/margins": 6.491690158843994,
+ "rewards/rejected": -5.784119606018066,
+ "step": 1030
+ },
+ {
+ "epoch": 0.9437386569872959,
+ "grad_norm": 46.68373107587798,
+ "learning_rate": 4.813348156704866e-09,
+ "logits/chosen": -2.9594180583953857,
+ "logits/rejected": -2.7199923992156982,
+ "logps/chosen": -368.69635009765625,
+ "logps/rejected": -394.76495361328125,
+ "loss": 0.1897,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 0.9022048115730286,
+ "rewards/margins": 7.9423065185546875,
+ "rewards/rejected": -7.040102481842041,
+ "step": 1040
+ },
+ {
+ "epoch": 0.9528130671506352,
+ "grad_norm": 36.58692937622254,
+ "learning_rate": 3.389108256203338e-09,
+ "logits/chosen": -2.948655605316162,
+ "logits/rejected": -2.722470283508301,
+ "logps/chosen": -396.5683288574219,
+ "logps/rejected": -346.69183349609375,
+ "loss": 0.1677,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": 0.7442362308502197,
+ "rewards/margins": 7.20086145401001,
+ "rewards/rejected": -6.456625938415527,
+ "step": 1050
+ },
+ {
+ "epoch": 0.9618874773139746,
+ "grad_norm": 55.430814413767415,
+ "learning_rate": 2.2126837713609403e-09,
+ "logits/chosen": -2.8871541023254395,
+ "logits/rejected": -2.744736671447754,
+ "logps/chosen": -364.49560546875,
+ "logps/rejected": -356.38458251953125,
+ "loss": 0.1784,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": 0.7931915521621704,
+ "rewards/margins": 6.91254186630249,
+ "rewards/rejected": -6.119350433349609,
+ "step": 1060
+ },
+ {
+ "epoch": 0.9709618874773139,
+ "grad_norm": 65.34227799978058,
+ "learning_rate": 1.2852568726837987e-09,
+ "logits/chosen": -2.975020408630371,
+ "logits/rejected": -2.7820792198181152,
+ "logps/chosen": -431.84185791015625,
+ "logps/rejected": -396.8271484375,
+ "loss": 0.2151,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": 1.1091196537017822,
+ "rewards/margins": 7.056349754333496,
+ "rewards/rejected": -5.947229862213135,
+ "step": 1070
+ },
+ {
+ "epoch": 0.9800362976406534,
+ "grad_norm": 33.456813936400515,
+ "learning_rate": 6.077595169105277e-10,
+ "logits/chosen": -2.886939525604248,
+ "logits/rejected": -2.7690751552581787,
+ "logps/chosen": -361.1842346191406,
+ "logps/rejected": -356.4033508300781,
+ "loss": 0.1774,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": 1.015371322631836,
+ "rewards/margins": 7.721086025238037,
+ "rewards/rejected": -6.705715179443359,
+ "step": 1080
+ },
+ {
+ "epoch": 0.9891107078039928,
+ "grad_norm": 40.254145384111084,
+ "learning_rate": 1.8087251050369344e-10,
+ "logits/chosen": -2.9307265281677246,
+ "logits/rejected": -2.742335557937622,
+ "logps/chosen": -380.31597900390625,
+ "logps/rejected": -383.219970703125,
+ "loss": 0.1869,
+ "rewards/accuracies": 0.9375,
+ "rewards/chosen": 1.3260897397994995,
+ "rewards/margins": 8.104445457458496,
+ "rewards/rejected": -6.778355598449707,
+ "step": 1090
+ },
+ {
+ "epoch": 0.9981851179673321,
+ "grad_norm": 39.63910669613641,
+ "learning_rate": 5.024825517951914e-12,
+ "logits/chosen": -2.9634110927581787,
+ "logits/rejected": -2.8349342346191406,
+ "logps/chosen": -384.75299072265625,
+ "logps/rejected": -384.3755798339844,
+ "loss": 0.1842,
+ "rewards/accuracies": 0.9375,
+ "rewards/chosen": 0.5791146755218506,
+ "rewards/margins": 6.315325736999512,
+ "rewards/rejected": -5.736211776733398,
+ "step": 1100
+ },
+ {
+ "epoch": 1.0,
+ "step": 1102,
+ "total_flos": 0.0,
+ "train_loss": 0.22162555615179336,
+ "train_runtime": 6190.6716,
+ "train_samples_per_second": 11.391,
+ "train_steps_per_second": 0.178
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 1102,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 500,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 4,
+ "trial_name": null,
+ "trial_params": null
+ }