taicheng committed on
Commit 8804fd9 · verified · 1 Parent(s): 7e37322

Model save

README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: alignment-handbook/zephyr-7b-sft-full
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: zephyr-7b-align-scan-7e-07-0.99-cosine-2.0
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-align-scan-7e-07-0.99-cosine-2.0
+
+ This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1824
+ - Rewards/chosen: 2.5463
+ - Rewards/rejected: 0.5342
+ - Rewards/accuracies: 0.3452
+ - Rewards/margins: 2.0120
+ - Logps/rejected: -80.5887
+ - Logps/chosen: -71.9193
+ - Logits/rejected: -2.6463
+ - Logits/chosen: -2.6632
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 7e-07
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 2
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.9118 | 0.3484 | 100 | 0.8952 | 1.8796 | 1.1355 | 0.3353 | 0.7441 | -79.9814 | -72.5926 | -2.5564 | -2.5727 |
+ | 0.9553 | 0.6969 | 200 | 1.0700 | 2.5989 | 1.4006 | 0.3413 | 1.1983 | -79.7136 | -71.8661 | -2.5726 | -2.5893 |
+ | 0.4066 | 1.0453 | 300 | 1.0729 | 2.3164 | 0.9125 | 0.3433 | 1.4038 | -80.2066 | -72.1515 | -2.5962 | -2.6126 |
+ | 0.3805 | 1.3937 | 400 | 1.1546 | 2.9774 | 1.1937 | 0.3373 | 1.7837 | -79.9225 | -71.4837 | -2.6247 | -2.6413 |
+ | 0.3975 | 1.7422 | 500 | 1.1824 | 2.5463 | 0.5342 | 0.3452 | 2.0120 | -80.5887 | -71.9193 | -2.6463 | -2.6632 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.4.0
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
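The card lists the schedule only as `cosine` with a 0.1 warmup ratio; with the 574 optimizer steps recorded in trainer_state.json, that works out to roughly ceil(0.1 × 574) = 58 warmup steps. A minimal sketch of a linear-warmup, half-cosine-decay schedule under those assumptions (the 58-step warmup is inferred from the logged learning rates, not stated above):

```python
import math

def cosine_lr(step: int, base_lr: float = 7e-7,
              total_steps: int = 574, warmup_steps: int = 58) -> float:
    """Linear warmup to base_lr, then half-cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Reproduces the logged rates, e.g. ~1.207e-07 at step 10 and ~7.00e-07 just past warmup.
```

This matches the `learning_rate` values in the log_history of trainer_state.json, which peak near 7e-07 around step 60 and decay toward zero by step 574.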
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 2.0,
+ "total_flos": 0.0,
+ "train_loss": 0.6661292189920406,
+ "train_runtime": 6507.4651,
+ "train_samples": 18340,
+ "train_samples_per_second": 5.637,
+ "train_steps_per_second": 0.088
+ }
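The throughput fields follow from the other entries: samples per second is epochs × train_samples over the runtime, and steps per second uses the 574 optimizer steps recorded in trainer_state.json. A quick consistency check:

```python
train_samples = 18340
epochs = 2.0
train_runtime = 6507.4651  # seconds
global_step = 574          # from trainer_state.json

samples_per_second = epochs * train_samples / train_runtime
steps_per_second = global_step / train_runtime

print(round(samples_per_second, 3))  # 5.637, as reported
print(round(steps_per_second, 3))    # 0.088, as reported
```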
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "transformers_version": "4.44.2"
+ }
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 2.0,
+ "total_flos": 0.0,
+ "train_loss": 0.6661292189920406,
+ "train_runtime": 6507.4651,
+ "train_samples": 18340,
+ "train_samples_per_second": 5.637,
+ "train_steps_per_second": 0.088
+ }
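The rewards/* metrics in trainer_state.json below follow TRL's DPO logging convention: each reward is a β-scaled log-probability ratio between the trained policy and the reference model, and rewards/margins is rewards/chosen minus rewards/rejected. (β itself is not stated in the card; the 0.99 in the run name may encode it.) The identity can be checked against the step-100 eval entry, up to float32 rounding in the logged values:

```python
# Step-100 eval entry from trainer_state.json below.
eval_rewards_chosen = 1.8796159029006958
eval_rewards_rejected = 1.1354750394821167
eval_rewards_margins = 0.7441409230232239

# margins = chosen - rejected, up to float32 rounding in the logged values.
diff = (eval_rewards_chosen - eval_rewards_rejected) - eval_rewards_margins
assert abs(diff) < 1e-6
```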
trainer_state.json ADDED
@@ -0,0 +1,992 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 2.0,
5
+ "eval_steps": 100,
6
+ "global_step": 574,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.003484320557491289,
13
+ "grad_norm": 572.2640100744908,
14
+ "learning_rate": 1.2068965517241378e-08,
15
+ "logits/chosen": -2.5345611572265625,
16
+ "logits/rejected": -2.581700563430786,
17
+ "logps/chosen": -60.002105712890625,
18
+ "logps/rejected": -99.98374938964844,
19
+ "loss": 0.6931,
20
+ "rewards/accuracies": 0.0,
21
+ "rewards/chosen": 0.0,
22
+ "rewards/margins": 0.0,
23
+ "rewards/rejected": 0.0,
24
+ "step": 1
25
+ },
26
+ {
27
+ "epoch": 0.03484320557491289,
28
+ "grad_norm": 548.4940657606528,
29
+ "learning_rate": 1.206896551724138e-07,
30
+ "logits/chosen": -2.5637850761413574,
31
+ "logits/rejected": -2.562532424926758,
32
+ "logps/chosen": -59.66712188720703,
33
+ "logps/rejected": -73.37606811523438,
34
+ "loss": 0.7017,
35
+ "rewards/accuracies": 0.1805555522441864,
36
+ "rewards/chosen": -0.00738478871062398,
37
+ "rewards/margins": -0.01218735333532095,
38
+ "rewards/rejected": 0.00480256462469697,
39
+ "step": 10
40
+ },
41
+ {
42
+ "epoch": 0.06968641114982578,
43
+ "grad_norm": 669.195296069125,
44
+ "learning_rate": 2.413793103448276e-07,
45
+ "logits/chosen": -2.606231927871704,
46
+ "logits/rejected": -2.565000534057617,
47
+ "logps/chosen": -104.05134582519531,
48
+ "logps/rejected": -94.89227294921875,
49
+ "loss": 0.6861,
50
+ "rewards/accuracies": 0.3687500059604645,
51
+ "rewards/chosen": 0.07496137917041779,
52
+ "rewards/margins": 0.07313241064548492,
53
+ "rewards/rejected": 0.001828978187404573,
54
+ "step": 20
55
+ },
56
+ {
57
+ "epoch": 0.10452961672473868,
58
+ "grad_norm": 701.7098593476545,
59
+ "learning_rate": 3.620689655172414e-07,
60
+ "logits/chosen": -2.5953564643859863,
61
+ "logits/rejected": -2.575517177581787,
62
+ "logps/chosen": -82.11582946777344,
63
+ "logps/rejected": -91.40339660644531,
64
+ "loss": 0.6768,
65
+ "rewards/accuracies": 0.36250001192092896,
66
+ "rewards/chosen": 0.37693265080451965,
67
+ "rewards/margins": 0.24151858687400818,
68
+ "rewards/rejected": 0.1354140341281891,
69
+ "step": 30
70
+ },
71
+ {
72
+ "epoch": 0.13937282229965156,
73
+ "grad_norm": 569.2211096554255,
74
+ "learning_rate": 4.827586206896552e-07,
75
+ "logits/chosen": -2.49927020072937,
76
+ "logits/rejected": -2.497345209121704,
77
+ "logps/chosen": -77.96027374267578,
78
+ "logps/rejected": -73.55770111083984,
79
+ "loss": 0.6699,
80
+ "rewards/accuracies": 0.3187499940395355,
81
+ "rewards/chosen": -0.04143521189689636,
82
+ "rewards/margins": 0.5093806982040405,
83
+ "rewards/rejected": -0.5508158802986145,
84
+ "step": 40
85
+ },
86
+ {
87
+ "epoch": 0.17421602787456447,
88
+ "grad_norm": 446.25211605228685,
89
+ "learning_rate": 6.034482758620689e-07,
90
+ "logits/chosen": -2.5248429775238037,
91
+ "logits/rejected": -2.529026508331299,
92
+ "logps/chosen": -63.092262268066406,
93
+ "logps/rejected": -75.61325073242188,
94
+ "loss": 0.7196,
95
+ "rewards/accuracies": 0.30000001192092896,
96
+ "rewards/chosen": 0.838420033454895,
97
+ "rewards/margins": 0.35312479734420776,
98
+ "rewards/rejected": 0.48529529571533203,
99
+ "step": 50
100
+ },
101
+ {
102
+ "epoch": 0.20905923344947736,
103
+ "grad_norm": 431.8823983807605,
104
+ "learning_rate": 6.999740526496426e-07,
105
+ "logits/chosen": -2.494865894317627,
106
+ "logits/rejected": -2.4892258644104004,
107
+ "logps/chosen": -70.98899841308594,
108
+ "logps/rejected": -66.5857925415039,
109
+ "loss": 0.7101,
110
+ "rewards/accuracies": 0.32499998807907104,
111
+ "rewards/chosen": 2.1953351497650146,
112
+ "rewards/margins": 0.3907918632030487,
113
+ "rewards/rejected": 1.8045432567596436,
114
+ "step": 60
115
+ },
116
+ {
117
+ "epoch": 0.24390243902439024,
118
+ "grad_norm": 543.5545103197261,
119
+ "learning_rate": 6.990662992822431e-07,
120
+ "logits/chosen": -2.5150294303894043,
121
+ "logits/rejected": -2.510296583175659,
122
+ "logps/chosen": -61.68668746948242,
123
+ "logps/rejected": -66.4362564086914,
124
+ "loss": 0.7767,
125
+ "rewards/accuracies": 0.3062500059604645,
126
+ "rewards/chosen": 2.524014949798584,
127
+ "rewards/margins": 0.5535615086555481,
128
+ "rewards/rejected": 1.9704535007476807,
129
+ "step": 70
130
+ },
131
+ {
132
+ "epoch": 0.2787456445993031,
133
+ "grad_norm": 602.8845854238214,
134
+ "learning_rate": 6.96865023062192e-07,
135
+ "logits/chosen": -2.461080551147461,
136
+ "logits/rejected": -2.452082395553589,
137
+ "logps/chosen": -73.27593231201172,
138
+ "logps/rejected": -75.97522735595703,
139
+ "loss": 0.8282,
140
+ "rewards/accuracies": 0.3125,
141
+ "rewards/chosen": 2.365041732788086,
142
+ "rewards/margins": 0.8407068252563477,
143
+ "rewards/rejected": 1.5243349075317383,
144
+ "step": 80
145
+ },
146
+ {
147
+ "epoch": 0.313588850174216,
148
+ "grad_norm": 645.4166740504302,
149
+ "learning_rate": 6.93378381182268e-07,
150
+ "logits/chosen": -2.5054688453674316,
151
+ "logits/rejected": -2.5200397968292236,
152
+ "logps/chosen": -63.54323196411133,
153
+ "logps/rejected": -68.08236694335938,
154
+ "loss": 0.9178,
155
+ "rewards/accuracies": 0.29374998807907104,
156
+ "rewards/chosen": 2.4146523475646973,
157
+ "rewards/margins": 0.40329408645629883,
158
+ "rewards/rejected": 2.0113582611083984,
159
+ "step": 90
160
+ },
161
+ {
162
+ "epoch": 0.34843205574912894,
163
+ "grad_norm": 625.9456943514227,
164
+ "learning_rate": 6.886192939700987e-07,
165
+ "logits/chosen": -2.495720863342285,
166
+ "logits/rejected": -2.494643449783325,
167
+ "logps/chosen": -72.901611328125,
168
+ "logps/rejected": -79.84459686279297,
169
+ "loss": 0.9118,
170
+ "rewards/accuracies": 0.32499998807907104,
171
+ "rewards/chosen": 2.345865488052368,
172
+ "rewards/margins": 1.1508872509002686,
173
+ "rewards/rejected": 1.1949782371520996,
174
+ "step": 100
175
+ },
176
+ {
177
+ "epoch": 0.34843205574912894,
178
+ "eval_logits/chosen": -2.572685718536377,
179
+ "eval_logits/rejected": -2.556438684463501,
180
+ "eval_logps/chosen": -72.59263610839844,
181
+ "eval_logps/rejected": -79.98140716552734,
182
+ "eval_loss": 0.8951926827430725,
183
+ "eval_rewards/accuracies": 0.335317462682724,
184
+ "eval_rewards/chosen": 1.8796159029006958,
185
+ "eval_rewards/margins": 0.7441409230232239,
186
+ "eval_rewards/rejected": 1.1354750394821167,
187
+ "eval_runtime": 114.849,
188
+ "eval_samples_per_second": 17.414,
189
+ "eval_steps_per_second": 0.549,
190
+ "step": 100
191
+ },
192
+ {
193
+ "epoch": 0.3832752613240418,
194
+ "grad_norm": 633.8111608364293,
195
+ "learning_rate": 6.826053970097538e-07,
196
+ "logits/chosen": -2.5071868896484375,
197
+ "logits/rejected": -2.4732370376586914,
198
+ "logps/chosen": -72.26206970214844,
199
+ "logps/rejected": -62.43015670776367,
200
+ "loss": 0.934,
201
+ "rewards/accuracies": 0.2562499940395355,
202
+ "rewards/chosen": 0.8988760709762573,
203
+ "rewards/margins": 0.3242531418800354,
204
+ "rewards/rejected": 0.5746229887008667,
205
+ "step": 110
206
+ },
207
+ {
208
+ "epoch": 0.4181184668989547,
209
+ "grad_norm": 481.08142593629555,
210
+ "learning_rate": 6.753589757901721e-07,
211
+ "logits/chosen": -2.5381650924682617,
212
+ "logits/rejected": -2.50740385055542,
213
+ "logps/chosen": -76.40122985839844,
214
+ "logps/rejected": -66.65751647949219,
215
+ "loss": 0.857,
216
+ "rewards/accuracies": 0.29374998807907104,
217
+ "rewards/chosen": 1.0497384071350098,
218
+ "rewards/margins": 1.0358079671859741,
219
+ "rewards/rejected": 0.013930544257164001,
220
+ "step": 120
221
+ },
222
+ {
223
+ "epoch": 0.4529616724738676,
224
+ "grad_norm": 807.6631711479599,
225
+ "learning_rate": 6.669068831226014e-07,
226
+ "logits/chosen": -2.5767252445220947,
227
+ "logits/rejected": -2.557955265045166,
228
+ "logps/chosen": -83.18122863769531,
229
+ "logps/rejected": -88.09697723388672,
230
+ "loss": 1.0372,
231
+ "rewards/accuracies": 0.33125001192092896,
232
+ "rewards/chosen": 0.6925557255744934,
233
+ "rewards/margins": 1.275132417678833,
234
+ "rewards/rejected": -0.5825767517089844,
235
+ "step": 130
236
+ },
237
+ {
238
+ "epoch": 0.4878048780487805,
239
+ "grad_norm": 443.55201800181277,
240
+ "learning_rate": 6.572804396330676e-07,
241
+ "logits/chosen": -2.4845736026763916,
242
+ "logits/rejected": -2.4748897552490234,
243
+ "logps/chosen": -79.59736633300781,
244
+ "logps/rejected": -70.54890441894531,
245
+ "loss": 0.9077,
246
+ "rewards/accuracies": 0.33125001192092896,
247
+ "rewards/chosen": 1.5325819253921509,
248
+ "rewards/margins": 1.3193892240524292,
249
+ "rewards/rejected": 0.21319285035133362,
250
+ "step": 140
251
+ },
252
+ {
253
+ "epoch": 0.5226480836236934,
254
+ "grad_norm": 574.2676129428694,
255
+ "learning_rate": 6.465153176986211e-07,
256
+ "logits/chosen": -2.579017162322998,
257
+ "logits/rejected": -2.5382275581359863,
258
+ "logps/chosen": -77.61248016357422,
259
+ "logps/rejected": -78.82749938964844,
260
+ "loss": 1.0493,
261
+ "rewards/accuracies": 0.2750000059604645,
262
+ "rewards/chosen": 1.853137731552124,
263
+ "rewards/margins": 1.137075662612915,
264
+ "rewards/rejected": 0.7160621881484985,
265
+ "step": 150
266
+ },
267
+ {
268
+ "epoch": 0.5574912891986062,
269
+ "grad_norm": 575.7733117089178,
270
+ "learning_rate": 6.346514092574479e-07,
271
+ "logits/chosen": -2.573914051055908,
272
+ "logits/rejected": -2.5928311347961426,
273
+ "logps/chosen": -62.48719024658203,
274
+ "logps/rejected": -70.96321105957031,
275
+ "loss": 0.9487,
276
+ "rewards/accuracies": 0.3062500059604645,
277
+ "rewards/chosen": 2.017285108566284,
278
+ "rewards/margins": 1.0176817178726196,
279
+ "rewards/rejected": 0.9996035695075989,
280
+ "step": 160
281
+ },
282
+ {
283
+ "epoch": 0.5923344947735192,
284
+ "grad_norm": 636.1057741626599,
285
+ "learning_rate": 6.21732677982701e-07,
286
+ "logits/chosen": -2.5915422439575195,
287
+ "logits/rejected": -2.5760223865509033,
288
+ "logps/chosen": -67.14725494384766,
289
+ "logps/rejected": -75.51747131347656,
290
+ "loss": 0.9285,
291
+ "rewards/accuracies": 0.26875001192092896,
292
+ "rewards/chosen": 1.8019440174102783,
293
+ "rewards/margins": 1.0133386850357056,
294
+ "rewards/rejected": 0.7886053919792175,
295
+ "step": 170
296
+ },
297
+ {
298
+ "epoch": 0.627177700348432,
299
+ "grad_norm": 634.0125212661374,
300
+ "learning_rate": 6.078069963678453e-07,
301
+ "logits/chosen": -2.6139795780181885,
302
+ "logits/rejected": -2.60251522064209,
303
+ "logps/chosen": -89.45849609375,
304
+ "logps/rejected": -84.91694641113281,
305
+ "loss": 1.1004,
306
+ "rewards/accuracies": 0.3375000059604645,
307
+ "rewards/chosen": 2.3746800422668457,
308
+ "rewards/margins": 0.797105073928833,
309
+ "rewards/rejected": 1.577574610710144,
310
+ "step": 180
311
+ },
312
+ {
313
+ "epoch": 0.662020905923345,
314
+ "grad_norm": 494.843810287881,
315
+ "learning_rate": 5.929259683272219e-07,
316
+ "logits/chosen": -2.602254629135132,
317
+ "logits/rejected": -2.594606399536133,
318
+ "logps/chosen": -68.91534423828125,
319
+ "logps/rejected": -80.00617218017578,
320
+ "loss": 0.9944,
321
+ "rewards/accuracies": 0.28125,
322
+ "rewards/chosen": 2.114778518676758,
323
+ "rewards/margins": 0.607982873916626,
324
+ "rewards/rejected": 1.5067954063415527,
325
+ "step": 190
326
+ },
327
+ {
328
+ "epoch": 0.6968641114982579,
329
+ "grad_norm": 826.9190409978122,
330
+ "learning_rate": 5.771447379692167e-07,
331
+ "logits/chosen": -2.625168800354004,
332
+ "logits/rejected": -2.6301324367523193,
333
+ "logps/chosen": -87.10128784179688,
334
+ "logps/rejected": -90.02182006835938,
335
+ "loss": 0.9553,
336
+ "rewards/accuracies": 0.3687500059604645,
337
+ "rewards/chosen": 3.519533157348633,
338
+ "rewards/margins": 1.5109440088272095,
339
+ "rewards/rejected": 2.008589029312134,
340
+ "step": 200
341
+ },
342
+ {
343
+ "epoch": 0.6968641114982579,
344
+ "eval_logits/chosen": -2.58929705619812,
345
+ "eval_logits/rejected": -2.5725698471069336,
346
+ "eval_logps/chosen": -71.86607360839844,
347
+ "eval_logps/rejected": -79.71355438232422,
348
+ "eval_loss": 1.070008635520935,
349
+ "eval_rewards/accuracies": 0.341269850730896,
350
+ "eval_rewards/chosen": 2.5989110469818115,
351
+ "eval_rewards/margins": 1.1982702016830444,
352
+ "eval_rewards/rejected": 1.4006409645080566,
353
+ "eval_runtime": 113.335,
354
+ "eval_samples_per_second": 17.647,
355
+ "eval_steps_per_second": 0.556,
356
+ "step": 200
357
+ },
358
+ {
359
+ "epoch": 0.7317073170731707,
360
+ "grad_norm": 847.0633281542522,
361
+ "learning_rate": 5.605217852506545e-07,
362
+ "logits/chosen": -2.6012160778045654,
363
+ "logits/rejected": -2.57619047164917,
364
+ "logps/chosen": -67.00978088378906,
365
+ "logps/rejected": -62.69084548950195,
366
+ "loss": 0.9355,
367
+ "rewards/accuracies": 0.3687500059604645,
368
+ "rewards/chosen": 2.286196231842041,
369
+ "rewards/margins": 1.579746961593628,
370
+ "rewards/rejected": 0.7064491510391235,
371
+ "step": 210
372
+ },
373
+ {
374
+ "epoch": 0.7665505226480837,
375
+ "grad_norm": 558.5058882421968,
376
+ "learning_rate": 5.43118709269656e-07,
377
+ "logits/chosen": -2.652629852294922,
378
+ "logits/rejected": -2.6338040828704834,
379
+ "logps/chosen": -71.07757568359375,
380
+ "logps/rejected": -69.96295928955078,
381
+ "loss": 1.0861,
382
+ "rewards/accuracies": 0.2562499940395355,
383
+ "rewards/chosen": 2.3567943572998047,
384
+ "rewards/margins": 0.6735709309577942,
385
+ "rewards/rejected": 1.6832237243652344,
386
+ "step": 220
387
+ },
388
+ {
389
+ "epoch": 0.8013937282229965,
390
+ "grad_norm": 874.1019734865492,
391
+ "learning_rate": 5.25e-07,
392
+ "logits/chosen": -2.675318956375122,
393
+ "logits/rejected": -2.6557412147521973,
394
+ "logps/chosen": -86.0287857055664,
395
+ "logps/rejected": -86.36713409423828,
396
+ "loss": 1.1663,
397
+ "rewards/accuracies": 0.3375000059604645,
398
+ "rewards/chosen": 3.9321389198303223,
399
+ "rewards/margins": 2.168241024017334,
400
+ "rewards/rejected": 1.7638976573944092,
401
+ "step": 230
402
+ },
403
+ {
404
+ "epoch": 0.8362369337979094,
405
+ "grad_norm": 612.8944076163772,
406
+ "learning_rate": 5.062327993128697e-07,
407
+ "logits/chosen": -2.6777350902557373,
408
+ "logits/rejected": -2.6439225673675537,
409
+ "logps/chosen": -82.02351379394531,
410
+ "logps/rejected": -76.44059753417969,
411
+ "loss": 1.1645,
412
+ "rewards/accuracies": 0.33125001192092896,
413
+ "rewards/chosen": 4.670910835266113,
414
+ "rewards/margins": 1.0123672485351562,
415
+ "rewards/rejected": 3.658543825149536,
416
+ "step": 240
417
+ },
418
+ {
419
+ "epoch": 0.8710801393728222,
420
+ "grad_norm": 661.3902849256391,
421
+ "learning_rate": 4.868866521715546e-07,
422
+ "logits/chosen": -2.6785645484924316,
423
+ "logits/rejected": -2.642632007598877,
424
+ "logps/chosen": -91.0850601196289,
425
+ "logps/rejected": -86.79383850097656,
426
+ "loss": 0.8849,
427
+ "rewards/accuracies": 0.3687500059604645,
428
+ "rewards/chosen": 4.320570468902588,
429
+ "rewards/margins": 1.1997525691986084,
430
+ "rewards/rejected": 3.1208178997039795,
431
+ "step": 250
432
+ },
433
+ {
434
+ "epoch": 0.9059233449477352,
435
+ "grad_norm": 559.1976202405948,
436
+ "learning_rate": 4.6703324892109645e-07,
437
+ "logits/chosen": -2.567991256713867,
438
+ "logits/rejected": -2.581616163253784,
439
+ "logps/chosen": -56.56006622314453,
440
+ "logps/rejected": -64.0355224609375,
441
+ "loss": 1.1214,
442
+ "rewards/accuracies": 0.2874999940395355,
443
+ "rewards/chosen": 2.46440052986145,
444
+ "rewards/margins": 0.8875689506530762,
445
+ "rewards/rejected": 1.5768316984176636,
446
+ "step": 260
447
+ },
448
+ {
449
+ "epoch": 0.9407665505226481,
450
+ "grad_norm": 923.7220287977552,
451
+ "learning_rate": 4.4674615962787004e-07,
452
+ "logits/chosen": -2.647984504699707,
453
+ "logits/rejected": -2.646916151046753,
454
+ "logps/chosen": -66.90876770019531,
455
+ "logps/rejected": -81.64918518066406,
456
+ "loss": 1.0676,
457
+ "rewards/accuracies": 0.3125,
458
+ "rewards/chosen": 1.9516960382461548,
459
+ "rewards/margins": 1.0779374837875366,
460
+ "rewards/rejected": 0.8737583160400391,
461
+ "step": 270
462
+ },
463
+ {
464
+ "epoch": 0.975609756097561,
465
+ "grad_norm": 647.9814126875335,
466
+ "learning_rate": 4.2610056145354496e-07,
467
+ "logits/chosen": -2.538573741912842,
468
+ "logits/rejected": -2.5176804065704346,
469
+ "logps/chosen": -65.74501037597656,
470
+ "logps/rejected": -70.30267333984375,
471
+ "loss": 0.862,
472
+ "rewards/accuracies": 0.3187499940395355,
473
+ "rewards/chosen": 2.068479299545288,
474
+ "rewards/margins": 1.5420581102371216,
475
+ "rewards/rejected": 0.5264211893081665,
476
+ "step": 280
477
+ },
478
+ {
479
+ "epoch": 1.0104529616724738,
480
+ "grad_norm": 56.46750622929306,
481
+ "learning_rate": 4.051729600736907e-07,
482
+ "logits/chosen": -2.5648231506347656,
483
+ "logits/rejected": -2.536726474761963,
484
+ "logps/chosen": -68.04057312011719,
485
+ "logps/rejected": -65.41937255859375,
486
+ "loss": 0.7918,
487
+ "rewards/accuracies": 0.38749998807907104,
488
+ "rewards/chosen": 3.9868416786193848,
489
+ "rewards/margins": 4.487275123596191,
490
+ "rewards/rejected": -0.5004340410232544,
491
+ "step": 290
492
+ },
493
+ {
494
+ "epoch": 1.0452961672473868,
495
+ "grad_norm": 20.388910811138857,
496
+ "learning_rate": 3.8404090617335413e-07,
497
+ "logits/chosen": -2.5940048694610596,
498
+ "logits/rejected": -2.579204559326172,
499
+ "logps/chosen": -59.690216064453125,
500
+ "logps/rejected": -76.28305053710938,
501
+ "loss": 0.4066,
502
+ "rewards/accuracies": 0.41874998807907104,
503
+ "rewards/chosen": 6.790108680725098,
504
+ "rewards/margins": 13.831901550292969,
505
+ "rewards/rejected": -7.0417938232421875,
506
+ "step": 300
507
+ },
508
+ {
509
+ "epoch": 1.0452961672473868,
510
+ "eval_logits/chosen": -2.612644672393799,
511
+ "eval_logits/rejected": -2.5961754322052,
512
+ "eval_logps/chosen": -72.1514663696289,
513
+ "eval_logps/rejected": -80.20658111572266,
514
+ "eval_loss": 1.0729384422302246,
515
+ "eval_rewards/accuracies": 0.3432539701461792,
516
+ "eval_rewards/chosen": 2.3163692951202393,
517
+ "eval_rewards/margins": 1.4038203954696655,
518
+ "eval_rewards/rejected": 0.9125491380691528,
519
+ "eval_runtime": 113.475,
520
+ "eval_samples_per_second": 17.625,
521
+ "eval_steps_per_second": 0.555,
522
+ "step": 300
523
+ },
524
+ {
525
+ "epoch": 1.0801393728222997,
526
+ "grad_norm": 3.5419086603813197,
527
+ "learning_rate": 3.6278270807018065e-07,
528
+ "logits/chosen": -2.558955430984497,
529
+ "logits/rejected": -2.5596249103546143,
530
+ "logps/chosen": -59.938392639160156,
531
+ "logps/rejected": -84.96260833740234,
532
+ "loss": 0.416,
533
+ "rewards/accuracies": 0.4375,
534
+ "rewards/chosen": 7.621278285980225,
535
+ "rewards/margins": 17.239816665649414,
536
+ "rewards/rejected": -9.618535995483398,
537
+ "step": 310
538
+ },
539
+ {
540
+ "epoch": 1.1149825783972125,
541
+ "grad_norm": 201.12315966640935,
542
+ "learning_rate": 3.414771415300036e-07,
543
+ "logits/chosen": -2.5916085243225098,
544
+ "logits/rejected": -2.5778439044952393,
545
+ "logps/chosen": -66.45283508300781,
546
+ "logps/rejected": -87.47081756591797,
547
+ "loss": 0.3907,
548
+ "rewards/accuracies": 0.4625000059604645,
549
+ "rewards/chosen": 7.431988716125488,
550
+ "rewards/margins": 16.59490203857422,
551
+ "rewards/rejected": -9.16291332244873,
552
+ "step": 320
553
+ },
554
+ {
555
+ "epoch": 1.1498257839721253,
556
+ "grad_norm": 403.8100928464834,
557
+ "learning_rate": 3.2020315785022746e-07,
558
+ "logits/chosen": -2.5842907428741455,
559
+ "logits/rejected": -2.5574951171875,
560
+ "logps/chosen": -75.70329284667969,
561
+ "logps/rejected": -86.71253967285156,
562
+ "loss": 0.3965,
563
+ "rewards/accuracies": 0.518750011920929,
564
+ "rewards/chosen": 8.334002494812012,
565
+ "rewards/margins": 16.338943481445312,
566
+ "rewards/rejected": -8.004940032958984,
567
+ "step": 330
568
+ },
569
+ {
570
+ "epoch": 1.1846689895470384,
571
+ "grad_norm": 184.77353798917835,
572
+ "learning_rate": 2.9903959129274836e-07,
573
+ "logits/chosen": -2.561433792114258,
574
+ "logits/rejected": -2.5655293464660645,
575
+ "logps/chosen": -74.55548095703125,
576
+ "logps/rejected": -104.60441589355469,
577
+ "loss": 0.4148,
578
+ "rewards/accuracies": 0.53125,
579
+ "rewards/chosen": 9.100366592407227,
580
+ "rewards/margins": 18.328720092773438,
581
+ "rewards/rejected": -9.228353500366211,
582
+ "step": 340
583
+ },
584
+ {
585
+ "epoch": 1.2195121951219512,
586
+ "grad_norm": 188.43635904114095,
587
+ "learning_rate": 2.7806486695056977e-07,
588
+ "logits/chosen": -2.5871026515960693,
589
+ "logits/rejected": -2.55499529838562,
590
+ "logps/chosen": -58.55460739135742,
591
+ "logps/rejected": -74.59812927246094,
592
+ "loss": 0.4111,
593
+ "rewards/accuracies": 0.48750001192092896,
594
+ "rewards/chosen": 9.941993713378906,
595
+ "rewards/margins": 18.771812438964844,
596
+ "rewards/rejected": -8.829817771911621,
597
+ "step": 350
598
+ },
599
+ {
600
+ "epoch": 1.254355400696864,
601
+ "grad_norm": 154.26226580174057,
602
+ "learning_rate": 2.573567101306622e-07,
603
+ "logits/chosen": -2.5829975605010986,
604
+ "logits/rejected": -2.5526790618896484,
605
+ "logps/chosen": -62.894317626953125,
606
+ "logps/rejected": -71.01216888427734,
607
+ "loss": 0.4026,
608
+ "rewards/accuracies": 0.4437499940395355,
609
+ "rewards/chosen": 8.866216659545898,
610
+ "rewards/margins": 14.160728454589844,
611
+ "rewards/rejected": -5.294511795043945,
612
+ "step": 360
613
+ },
614
+ {
615
+ "epoch": 1.289198606271777,
616
+ "grad_norm": 16.651956659184666,
617
+ "learning_rate": 2.369918583299939e-07,
618
+ "logits/chosen": -2.5568833351135254,
619
+ "logits/rejected": -2.5733718872070312,
620
+ "logps/chosen": -62.53009033203125,
621
+ "logps/rejected": -82.81775665283203,
622
+ "loss": 0.4444,
623
+ "rewards/accuracies": 0.46875,
624
+ "rewards/chosen": 8.450610160827637,
625
+ "rewards/margins": 16.84958839416504,
626
+ "rewards/rejected": -8.398977279663086,
627
+ "step": 370
628
+ },
629
+ {
630
+ "epoch": 1.32404181184669,
631
+ "grad_norm": 210.98546985458898,
632
+ "learning_rate": 2.1704577687205507e-07,
633
+ "logits/chosen": -2.5712497234344482,
634
+ "logits/rejected": -2.567377805709839,
635
+ "logps/chosen": -76.60011291503906,
636
+ "logps/rejected": -98.51103210449219,
637
+ "loss": 0.4409,
638
+ "rewards/accuracies": 0.53125,
639
+ "rewards/chosen": 10.171670913696289,
640
+ "rewards/margins": 22.095985412597656,
641
+ "rewards/rejected": -11.924314498901367,
642
+ "step": 380
643
+ },
644
+ {
645
+ "epoch": 1.3588850174216027,
646
+ "grad_norm": 125.98956132873472,
647
+ "learning_rate": 1.975923792576331e-07,
648
+ "logits/chosen": -2.6548500061035156,
649
+ "logits/rejected": -2.6388492584228516,
650
+ "logps/chosen": -57.238861083984375,
651
+ "logps/rejected": -80.28170776367188,
652
+ "loss": 0.3853,
653
+ "rewards/accuracies": 0.4375,
654
+ "rewards/chosen": 9.658943176269531,
655
+ "rewards/margins": 18.589771270751953,
656
+ "rewards/rejected": -8.930827140808105,
657
+ "step": 390
658
+ },
659
+ {
660
+ "epoch": 1.3937282229965158,
661
+ "grad_norm": 30.8789509954886,
662
+ "learning_rate": 1.7870375326612014e-07,
663
+ "logits/chosen": -2.638012647628784,
664
+ "logits/rejected": -2.609297275543213,
665
+ "logps/chosen": -76.772705078125,
666
+ "logps/rejected": -106.2076644897461,
667
+ "loss": 0.3805,
668
+ "rewards/accuracies": 0.518750011920929,
669
+ "rewards/chosen": 9.280598640441895,
670
+ "rewards/margins": 17.006460189819336,
671
+ "rewards/rejected": -7.7258620262146,
672
+ "step": 400
673
+ },
674
+ {
675
+ "epoch": 1.3937282229965158,
676
+ "eval_logits/chosen": -2.6412570476531982,
677
+ "eval_logits/rejected": -2.6246657371520996,
678
+ "eval_logps/chosen": -71.48371124267578,
679
+ "eval_logps/rejected": -79.92253875732422,
680
+ "eval_loss": 1.1545917987823486,
+ "eval_rewards/accuracies": 0.3373015820980072,
+ "eval_rewards/chosen": 2.977447986602783,
+ "eval_rewards/margins": 1.783706545829773,
+ "eval_rewards/rejected": 1.1937415599822998,
+ "eval_runtime": 113.3185,
+ "eval_samples_per_second": 17.649,
+ "eval_steps_per_second": 0.556,
+ "step": 400
+ },
+ {
+ "epoch": 1.4285714285714286,
+ "grad_norm": 134.60970325056522,
+ "learning_rate": 1.604498938223354e-07,
+ "logits/chosen": -2.6223552227020264,
+ "logits/rejected": -2.611177444458008,
+ "logps/chosen": -71.12557220458984,
+ "logps/rejected": -88.45091247558594,
+ "loss": 0.6988,
+ "rewards/accuracies": 0.48124998807907104,
+ "rewards/chosen": 11.372858047485352,
+ "rewards/margins": 19.366708755493164,
+ "rewards/rejected": -7.993849754333496,
+ "step": 410
+ },
+ {
+ "epoch": 1.4634146341463414,
+ "grad_norm": 144.0362669522128,
+ "learning_rate": 1.4289844361876528e-07,
+ "logits/chosen": -2.665538787841797,
+ "logits/rejected": -2.665926933288574,
+ "logps/chosen": -67.73007202148438,
+ "logps/rejected": -91.9347915649414,
+ "loss": 0.394,
+ "rewards/accuracies": 0.42500001192092896,
+ "rewards/chosen": 8.27946662902832,
+ "rewards/margins": 15.272089958190918,
+ "rewards/rejected": -6.9926252365112305,
+ "step": 420
+ },
+ {
+ "epoch": 1.4982578397212545,
+ "grad_norm": 8.116844029360541,
+ "learning_rate": 1.2611444245438813e-07,
+ "logits/chosen": -2.629135847091675,
+ "logits/rejected": -2.6135053634643555,
+ "logps/chosen": -59.4090576171875,
+ "logps/rejected": -79.82469177246094,
+ "loss": 0.3924,
+ "rewards/accuracies": 0.46875,
+ "rewards/chosen": 9.160750389099121,
+ "rewards/margins": 16.42301368713379,
+ "rewards/rejected": -7.262263298034668,
+ "step": 430
+ },
+ {
+ "epoch": 1.533101045296167,
+ "grad_norm": 20.390949955841545,
+ "learning_rate": 1.1016008621895228e-07,
+ "logits/chosen": -2.6179585456848145,
+ "logits/rejected": -2.6255526542663574,
+ "logps/chosen": -59.94586944580078,
+ "logps/rejected": -79.95707702636719,
+ "loss": 0.385,
+ "rewards/accuracies": 0.4312500059604645,
+ "rewards/chosen": 9.932461738586426,
+ "rewards/margins": 16.32512664794922,
+ "rewards/rejected": -6.392666339874268,
+ "step": 440
+ },
+ {
+ "epoch": 1.5679442508710801,
+ "grad_norm": 91.65774378764581,
+ "learning_rate": 9.509449641582943e-08,
+ "logits/chosen": -2.669267177581787,
+ "logits/rejected": -2.6314785480499268,
+ "logps/chosen": -78.78290557861328,
+ "logps/rejected": -96.71044158935547,
+ "loss": 0.4341,
+ "rewards/accuracies": 0.518750011920929,
+ "rewards/chosen": 11.938484191894531,
+ "rewards/margins": 22.57853889465332,
+ "rewards/rejected": -10.640054702758789,
+ "step": 450
+ },
+ {
+ "epoch": 1.6027874564459932,
+ "grad_norm": 203.85164100757706,
+ "learning_rate": 8.097350107751374e-08,
+ "logits/chosen": -2.643629550933838,
+ "logits/rejected": -2.621685743331909,
+ "logps/chosen": -65.54093933105469,
+ "logps/rejected": -86.33696746826172,
+ "loss": 0.4012,
+ "rewards/accuracies": 0.5,
+ "rewards/chosen": 10.532175064086914,
+ "rewards/margins": 19.771120071411133,
+ "rewards/rejected": -9.238944053649902,
+ "step": 460
+ },
+ {
+ "epoch": 1.6376306620209058,
+ "grad_norm": 0.18713678602063968,
+ "learning_rate": 6.784942788562304e-08,
+ "logits/chosen": -2.6330292224884033,
+ "logits/rejected": -2.6224162578582764,
+ "logps/chosen": -53.2932243347168,
+ "logps/rejected": -78.58303833007812,
+ "loss": 0.4188,
+ "rewards/accuracies": 0.4000000059604645,
+ "rewards/chosen": 7.929083347320557,
+ "rewards/margins": 14.772134780883789,
+ "rewards/rejected": -6.843050479888916,
+ "step": 470
+ },
+ {
+ "epoch": 1.6724738675958188,
+ "grad_norm": 0.6147060015446919,
+ "learning_rate": 5.5770910262027175e-08,
+ "logits/chosen": -2.6437172889709473,
+ "logits/rejected": -2.627516269683838,
+ "logps/chosen": -48.605804443359375,
+ "logps/rejected": -54.954551696777344,
+ "loss": 0.3885,
+ "rewards/accuracies": 0.36250001192092896,
+ "rewards/chosen": 5.401150703430176,
+ "rewards/margins": 8.344820022583008,
+ "rewards/rejected": -2.943669319152832,
+ "step": 480
+ },
+ {
+ "epoch": 1.7073170731707317,
+ "grad_norm": 77.39241567442491,
+ "learning_rate": 4.47827071496673e-08,
+ "logits/chosen": -2.630645275115967,
+ "logits/rejected": -2.618098735809326,
+ "logps/chosen": -65.59320831298828,
+ "logps/rejected": -76.5845718383789,
+ "loss": 0.4658,
+ "rewards/accuracies": 0.40625,
+ "rewards/chosen": 7.552302360534668,
+ "rewards/margins": 14.009483337402344,
+ "rewards/rejected": -6.457179069519043,
+ "step": 490
+ },
+ {
+ "epoch": 1.7421602787456445,
+ "grad_norm": 9.754129842794342,
+ "learning_rate": 3.492553715089692e-08,
+ "logits/chosen": -2.5552730560302734,
+ "logits/rejected": -2.546638011932373,
+ "logps/chosen": -67.24018096923828,
+ "logps/rejected": -92.03102111816406,
+ "loss": 0.3975,
+ "rewards/accuracies": 0.46875,
+ "rewards/chosen": 8.957259178161621,
+ "rewards/margins": 17.181909561157227,
+ "rewards/rejected": -8.224650382995605,
+ "step": 500
+ },
+ {
+ "epoch": 1.7421602787456445,
+ "eval_logits/chosen": -2.663222551345825,
+ "eval_logits/rejected": -2.6463263034820557,
+ "eval_logps/chosen": -71.91925048828125,
+ "eval_logps/rejected": -80.58873748779297,
+ "eval_loss": 1.182449460029602,
+ "eval_rewards/accuracies": 0.3452380895614624,
+ "eval_rewards/chosen": 2.54626727104187,
+ "eval_rewards/margins": 2.0120491981506348,
+ "eval_rewards/rejected": 0.5342182517051697,
+ "eval_runtime": 113.333,
+ "eval_samples_per_second": 17.647,
+ "eval_steps_per_second": 0.556,
+ "step": 500
+ },
+ {
+ "epoch": 1.7770034843205575,
+ "grad_norm": 9.598053942856874,
+ "learning_rate": 2.6235927637971816e-08,
+ "logits/chosen": -2.625732898712158,
+ "logits/rejected": -2.6054017543792725,
+ "logps/chosen": -61.70050048828125,
+ "logps/rejected": -77.1646728515625,
+ "loss": 0.3933,
+ "rewards/accuracies": 0.4625000059604645,
+ "rewards/chosen": 8.965360641479492,
+ "rewards/margins": 16.470144271850586,
+ "rewards/rejected": -7.504785060882568,
+ "step": 510
+ },
+ {
+ "epoch": 1.8118466898954704,
+ "grad_norm": 19.555147588045187,
+ "learning_rate": 1.8746079394836706e-08,
+ "logits/chosen": -2.636565685272217,
+ "logits/rejected": -2.6276707649230957,
+ "logps/chosen": -65.6178207397461,
+ "logps/rejected": -84.69874572753906,
+ "loss": 0.4072,
+ "rewards/accuracies": 0.4625000059604645,
+ "rewards/chosen": 8.566861152648926,
+ "rewards/margins": 15.977938652038574,
+ "rewards/rejected": -7.411079406738281,
+ "step": 520
+ },
+ {
+ "epoch": 1.8466898954703832,
+ "grad_norm": 45.12187365083394,
+ "learning_rate": 1.2483747291799507e-08,
+ "logits/chosen": -2.610792636871338,
+ "logits/rejected": -2.603881597518921,
+ "logps/chosen": -62.07392501831055,
+ "logps/rejected": -80.65681457519531,
+ "loss": 0.3989,
+ "rewards/accuracies": 0.4625000059604645,
+ "rewards/chosen": 9.169586181640625,
+ "rewards/margins": 18.3698787689209,
+ "rewards/rejected": -9.200292587280273,
+ "step": 530
+ },
+ {
+ "epoch": 1.8815331010452963,
+ "grad_norm": 828.0040833812193,
+ "learning_rate": 7.472137435272619e-09,
+ "logits/chosen": -2.624800205230713,
+ "logits/rejected": -2.638892889022827,
+ "logps/chosen": -54.112327575683594,
+ "logps/rejected": -85.64537048339844,
+ "loss": 0.4706,
+ "rewards/accuracies": 0.44999998807907104,
+ "rewards/chosen": 9.522268295288086,
+ "rewards/margins": 21.70328712463379,
+ "rewards/rejected": -12.181015968322754,
+ "step": 540
+ },
+ {
+ "epoch": 1.916376306620209,
+ "grad_norm": 6.099605163051641,
+ "learning_rate": 3.729821173711411e-09,
+ "logits/chosen": -2.6054625511169434,
+ "logits/rejected": -2.585512161254883,
+ "logps/chosen": -78.79656982421875,
+ "logps/rejected": -100.56778717041016,
+ "loss": 0.3741,
+ "rewards/accuracies": 0.5,
+ "rewards/chosen": 11.91783332824707,
+ "rewards/margins": 25.462675094604492,
+ "rewards/rejected": -13.544839859008789,
+ "step": 550
+ },
+ {
+ "epoch": 1.951219512195122,
+ "grad_norm": 239.71729089566762,
+ "learning_rate": 1.2706662784136513e-09,
+ "logits/chosen": -2.601435422897339,
+ "logits/rejected": -2.609788417816162,
+ "logps/chosen": -56.35918045043945,
+ "logps/rejected": -76.89906311035156,
+ "loss": 0.4008,
+ "rewards/accuracies": 0.4437499940395355,
+ "rewards/chosen": 7.287221431732178,
+ "rewards/margins": 13.824111938476562,
+ "rewards/rejected": -6.536891937255859,
+ "step": 560
+ },
+ {
+ "epoch": 1.986062717770035,
+ "grad_norm": 23.401979133014883,
+ "learning_rate": 1.0378555420122448e-10,
+ "logits/chosen": -2.6814956665039062,
+ "logits/rejected": -2.6513266563415527,
+ "logps/chosen": -57.64264678955078,
+ "logps/rejected": -68.03236389160156,
+ "loss": 0.3939,
+ "rewards/accuracies": 0.4375,
+ "rewards/chosen": 7.494875907897949,
+ "rewards/margins": 12.909930229187012,
+ "rewards/rejected": -5.4150543212890625,
+ "step": 570
+ },
+ {
+ "epoch": 2.0,
+ "step": 574,
+ "total_flos": 0.0,
+ "train_loss": 0.6661292189920406,
+ "train_runtime": 6507.4651,
+ "train_samples_per_second": 5.637,
+ "train_steps_per_second": 0.088
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 574,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 2,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 8,
+ "trial_name": null,
+ "trial_params": null
+ }