lole25 committed on
Commit a92810a
1 Parent(s): 7955a1d

Model save

README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ license: mit
+ library_name: peft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ base_model: DUAL-GPO/phi-2-gpo-new-i0
+ model-index:
+ - name: phi-2-gpo-v35-i1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # phi-2-gpo-v35-i1
+
+ This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-new-i0](https://huggingface.co/DUAL-GPO/phi-2-gpo-new-i0) on the None dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.36.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
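
The usage sections of the added model card are still placeholders. As a minimal, hedged sketch (not part of this commit), assuming the adapter is published under the repo id DUAL-GPO/phi-2-gpo-v35-i1 and that the listed base_model resolves to a full causal LM checkpoint, loading it with peft would look roughly like this:

```python
# Illustrative only: repo ids and dtype are assumptions, not taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "DUAL-GPO/phi-2-gpo-new-i0"      # base_model named in the card
adapter_id = "DUAL-GPO/phi-2-gpo-v35-i1"   # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain LoRA adapters in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```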
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:066ee63acc48e6e3b43a9ed5e7f85cbac03459fe7e2f15942e9c75c1d703a020
+ oid sha256:afea87b6311e587ea4e1153fa522c44d07f13ca35caab548c4e5bf282f31f315
  size 167807296
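
This change only swaps the LFS pointer for the updated adapter weights; the oid is the SHA-256 of the file contents. A small illustrative check (the local path is an assumption) that a downloaded copy matches the new pointer:

```python
# Illustrative: verify a local download against the sha256 oid in the git-lfs pointer.
import hashlib

expected_oid = "afea87b6311e587ea4e1153fa522c44d07f13ca35caab548c4e5bf282f31f315"

digest = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert digest.hexdigest() == expected_oid, "checksum mismatch"
print("adapter_model.safetensors matches the updated LFS pointer")
```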
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.08644322651809155,
+     "train_runtime": 5283.9744,
+     "train_samples": 30000,
+     "train_samples_per_second": 5.678,
+     "train_steps_per_second": 0.177
+ }
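
These numbers line up with the hyperparameters in the model card: 30,000 samples at an effective batch size of 32 (4 per device, 2 GPUs, 4 accumulation steps) give the 937 optimizer steps recorded in trainer_state.json, and the throughput figures follow from the runtime. A quick sketch of that arithmetic:

```python
# Sanity-check the summary against the hyperparameters (values copied from this commit).
train_samples = 30_000
per_device_batch = 4
num_devices = 2
grad_accum_steps = 4
runtime_s = 5283.9744

effective_batch = per_device_batch * num_devices * grad_accum_steps  # 32
steps = train_samples // effective_batch                             # 937

print(effective_batch, steps)
print(round(train_samples / runtime_s, 3))   # ~5.678 samples per second
print(round(steps / runtime_s, 3))           # ~0.177 steps per second
```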
runs/May16_23-27-05_gpu4-119-5/events.out.tfevents.1715866164.gpu4-119-5.702446.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:254e598bf850564742adc74376690d6b9d6bc6300fe4a767f97b2aa5da64656b
- size 30195
+ oid sha256:36de2e70ca0692165cd4f348b0a3bd529950bed00bb1e59131db6a5667b79a6f
+ size 32451
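
The updated TensorBoard event file carries the same per-step scalars that appear in trainer_state.json. A hedged sketch of reading it back, assuming the tensorboard package is installed and the run directory has been downloaded locally (the scalar tag names are assumptions, since they depend on how the trainer logged them):

```python
# Illustrative: inspect the logged scalars in a downloaded run directory.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

run_dir = "runs/May16_23-27-05_gpu4-119-5"  # assumed local copy of the run directory
acc = EventAccumulator(run_dir)
acc.Reload()

print(acc.Tags()["scalars"])                # list the available tags first
for event in acc.Scalars("train/loss"):     # assumed tag name; pick one from the list above
    print(event.step, event.value)
```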
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.08644322651809155,
+     "train_runtime": 5283.9744,
+     "train_samples": 30000,
+     "train_samples_per_second": 5.678,
+     "train_steps_per_second": 0.177
+ }
trainer_state.json ADDED
@@ -0,0 +1,1346 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.9994666666666666,
5
+ "eval_steps": 500,
6
+ "global_step": 937,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "learning_rate": 5.319148936170213e-08,
14
+ "logits/chosen": 0.1184067577123642,
15
+ "logits/rejected": 0.3525714576244354,
16
+ "logps/chosen": -429.5767822265625,
17
+ "logps/rejected": -514.7810668945312,
18
+ "loss": 0.3025,
19
+ "rewards/accuracies": 0.0,
20
+ "rewards/chosen": 0.0,
21
+ "rewards/margins": 0.0,
22
+ "rewards/rejected": 0.0,
23
+ "step": 1
24
+ },
25
+ {
26
+ "epoch": 0.01,
27
+ "learning_rate": 5.319148936170213e-07,
28
+ "logits/chosen": 0.05111883208155632,
29
+ "logits/rejected": 0.23624171316623688,
30
+ "logps/chosen": -363.744140625,
31
+ "logps/rejected": -491.4056091308594,
32
+ "loss": 0.3259,
33
+ "rewards/accuracies": 0.3819444477558136,
34
+ "rewards/chosen": 2.125546416209545e-05,
35
+ "rewards/margins": -4.7190667828544974e-05,
36
+ "rewards/rejected": 6.844610470579937e-05,
37
+ "step": 10
38
+ },
39
+ {
40
+ "epoch": 0.02,
41
+ "learning_rate": 1.0638297872340427e-06,
42
+ "logits/chosen": 0.0970318466424942,
43
+ "logits/rejected": 0.21818876266479492,
44
+ "logps/chosen": -342.73223876953125,
45
+ "logps/rejected": -533.9749755859375,
46
+ "loss": 0.3389,
47
+ "rewards/accuracies": 0.5062500238418579,
48
+ "rewards/chosen": -0.00010573499457677826,
49
+ "rewards/margins": 0.00035456178011372685,
50
+ "rewards/rejected": -0.00046029678196646273,
51
+ "step": 20
52
+ },
53
+ {
54
+ "epoch": 0.03,
55
+ "learning_rate": 1.595744680851064e-06,
56
+ "logits/chosen": 0.08977697044610977,
57
+ "logits/rejected": 0.20492060482501984,
58
+ "logps/chosen": -356.282958984375,
59
+ "logps/rejected": -506.893798828125,
60
+ "loss": 0.3282,
61
+ "rewards/accuracies": 0.6499999761581421,
62
+ "rewards/chosen": -0.0007758208666928113,
63
+ "rewards/margins": 0.0015017592813819647,
64
+ "rewards/rejected": -0.0022775803226977587,
65
+ "step": 30
66
+ },
67
+ {
68
+ "epoch": 0.04,
69
+ "learning_rate": 2.1276595744680853e-06,
70
+ "logits/chosen": 0.14597493410110474,
71
+ "logits/rejected": 0.2742587625980377,
72
+ "logps/chosen": -373.9236755371094,
73
+ "logps/rejected": -478.6368103027344,
74
+ "loss": 0.3329,
75
+ "rewards/accuracies": 0.699999988079071,
76
+ "rewards/chosen": -0.0021772016771137714,
77
+ "rewards/margins": 0.00360355107113719,
78
+ "rewards/rejected": -0.005780753213912249,
79
+ "step": 40
80
+ },
81
+ {
82
+ "epoch": 0.05,
83
+ "learning_rate": 2.6595744680851065e-06,
84
+ "logits/chosen": 0.06719115376472473,
85
+ "logits/rejected": 0.20441851019859314,
86
+ "logps/chosen": -339.3710021972656,
87
+ "logps/rejected": -437.3455505371094,
88
+ "loss": 0.3465,
89
+ "rewards/accuracies": 0.7250000238418579,
90
+ "rewards/chosen": -0.00426669092848897,
91
+ "rewards/margins": 0.009421268478035927,
92
+ "rewards/rejected": -0.01368795894086361,
93
+ "step": 50
94
+ },
95
+ {
96
+ "epoch": 0.06,
97
+ "learning_rate": 3.191489361702128e-06,
98
+ "logits/chosen": -0.008065430447459221,
99
+ "logits/rejected": 0.08582984656095505,
100
+ "logps/chosen": -321.49005126953125,
101
+ "logps/rejected": -471.43896484375,
102
+ "loss": 0.3231,
103
+ "rewards/accuracies": 0.7562500238418579,
104
+ "rewards/chosen": -0.008598757907748222,
105
+ "rewards/margins": 0.023485153913497925,
106
+ "rewards/rejected": -0.032083909958601,
107
+ "step": 60
108
+ },
109
+ {
110
+ "epoch": 0.07,
111
+ "learning_rate": 3.723404255319149e-06,
112
+ "logits/chosen": 0.06349015235900879,
113
+ "logits/rejected": 0.1338900625705719,
114
+ "logps/chosen": -400.74725341796875,
115
+ "logps/rejected": -535.9130859375,
116
+ "loss": 0.3199,
117
+ "rewards/accuracies": 0.768750011920929,
118
+ "rewards/chosen": -0.02710326947271824,
119
+ "rewards/margins": 0.03926965966820717,
120
+ "rewards/rejected": -0.06637293100357056,
121
+ "step": 70
122
+ },
123
+ {
124
+ "epoch": 0.09,
125
+ "learning_rate": 4.255319148936171e-06,
126
+ "logits/chosen": -0.012521197088062763,
127
+ "logits/rejected": 0.05875171348452568,
128
+ "logps/chosen": -416.5260314941406,
129
+ "logps/rejected": -619.4620971679688,
130
+ "loss": 0.2653,
131
+ "rewards/accuracies": 0.762499988079071,
132
+ "rewards/chosen": -0.04513772577047348,
133
+ "rewards/margins": 0.09761229157447815,
134
+ "rewards/rejected": -0.14275000989437103,
135
+ "step": 80
136
+ },
137
+ {
138
+ "epoch": 0.1,
139
+ "learning_rate": 4.787234042553192e-06,
140
+ "logits/chosen": -0.06508742272853851,
141
+ "logits/rejected": -0.025008535012602806,
142
+ "logps/chosen": -398.4359436035156,
143
+ "logps/rejected": -709.9359130859375,
144
+ "loss": 0.2408,
145
+ "rewards/accuracies": 0.8187500238418579,
146
+ "rewards/chosen": -0.06391867995262146,
147
+ "rewards/margins": 0.18030937016010284,
148
+ "rewards/rejected": -0.2442280501127243,
149
+ "step": 90
150
+ },
151
+ {
152
+ "epoch": 0.11,
153
+ "learning_rate": 4.999375059004058e-06,
154
+ "logits/chosen": -0.11634773015975952,
155
+ "logits/rejected": -0.04804380610585213,
156
+ "logps/chosen": -461.4971618652344,
157
+ "logps/rejected": -726.8289794921875,
158
+ "loss": 0.2717,
159
+ "rewards/accuracies": 0.7749999761581421,
160
+ "rewards/chosen": -0.0884505957365036,
161
+ "rewards/margins": 0.17610163986682892,
162
+ "rewards/rejected": -0.2645522356033325,
163
+ "step": 100
164
+ },
165
+ {
166
+ "epoch": 0.12,
167
+ "learning_rate": 4.9955571065548795e-06,
168
+ "logits/chosen": -0.07285631448030472,
169
+ "logits/rejected": -0.07443198561668396,
170
+ "logps/chosen": -428.96588134765625,
171
+ "logps/rejected": -709.934326171875,
172
+ "loss": 0.2447,
173
+ "rewards/accuracies": 0.762499988079071,
174
+ "rewards/chosen": -0.06827546656131744,
175
+ "rewards/margins": 0.1353142112493515,
176
+ "rewards/rejected": -0.20358964800834656,
177
+ "step": 110
178
+ },
179
+ {
180
+ "epoch": 0.13,
181
+ "learning_rate": 4.9882736864879e-06,
182
+ "logits/chosen": -0.176306813955307,
183
+ "logits/rejected": -0.049706846475601196,
184
+ "logps/chosen": -430.32562255859375,
185
+ "logps/rejected": -693.2147216796875,
186
+ "loss": 0.2456,
187
+ "rewards/accuracies": 0.7437499761581421,
188
+ "rewards/chosen": -0.08173463493585587,
189
+ "rewards/margins": 0.13715550303459167,
190
+ "rewards/rejected": -0.21889011561870575,
191
+ "step": 120
192
+ },
193
+ {
194
+ "epoch": 0.14,
195
+ "learning_rate": 4.977534912960124e-06,
196
+ "logits/chosen": -0.15699656307697296,
197
+ "logits/rejected": -0.0577981173992157,
198
+ "logps/chosen": -445.72894287109375,
199
+ "logps/rejected": -769.3819580078125,
200
+ "loss": 0.2081,
201
+ "rewards/accuracies": 0.7875000238418579,
202
+ "rewards/chosen": -0.07942022383213043,
203
+ "rewards/margins": 0.19334132969379425,
204
+ "rewards/rejected": -0.2727615237236023,
205
+ "step": 130
206
+ },
207
+ {
208
+ "epoch": 0.15,
209
+ "learning_rate": 4.963355698422092e-06,
210
+ "logits/chosen": -0.18548890948295593,
211
+ "logits/rejected": -0.12828989326953888,
212
+ "logps/chosen": -451.9037170410156,
213
+ "logps/rejected": -739.3684692382812,
214
+ "loss": 0.2257,
215
+ "rewards/accuracies": 0.78125,
216
+ "rewards/chosen": -0.0921342521905899,
217
+ "rewards/margins": 0.16765089333057404,
218
+ "rewards/rejected": -0.25978511571884155,
219
+ "step": 140
220
+ },
221
+ {
222
+ "epoch": 0.16,
223
+ "learning_rate": 4.945755732909625e-06,
224
+ "logits/chosen": -0.15024694800376892,
225
+ "logits/rejected": -0.06675387918949127,
226
+ "logps/chosen": -537.3986206054688,
227
+ "logps/rejected": -775.892578125,
228
+ "loss": 0.2479,
229
+ "rewards/accuracies": 0.7749999761581421,
230
+ "rewards/chosen": -0.10632505267858505,
231
+ "rewards/margins": 0.16134649515151978,
232
+ "rewards/rejected": -0.2676715552806854,
233
+ "step": 150
234
+ },
235
+ {
236
+ "epoch": 0.17,
237
+ "learning_rate": 4.924759456701167e-06,
238
+ "logits/chosen": -0.20865170657634735,
239
+ "logits/rejected": -0.10186436027288437,
240
+ "logps/chosen": -483.97760009765625,
241
+ "logps/rejected": -849.4505004882812,
242
+ "loss": 0.2265,
243
+ "rewards/accuracies": 0.824999988079071,
244
+ "rewards/chosen": -0.09985359013080597,
245
+ "rewards/margins": 0.2456916868686676,
246
+ "rewards/rejected": -0.34554529190063477,
247
+ "step": 160
248
+ },
249
+ {
250
+ "epoch": 0.18,
251
+ "learning_rate": 4.900396026378671e-06,
252
+ "logits/chosen": -0.23064300417900085,
253
+ "logits/rejected": -0.12898269295692444,
254
+ "logps/chosen": -494.6697692871094,
255
+ "logps/rejected": -781.0926513671875,
256
+ "loss": 0.2324,
257
+ "rewards/accuracies": 0.7875000238418579,
258
+ "rewards/chosen": -0.11788048595190048,
259
+ "rewards/margins": 0.17606906592845917,
260
+ "rewards/rejected": -0.29394954442977905,
261
+ "step": 170
262
+ },
263
+ {
264
+ "epoch": 0.19,
265
+ "learning_rate": 4.872699274339169e-06,
266
+ "logits/chosen": -0.22916531562805176,
267
+ "logits/rejected": -0.14095698297023773,
268
+ "logps/chosen": -458.0244140625,
269
+ "logps/rejected": -759.4512939453125,
270
+ "loss": 0.2393,
271
+ "rewards/accuracies": 0.8374999761581421,
272
+ "rewards/chosen": -0.12103551626205444,
273
+ "rewards/margins": 0.17601092159748077,
274
+ "rewards/rejected": -0.29704639315605164,
275
+ "step": 180
276
+ },
277
+ {
278
+ "epoch": 0.2,
279
+ "learning_rate": 4.8417076618132434e-06,
280
+ "logits/chosen": -0.19613122940063477,
281
+ "logits/rejected": -0.19725319743156433,
282
+ "logps/chosen": -477.3087463378906,
283
+ "logps/rejected": -782.9113159179688,
284
+ "loss": 0.2591,
285
+ "rewards/accuracies": 0.800000011920929,
286
+ "rewards/chosen": -0.12908069789409637,
287
+ "rewards/margins": 0.18730813264846802,
288
+ "rewards/rejected": -0.3163888156414032,
289
+ "step": 190
290
+ },
291
+ {
292
+ "epoch": 0.21,
293
+ "learning_rate": 4.807464225455655e-06,
294
+ "logits/chosen": -0.18979570269584656,
295
+ "logits/rejected": -0.12616662681102753,
296
+ "logps/chosen": -501.4048767089844,
297
+ "logps/rejected": -773.6704711914062,
298
+ "loss": 0.2318,
299
+ "rewards/accuracies": 0.71875,
300
+ "rewards/chosen": -0.126991406083107,
301
+ "rewards/margins": 0.16587643325328827,
302
+ "rewards/rejected": -0.29286783933639526,
303
+ "step": 200
304
+ },
305
+ {
306
+ "epoch": 0.22,
307
+ "learning_rate": 4.770016517582283e-06,
308
+ "logits/chosen": -0.19870837032794952,
309
+ "logits/rejected": -0.1376388520002365,
310
+ "logps/chosen": -476.4922790527344,
311
+ "logps/rejected": -797.6036376953125,
312
+ "loss": 0.2344,
313
+ "rewards/accuracies": 0.7875000238418579,
314
+ "rewards/chosen": -0.12952814996242523,
315
+ "rewards/margins": 0.19133572280406952,
316
+ "rewards/rejected": -0.32086387276649475,
317
+ "step": 210
318
+ },
319
+ {
320
+ "epoch": 0.23,
321
+ "learning_rate": 4.7294165401363616e-06,
322
+ "logits/chosen": -0.18674388527870178,
323
+ "logits/rejected": -0.17149865627288818,
324
+ "logps/chosen": -476.748779296875,
325
+ "logps/rejected": -736.6768798828125,
326
+ "loss": 0.2225,
327
+ "rewards/accuracies": 0.768750011920929,
328
+ "rewards/chosen": -0.1292114555835724,
329
+ "rewards/margins": 0.17972734570503235,
330
+ "rewards/rejected": -0.30893880128860474,
331
+ "step": 220
332
+ },
333
+ {
334
+ "epoch": 0.25,
335
+ "learning_rate": 4.68572067247573e-06,
336
+ "logits/chosen": -0.2134261131286621,
337
+ "logits/rejected": -0.17389492690563202,
338
+ "logps/chosen": -556.177490234375,
339
+ "logps/rejected": -932.4171752929688,
340
+ "loss": 0.2011,
341
+ "rewards/accuracies": 0.7875000238418579,
342
+ "rewards/chosen": -0.17576465010643005,
343
+ "rewards/margins": 0.23368000984191895,
344
+ "rewards/rejected": -0.4094447195529938,
345
+ "step": 230
346
+ },
347
+ {
348
+ "epoch": 0.26,
349
+ "learning_rate": 4.638989593081364e-06,
350
+ "logits/chosen": -0.2312520444393158,
351
+ "logits/rejected": -0.17490056157112122,
352
+ "logps/chosen": -608.5135498046875,
353
+ "logps/rejected": -883.7859497070312,
354
+ "loss": 0.2251,
355
+ "rewards/accuracies": 0.78125,
356
+ "rewards/chosen": -0.19390198588371277,
357
+ "rewards/margins": 0.20176473259925842,
358
+ "rewards/rejected": -0.3956666886806488,
359
+ "step": 240
360
+ },
361
+ {
362
+ "epoch": 0.27,
363
+ "learning_rate": 4.5892881952959015e-06,
364
+ "logits/chosen": -0.19589188694953918,
365
+ "logits/rejected": -0.16277478635311127,
366
+ "logps/chosen": -522.8567504882812,
367
+ "logps/rejected": -815.2623291015625,
368
+ "loss": 0.2215,
369
+ "rewards/accuracies": 0.793749988079071,
370
+ "rewards/chosen": -0.15903013944625854,
371
+ "rewards/margins": 0.19178643822669983,
372
+ "rewards/rejected": -0.3508165776729584,
373
+ "step": 250
374
+ },
375
+ {
376
+ "epoch": 0.28,
377
+ "learning_rate": 4.536685497209182e-06,
378
+ "logits/chosen": -0.25143852829933167,
379
+ "logits/rejected": -0.15717722475528717,
380
+ "logps/chosen": -533.4439697265625,
381
+ "logps/rejected": -940.82763671875,
382
+ "loss": 0.2056,
383
+ "rewards/accuracies": 0.862500011920929,
384
+ "rewards/chosen": -0.18470649421215057,
385
+ "rewards/margins": 0.22995992004871368,
386
+ "rewards/rejected": -0.41466641426086426,
387
+ "step": 260
388
+ },
389
+ {
390
+ "epoch": 0.29,
391
+ "learning_rate": 4.481254545815943e-06,
392
+ "logits/chosen": -0.23728492856025696,
393
+ "logits/rejected": -0.22243991494178772,
394
+ "logps/chosen": -552.9218139648438,
395
+ "logps/rejected": -903.7254028320312,
396
+ "loss": 0.196,
397
+ "rewards/accuracies": 0.800000011920929,
398
+ "rewards/chosen": -0.20833177864551544,
399
+ "rewards/margins": 0.22902217507362366,
400
+ "rewards/rejected": -0.4373539388179779,
401
+ "step": 270
402
+ },
403
+ {
404
+ "epoch": 0.3,
405
+ "learning_rate": 4.42307231557875e-06,
406
+ "logits/chosen": -0.27826839685440063,
407
+ "logits/rejected": -0.200110524892807,
408
+ "logps/chosen": -523.31787109375,
409
+ "logps/rejected": -899.9382934570312,
410
+ "loss": 0.1954,
411
+ "rewards/accuracies": 0.75,
412
+ "rewards/chosen": -0.20382864773273468,
413
+ "rewards/margins": 0.2211592197418213,
414
+ "rewards/rejected": -0.42498788237571716,
415
+ "step": 280
416
+ },
417
+ {
418
+ "epoch": 0.31,
419
+ "learning_rate": 4.3622196015370305e-06,
420
+ "logits/chosen": -0.23895792663097382,
421
+ "logits/rejected": -0.14843276143074036,
422
+ "logps/chosen": -594.8115844726562,
423
+ "logps/rejected": -907.5458984375,
424
+ "loss": 0.2135,
425
+ "rewards/accuracies": 0.800000011920929,
426
+ "rewards/chosen": -0.19232866168022156,
427
+ "rewards/margins": 0.21376939117908478,
428
+ "rewards/rejected": -0.40609806776046753,
429
+ "step": 290
430
+ },
431
+ {
432
+ "epoch": 0.32,
433
+ "learning_rate": 4.298780907110648e-06,
434
+ "logits/chosen": -0.24878796935081482,
435
+ "logits/rejected": -0.18727460503578186,
436
+ "logps/chosen": -580.38671875,
437
+ "logps/rejected": -878.1405029296875,
438
+ "loss": 0.1938,
439
+ "rewards/accuracies": 0.800000011920929,
440
+ "rewards/chosen": -0.2084374725818634,
441
+ "rewards/margins": 0.20776410400867462,
442
+ "rewards/rejected": -0.4162015914916992,
443
+ "step": 300
444
+ },
445
+ {
446
+ "epoch": 0.33,
447
+ "learning_rate": 4.23284432675381e-06,
448
+ "logits/chosen": -0.3039627969264984,
449
+ "logits/rejected": -0.23918378353118896,
450
+ "logps/chosen": -601.3494873046875,
451
+ "logps/rejected": -868.8585815429688,
452
+ "loss": 0.2041,
453
+ "rewards/accuracies": 0.7875000238418579,
454
+ "rewards/chosen": -0.23962783813476562,
455
+ "rewards/margins": 0.18101730942726135,
456
+ "rewards/rejected": -0.420645147562027,
457
+ "step": 310
458
+ },
459
+ {
460
+ "epoch": 0.34,
461
+ "learning_rate": 4.164501423622277e-06,
462
+ "logits/chosen": -0.3026599884033203,
463
+ "logits/rejected": -0.22676105797290802,
464
+ "logps/chosen": -559.4996337890625,
465
+ "logps/rejected": -871.5550537109375,
466
+ "loss": 0.1855,
467
+ "rewards/accuracies": 0.8187500238418579,
468
+ "rewards/chosen": -0.21141842007637024,
469
+ "rewards/margins": 0.21575455367565155,
470
+ "rewards/rejected": -0.427172988653183,
471
+ "step": 320
472
+ },
473
+ {
474
+ "epoch": 0.35,
475
+ "learning_rate": 4.0938471024237355e-06,
476
+ "logits/chosen": -0.24148467183113098,
477
+ "logits/rejected": -0.19344112277030945,
478
+ "logps/chosen": -570.2223510742188,
479
+ "logps/rejected": -958.9056396484375,
480
+ "loss": 0.1827,
481
+ "rewards/accuracies": 0.8187500238418579,
482
+ "rewards/chosen": -0.22266192734241486,
483
+ "rewards/margins": 0.2378120869398117,
484
+ "rewards/rejected": -0.46047407388687134,
485
+ "step": 330
486
+ },
487
+ {
488
+ "epoch": 0.36,
489
+ "learning_rate": 4.020979477627907e-06,
490
+ "logits/chosen": -0.23564183712005615,
491
+ "logits/rejected": -0.23679451644420624,
492
+ "logps/chosen": -623.2242431640625,
493
+ "logps/rejected": -871.82568359375,
494
+ "loss": 0.2297,
495
+ "rewards/accuracies": 0.7562500238418579,
496
+ "rewards/chosen": -0.2473243921995163,
497
+ "rewards/margins": 0.19464418292045593,
498
+ "rewards/rejected": -0.44196853041648865,
499
+ "step": 340
500
+ },
501
+ {
502
+ "epoch": 0.37,
503
+ "learning_rate": 3.9459997372194105e-06,
504
+ "logits/chosen": -0.22112271189689636,
505
+ "logits/rejected": -0.24255314469337463,
506
+ "logps/chosen": -654.8084716796875,
507
+ "logps/rejected": -992.0069580078125,
508
+ "loss": 0.2101,
509
+ "rewards/accuracies": 0.768750011920929,
510
+ "rewards/chosen": -0.2653377950191498,
511
+ "rewards/margins": 0.21331480145454407,
512
+ "rewards/rejected": -0.47865256667137146,
513
+ "step": 350
514
+ },
515
+ {
516
+ "epoch": 0.38,
517
+ "learning_rate": 3.869012002182573e-06,
518
+ "logits/chosen": -0.2565579116344452,
519
+ "logits/rejected": -0.23455138504505157,
520
+ "logps/chosen": -634.1543579101562,
521
+ "logps/rejected": -1048.505859375,
522
+ "loss": 0.1816,
523
+ "rewards/accuracies": 0.8687499761581421,
524
+ "rewards/chosen": -0.246078759431839,
525
+ "rewards/margins": 0.2945541739463806,
526
+ "rewards/rejected": -0.540632963180542,
527
+ "step": 360
528
+ },
529
+ {
530
+ "epoch": 0.39,
531
+ "learning_rate": 3.7901231819133104e-06,
532
+ "logits/chosen": -0.27797406911849976,
533
+ "logits/rejected": -0.24355745315551758,
534
+ "logps/chosen": -634.2208251953125,
535
+ "logps/rejected": -972.6463012695312,
536
+ "loss": 0.2072,
537
+ "rewards/accuracies": 0.8125,
538
+ "rewards/chosen": -0.2534112334251404,
539
+ "rewards/margins": 0.23360367119312286,
540
+ "rewards/rejected": -0.48701491951942444,
541
+ "step": 370
542
+ },
543
+ {
544
+ "epoch": 0.41,
545
+ "learning_rate": 3.709442825758875e-06,
546
+ "logits/chosen": -0.2145443856716156,
547
+ "logits/rejected": -0.2447199821472168,
548
+ "logps/chosen": -630.0811157226562,
549
+ "logps/rejected": -895.8094482421875,
550
+ "loss": 0.2066,
551
+ "rewards/accuracies": 0.78125,
552
+ "rewards/chosen": -0.2521076202392578,
553
+ "rewards/margins": 0.2124219834804535,
554
+ "rewards/rejected": -0.4645296037197113,
555
+ "step": 380
556
+ },
557
+ {
558
+ "epoch": 0.42,
559
+ "learning_rate": 3.6270829708916113e-06,
560
+ "logits/chosen": -0.26773589849472046,
561
+ "logits/rejected": -0.1856563836336136,
562
+ "logps/chosen": -635.20703125,
563
+ "logps/rejected": -1008.9361572265625,
564
+ "loss": 0.1878,
565
+ "rewards/accuracies": 0.8125,
566
+ "rewards/chosen": -0.2653789818286896,
567
+ "rewards/margins": 0.2648240625858307,
568
+ "rewards/rejected": -0.5302029848098755,
569
+ "step": 390
570
+ },
571
+ {
572
+ "epoch": 0.43,
573
+ "learning_rate": 3.543157986727991e-06,
574
+ "logits/chosen": -0.24474501609802246,
575
+ "logits/rejected": -0.21170318126678467,
576
+ "logps/chosen": -730.6715087890625,
577
+ "logps/rejected": -1069.3619384765625,
578
+ "loss": 0.2077,
579
+ "rewards/accuracies": 0.800000011920929,
580
+ "rewards/chosen": -0.2803260385990143,
581
+ "rewards/margins": 0.2544651925563812,
582
+ "rewards/rejected": -0.5347911715507507,
583
+ "step": 400
584
+ },
585
+ {
586
+ "epoch": 0.44,
587
+ "learning_rate": 3.4577844161089614e-06,
588
+ "logits/chosen": -0.1851538121700287,
589
+ "logits/rejected": -0.24047665297985077,
590
+ "logps/chosen": -616.1502685546875,
591
+ "logps/rejected": -903.6608276367188,
592
+ "loss": 0.2123,
593
+ "rewards/accuracies": 0.7437499761581421,
594
+ "rewards/chosen": -0.24726176261901855,
595
+ "rewards/margins": 0.19664901494979858,
596
+ "rewards/rejected": -0.44391077756881714,
597
+ "step": 410
598
+ },
599
+ {
600
+ "epoch": 0.45,
601
+ "learning_rate": 3.3710808134621577e-06,
602
+ "logits/chosen": -0.2203509360551834,
603
+ "logits/rejected": -0.2040051519870758,
604
+ "logps/chosen": -636.1844482421875,
605
+ "logps/rejected": -1008.9363403320312,
606
+ "loss": 0.1868,
607
+ "rewards/accuracies": 0.856249988079071,
608
+ "rewards/chosen": -0.2344098538160324,
609
+ "rewards/margins": 0.2508249282836914,
610
+ "rewards/rejected": -0.485234797000885,
611
+ "step": 420
612
+ },
613
+ {
614
+ "epoch": 0.46,
615
+ "learning_rate": 3.2831675801707126e-06,
616
+ "logits/chosen": -0.2882968783378601,
617
+ "logits/rejected": -0.28868910670280457,
618
+ "logps/chosen": -629.03662109375,
619
+ "logps/rejected": -997.5048828125,
620
+ "loss": 0.1877,
621
+ "rewards/accuracies": 0.8062499761581421,
622
+ "rewards/chosen": -0.2724894881248474,
623
+ "rewards/margins": 0.2427108734846115,
624
+ "rewards/rejected": -0.5152003765106201,
625
+ "step": 430
626
+ },
627
+ {
628
+ "epoch": 0.47,
629
+ "learning_rate": 3.194166797377289e-06,
630
+ "logits/chosen": -0.2842113971710205,
631
+ "logits/rejected": -0.2725662589073181,
632
+ "logps/chosen": -629.4954223632812,
633
+ "logps/rejected": -921.0350341796875,
634
+ "loss": 0.1871,
635
+ "rewards/accuracies": 0.824999988079071,
636
+ "rewards/chosen": -0.2583664059638977,
637
+ "rewards/margins": 0.23012156784534454,
638
+ "rewards/rejected": -0.48848801851272583,
639
+ "step": 440
640
+ },
641
+ {
642
+ "epoch": 0.48,
643
+ "learning_rate": 3.104202056455501e-06,
644
+ "logits/chosen": -0.23092789947986603,
645
+ "logits/rejected": -0.2383890151977539,
646
+ "logps/chosen": -603.952392578125,
647
+ "logps/rejected": -931.93994140625,
648
+ "loss": 0.1852,
649
+ "rewards/accuracies": 0.768750011920929,
650
+ "rewards/chosen": -0.24851222336292267,
651
+ "rewards/margins": 0.2311173975467682,
652
+ "rewards/rejected": -0.4796296954154968,
653
+ "step": 450
654
+ },
655
+ {
656
+ "epoch": 0.49,
657
+ "learning_rate": 3.013398287384144e-06,
658
+ "logits/chosen": -0.25257351994514465,
659
+ "logits/rejected": -0.20124118030071259,
660
+ "logps/chosen": -720.2821044921875,
661
+ "logps/rejected": -1016.345703125,
662
+ "loss": 0.2169,
663
+ "rewards/accuracies": 0.7749999761581421,
664
+ "rewards/chosen": -0.291043221950531,
665
+ "rewards/margins": 0.23256464302539825,
666
+ "rewards/rejected": -0.5236078500747681,
667
+ "step": 460
668
+ },
669
+ {
670
+ "epoch": 0.5,
671
+ "learning_rate": 2.9218815852625717e-06,
672
+ "logits/chosen": -0.280508816242218,
673
+ "logits/rejected": -0.27508029341697693,
674
+ "logps/chosen": -599.3237915039062,
675
+ "logps/rejected": -959.4754028320312,
676
+ "loss": 0.1873,
677
+ "rewards/accuracies": 0.8125,
678
+ "rewards/chosen": -0.266213595867157,
679
+ "rewards/margins": 0.23423174023628235,
680
+ "rewards/rejected": -0.5004453063011169,
681
+ "step": 470
682
+ },
683
+ {
684
+ "epoch": 0.51,
685
+ "learning_rate": 2.829779035208113e-06,
686
+ "logits/chosen": -0.264869749546051,
687
+ "logits/rejected": -0.2316662073135376,
688
+ "logps/chosen": -632.74853515625,
689
+ "logps/rejected": -977.66845703125,
690
+ "loss": 0.1933,
691
+ "rewards/accuracies": 0.762499988079071,
692
+ "rewards/chosen": -0.2532680928707123,
693
+ "rewards/margins": 0.23631851375102997,
694
+ "rewards/rejected": -0.48958665132522583,
695
+ "step": 480
696
+ },
697
+ {
698
+ "epoch": 0.52,
699
+ "learning_rate": 2.737218535878705e-06,
700
+ "logits/chosen": -0.26723513007164,
701
+ "logits/rejected": -0.2752009928226471,
702
+ "logps/chosen": -612.4442749023438,
703
+ "logps/rejected": -949.7120971679688,
704
+ "loss": 0.1789,
705
+ "rewards/accuracies": 0.793749988079071,
706
+ "rewards/chosen": -0.2443923056125641,
707
+ "rewards/margins": 0.23941712081432343,
708
+ "rewards/rejected": -0.4838094115257263,
709
+ "step": 490
710
+ },
711
+ {
712
+ "epoch": 0.53,
713
+ "learning_rate": 2.64432862186579e-06,
714
+ "logits/chosen": -0.3223264515399933,
715
+ "logits/rejected": -0.29264402389526367,
716
+ "logps/chosen": -628.1448974609375,
717
+ "logps/rejected": -997.76123046875,
718
+ "loss": 0.1908,
719
+ "rewards/accuracies": 0.8187500238418579,
720
+ "rewards/chosen": -0.26595020294189453,
721
+ "rewards/margins": 0.25043538212776184,
722
+ "rewards/rejected": -0.516385555267334,
723
+ "step": 500
724
+ },
725
+ {
726
+ "epoch": 0.54,
727
+ "learning_rate": 2.551238285204126e-06,
728
+ "logits/chosen": -0.2382831573486328,
729
+ "logits/rejected": -0.2551121711730957,
730
+ "logps/chosen": -653.0089111328125,
731
+ "logps/rejected": -1023.2013549804688,
732
+ "loss": 0.2152,
733
+ "rewards/accuracies": 0.8125,
734
+ "rewards/chosen": -0.2814193367958069,
735
+ "rewards/margins": 0.2521519660949707,
736
+ "rewards/rejected": -0.5335713028907776,
737
+ "step": 510
738
+ },
739
+ {
740
+ "epoch": 0.55,
741
+ "learning_rate": 2.4580767962463688e-06,
742
+ "logits/chosen": -0.25415441393852234,
743
+ "logits/rejected": -0.2852818965911865,
744
+ "logps/chosen": -588.615234375,
745
+ "logps/rejected": -990.0931396484375,
746
+ "loss": 0.1917,
747
+ "rewards/accuracies": 0.793749988079071,
748
+ "rewards/chosen": -0.2602744996547699,
749
+ "rewards/margins": 0.24789556860923767,
750
+ "rewards/rejected": -0.5081701278686523,
751
+ "step": 520
752
+ },
753
+ {
754
+ "epoch": 0.57,
755
+ "learning_rate": 2.3649735241511546e-06,
756
+ "logits/chosen": -0.2810899317264557,
757
+ "logits/rejected": -0.24050931632518768,
758
+ "logps/chosen": -667.4718627929688,
759
+ "logps/rejected": -1030.656494140625,
760
+ "loss": 0.1788,
761
+ "rewards/accuracies": 0.793749988079071,
762
+ "rewards/chosen": -0.2851766049861908,
763
+ "rewards/margins": 0.2829527258872986,
764
+ "rewards/rejected": -0.5681293606758118,
765
+ "step": 530
766
+ },
767
+ {
768
+ "epoch": 0.58,
769
+ "learning_rate": 2.2720577572339914e-06,
770
+ "logits/chosen": -0.29503872990608215,
771
+ "logits/rejected": -0.2738192677497864,
772
+ "logps/chosen": -669.0242309570312,
773
+ "logps/rejected": -1088.0225830078125,
774
+ "loss": 0.1644,
775
+ "rewards/accuracies": 0.8062499761581421,
776
+ "rewards/chosen": -0.29529038071632385,
777
+ "rewards/margins": 0.3003653287887573,
778
+ "rewards/rejected": -0.5956557393074036,
779
+ "step": 540
780
+ },
781
+ {
782
+ "epoch": 0.59,
783
+ "learning_rate": 2.1794585234303995e-06,
784
+ "logits/chosen": -0.2950688898563385,
785
+ "logits/rejected": -0.1541706621646881,
786
+ "logps/chosen": -705.3633422851562,
787
+ "logps/rejected": -1031.0267333984375,
788
+ "loss": 0.1825,
789
+ "rewards/accuracies": 0.8999999761581421,
790
+ "rewards/chosen": -0.2902067303657532,
791
+ "rewards/margins": 0.284890741109848,
792
+ "rewards/rejected": -0.5750974416732788,
793
+ "step": 550
794
+ },
795
+ {
796
+ "epoch": 0.6,
797
+ "learning_rate": 2.0873044111206407e-06,
798
+ "logits/chosen": -0.28339633345603943,
799
+ "logits/rejected": -0.22169962525367737,
800
+ "logps/chosen": -629.9029541015625,
801
+ "logps/rejected": -1066.449462890625,
802
+ "loss": 0.1942,
803
+ "rewards/accuracies": 0.8500000238418579,
804
+ "rewards/chosen": -0.2589099705219269,
805
+ "rewards/margins": 0.2774832844734192,
806
+ "rewards/rejected": -0.5363932847976685,
807
+ "step": 560
808
+ },
809
+ {
810
+ "epoch": 0.61,
811
+ "learning_rate": 1.9957233905648293e-06,
812
+ "logits/chosen": -0.24862179160118103,
813
+ "logits/rejected": -0.22971436381340027,
814
+ "logps/chosen": -636.7313232421875,
815
+ "logps/rejected": -972.0720825195312,
816
+ "loss": 0.1842,
817
+ "rewards/accuracies": 0.78125,
818
+ "rewards/chosen": -0.2589462399482727,
819
+ "rewards/margins": 0.2386230230331421,
820
+ "rewards/rejected": -0.4975692629814148,
821
+ "step": 570
822
+ },
823
+ {
824
+ "epoch": 0.62,
825
+ "learning_rate": 1.904842636196402e-06,
826
+ "logits/chosen": -0.2794085443019867,
827
+ "logits/rejected": -0.25304803252220154,
828
+ "logps/chosen": -655.1509399414062,
829
+ "logps/rejected": -1037.134033203125,
830
+ "loss": 0.2004,
831
+ "rewards/accuracies": 0.8374999761581421,
832
+ "rewards/chosen": -0.2735671103000641,
833
+ "rewards/margins": 0.27847912907600403,
834
+ "rewards/rejected": -0.5520461797714233,
835
+ "step": 580
836
+ },
837
+ {
838
+ "epoch": 0.63,
839
+ "learning_rate": 1.814788350020726e-06,
840
+ "logits/chosen": -0.2863375246524811,
841
+ "logits/rejected": -0.2578074336051941,
842
+ "logps/chosen": -644.8294067382812,
843
+ "logps/rejected": -978.0472412109375,
844
+ "loss": 0.1852,
845
+ "rewards/accuracies": 0.8125,
846
+ "rewards/chosen": -0.27340829372406006,
847
+ "rewards/margins": 0.256661593914032,
848
+ "rewards/rejected": -0.5300698280334473,
849
+ "step": 590
850
+ },
851
+ {
852
+ "epoch": 0.64,
853
+ "learning_rate": 1.725685586364051e-06,
854
+ "logits/chosen": -0.30925947427749634,
855
+ "logits/rejected": -0.26838865876197815,
856
+ "logps/chosen": -632.4716796875,
857
+ "logps/rejected": -994.3053588867188,
858
+ "loss": 0.2151,
859
+ "rewards/accuracies": 0.800000011920929,
860
+ "rewards/chosen": -0.27847468852996826,
861
+ "rewards/margins": 0.25020256638526917,
862
+ "rewards/rejected": -0.5286772847175598,
863
+ "step": 600
864
+ },
865
+ {
866
+ "epoch": 0.65,
867
+ "learning_rate": 1.6376580782162172e-06,
868
+ "logits/chosen": -0.20261721312999725,
869
+ "logits/rejected": -0.221277117729187,
870
+ "logps/chosen": -634.4677124023438,
871
+ "logps/rejected": -994.3683471679688,
872
+ "loss": 0.1934,
873
+ "rewards/accuracies": 0.793749988079071,
874
+ "rewards/chosen": -0.25305241346359253,
875
+ "rewards/margins": 0.2600497603416443,
876
+ "rewards/rejected": -0.5131021738052368,
877
+ "step": 610
878
+ },
879
+ {
880
+ "epoch": 0.66,
881
+ "learning_rate": 1.550828065408227e-06,
882
+ "logits/chosen": -0.293588250875473,
883
+ "logits/rejected": -0.250821977853775,
884
+ "logps/chosen": -588.5574951171875,
885
+ "logps/rejected": -976.6282348632812,
886
+ "loss": 0.1876,
887
+ "rewards/accuracies": 0.831250011920929,
888
+ "rewards/chosen": -0.26154106855392456,
889
+ "rewards/margins": 0.25152388215065,
890
+ "rewards/rejected": -0.5130649209022522,
891
+ "step": 620
892
+ },
893
+ {
894
+ "epoch": 0.67,
895
+ "learning_rate": 1.4653161248633053e-06,
896
+ "logits/chosen": -0.25644367933273315,
897
+ "logits/rejected": -0.2529579699039459,
898
+ "logps/chosen": -637.1114501953125,
899
+ "logps/rejected": -1041.966064453125,
900
+ "loss": 0.177,
901
+ "rewards/accuracies": 0.8062499761581421,
902
+ "rewards/chosen": -0.26990941166877747,
903
+ "rewards/margins": 0.2800416052341461,
904
+ "rewards/rejected": -0.5499509572982788,
905
+ "step": 630
906
+ },
907
+ {
908
+ "epoch": 0.68,
909
+ "learning_rate": 1.381241003157162e-06,
910
+ "logits/chosen": -0.25759977102279663,
911
+ "logits/rejected": -0.22868645191192627,
912
+ "logps/chosen": -585.88671875,
913
+ "logps/rejected": -971.8045043945312,
914
+ "loss": 0.1695,
915
+ "rewards/accuracies": 0.8500000238418579,
916
+ "rewards/chosen": -0.24511754512786865,
917
+ "rewards/margins": 0.2909080982208252,
918
+ "rewards/rejected": -0.5360256433486938,
919
+ "step": 640
920
+ },
921
+ {
922
+ "epoch": 0.69,
923
+ "learning_rate": 1.298719451619979e-06,
924
+ "logits/chosen": -0.27405595779418945,
925
+ "logits/rejected": -0.25046929717063904,
926
+ "logps/chosen": -686.3485107421875,
927
+ "logps/rejected": -1069.6209716796875,
928
+ "loss": 0.1779,
929
+ "rewards/accuracies": 0.793749988079071,
930
+ "rewards/chosen": -0.29048749804496765,
931
+ "rewards/margins": 0.3021007776260376,
932
+ "rewards/rejected": -0.5925883054733276,
933
+ "step": 650
934
+ },
935
+ {
936
+ "epoch": 0.7,
937
+ "learning_rate": 1.2178660642091036e-06,
938
+ "logits/chosen": -0.258391410112381,
939
+ "logits/rejected": -0.2719349265098572,
940
+ "logps/chosen": -642.6732177734375,
941
+ "logps/rejected": -1080.385498046875,
942
+ "loss": 0.1866,
943
+ "rewards/accuracies": 0.875,
944
+ "rewards/chosen": -0.2909965515136719,
945
+ "rewards/margins": 0.3018215596675873,
946
+ "rewards/rejected": -0.5928180813789368,
947
+ "step": 660
948
+ },
949
+ {
950
+ "epoch": 0.71,
951
+ "learning_rate": 1.1387931183775821e-06,
952
+ "logits/chosen": -0.29932543635368347,
953
+ "logits/rejected": -0.2793118357658386,
954
+ "logps/chosen": -649.6714477539062,
955
+ "logps/rejected": -1090.2452392578125,
956
+ "loss": 0.1646,
957
+ "rewards/accuracies": 0.831250011920929,
958
+ "rewards/chosen": -0.2679978013038635,
959
+ "rewards/margins": 0.30532750487327576,
960
+ "rewards/rejected": -0.5733253359794617,
961
+ "step": 670
962
+ },
963
+ {
964
+ "epoch": 0.73,
965
+ "learning_rate": 1.061610419159532e-06,
966
+ "logits/chosen": -0.27962726354599,
967
+ "logits/rejected": -0.21005702018737793,
968
+ "logps/chosen": -654.9832763671875,
969
+ "logps/rejected": -1042.5281982421875,
970
+ "loss": 0.195,
971
+ "rewards/accuracies": 0.8374999761581421,
972
+ "rewards/chosen": -0.28967055678367615,
973
+ "rewards/margins": 0.28377145528793335,
974
+ "rewards/rejected": -0.5734419822692871,
975
+ "step": 680
976
+ },
977
+ {
978
+ "epoch": 0.74,
979
+ "learning_rate": 9.864251466888364e-07,
980
+ "logits/chosen": -0.2319578379392624,
981
+ "logits/rejected": -0.22523808479309082,
982
+ "logps/chosen": -587.6854248046875,
983
+ "logps/rejected": -929.84716796875,
984
+ "loss": 0.1957,
985
+ "rewards/accuracies": 0.78125,
986
+ "rewards/chosen": -0.2619277834892273,
987
+ "rewards/margins": 0.24305231869220734,
988
+ "rewards/rejected": -0.5049800276756287,
989
+ "step": 690
990
+ },
991
+ {
992
+ "epoch": 0.75,
993
+ "learning_rate": 9.133417073629288e-07,
994
+ "logits/chosen": -0.2581741213798523,
995
+ "logits/rejected": -0.22566178441047668,
996
+ "logps/chosen": -673.9590454101562,
997
+ "logps/rejected": -1043.8779296875,
998
+ "loss": 0.1946,
999
+ "rewards/accuracies": 0.824999988079071,
1000
+ "rewards/chosen": -0.2741592526435852,
1001
+ "rewards/margins": 0.2668037712574005,
1002
+ "rewards/rejected": -0.5409630537033081,
1003
+ "step": 700
1004
+ },
1005
+ {
1006
+ "epoch": 0.76,
1007
+ "learning_rate": 8.424615888583332e-07,
1008
+ "logits/chosen": -0.24903810024261475,
1009
+ "logits/rejected": -0.19368259608745575,
1010
+ "logps/chosen": -702.1976318359375,
1011
+ "logps/rejected": -1104.967529296875,
1012
+ "loss": 0.1768,
1013
+ "rewards/accuracies": 0.78125,
1014
+ "rewards/chosen": -0.29275885224342346,
1015
+ "rewards/margins": 0.24981990456581116,
1016
+ "rewards/rejected": -0.5425786972045898,
1017
+ "step": 710
1018
+ },
1019
+ {
1020
+ "epoch": 0.77,
1021
+ "learning_rate": 7.738832191993092e-07,
1022
+ "logits/chosen": -0.2902497947216034,
1023
+ "logits/rejected": -0.27156323194503784,
1024
+ "logps/chosen": -642.2786865234375,
1025
+ "logps/rejected": -1030.1949462890625,
1026
+ "loss": 0.1896,
1027
+ "rewards/accuracies": 0.8125,
1028
+ "rewards/chosen": -0.296554833650589,
1029
+ "rewards/margins": 0.2522734999656677,
1030
+ "rewards/rejected": -0.5488283634185791,
1031
+ "step": 720
1032
+ },
1033
+ {
1034
+ "epoch": 0.78,
1035
+ "learning_rate": 7.077018300752917e-07,
1036
+ "logits/chosen": -0.25113141536712646,
1037
+ "logits/rejected": -0.27631789445877075,
1038
+ "logps/chosen": -658.340087890625,
1039
+ "logps/rejected": -1025.861328125,
1040
+ "loss": 0.1804,
1041
+ "rewards/accuracies": 0.800000011920929,
1042
+ "rewards/chosen": -0.2932133376598358,
1043
+ "rewards/margins": 0.24299320578575134,
1044
+ "rewards/rejected": -0.5362066030502319,
1045
+ "step": 730
1046
+ },
1047
+ {
1048
+ "epoch": 0.79,
1049
+ "learning_rate": 6.440093245969342e-07,
1050
+ "logits/chosen": -0.2601129412651062,
1051
+ "logits/rejected": -0.23299941420555115,
1052
+ "logps/chosen": -661.2008056640625,
1053
+ "logps/rejected": -1049.9329833984375,
1054
+ "loss": 0.1731,
1055
+ "rewards/accuracies": 0.84375,
1056
+ "rewards/chosen": -0.28521472215652466,
1057
+ "rewards/margins": 0.2829175591468811,
1058
+ "rewards/rejected": -0.5681322813034058,
1059
+ "step": 740
1060
+ },
1061
+ {
1062
+ "epoch": 0.8,
1063
+ "learning_rate": 5.828941496744075e-07,
1064
+ "logits/chosen": -0.2920230031013489,
1065
+ "logits/rejected": -0.2706466317176819,
1066
+ "logps/chosen": -614.7430419921875,
1067
+ "logps/rejected": -1084.22802734375,
1068
+ "loss": 0.1975,
1069
+ "rewards/accuracies": 0.8374999761581421,
1070
+ "rewards/chosen": -0.26638248562812805,
1071
+ "rewards/margins": 0.29731154441833496,
1072
+ "rewards/rejected": -0.5636940598487854,
1073
+ "step": 750
1074
+ },
1075
+ {
1076
+ "epoch": 0.81,
1077
+ "learning_rate": 5.244411731951671e-07,
1078
+ "logits/chosen": -0.2530083954334259,
1079
+ "logits/rejected": -0.2159876525402069,
1080
+ "logps/chosen": -636.4935913085938,
1081
+ "logps/rejected": -965.4192504882812,
1082
+ "loss": 0.1974,
1083
+ "rewards/accuracies": 0.800000011920929,
1084
+ "rewards/chosen": -0.27532270550727844,
1085
+ "rewards/margins": 0.23001885414123535,
1086
+ "rewards/rejected": -0.5053415298461914,
1087
+ "step": 760
1088
+ },
1089
+ {
1090
+ "epoch": 0.82,
1091
+ "learning_rate": 4.6873156617173594e-07,
1092
+ "logits/chosen": -0.2676984369754791,
1093
+ "logits/rejected": -0.2522314190864563,
1094
+ "logps/chosen": -619.7949829101562,
1095
+ "logps/rejected": -1006.8516845703125,
1096
+ "loss": 0.1934,
1097
+ "rewards/accuracies": 0.7875000238418579,
1098
+ "rewards/chosen": -0.25617390871047974,
1099
+ "rewards/margins": 0.25205108523368835,
1100
+ "rewards/rejected": -0.5082249641418457,
1101
+ "step": 770
1102
+ },
1103
+ {
1104
+ "epoch": 0.83,
1105
+ "learning_rate": 4.1584269002318653e-07,
1106
+ "logits/chosen": -0.276606023311615,
1107
+ "logits/rejected": -0.18771035969257355,
1108
+ "logps/chosen": -680.2154541015625,
1109
+ "logps/rejected": -1099.0855712890625,
1110
+ "loss": 0.1918,
1111
+ "rewards/accuracies": 0.8500000238418579,
1112
+ "rewards/chosen": -0.2760952413082123,
1113
+ "rewards/margins": 0.2856435477733612,
1114
+ "rewards/rejected": -0.5617388486862183,
1115
+ "step": 780
1116
+ },
1117
+ {
1118
+ "epoch": 0.84,
1119
+ "learning_rate": 3.658479891468258e-07,
1120
+ "logits/chosen": -0.2671615183353424,
1121
+ "logits/rejected": -0.28831931948661804,
1122
+ "logps/chosen": -652.9794921875,
1123
+ "logps/rejected": -1049.533447265625,
1124
+ "loss": 0.1852,
1125
+ "rewards/accuracies": 0.8125,
1126
+ "rewards/chosen": -0.2673850953578949,
1127
+ "rewards/margins": 0.2847273647785187,
1128
+ "rewards/rejected": -0.5521124601364136,
1129
+ "step": 790
1130
+ },
1131
+ {
1132
+ "epoch": 0.85,
1133
+ "learning_rate": 3.18816888929272e-07,
1134
+ "logits/chosen": -0.2824237644672394,
1135
+ "logits/rejected": -0.2227623015642166,
1136
+ "logps/chosen": -643.0541381835938,
1137
+ "logps/rejected": -957.2637939453125,
1138
+ "loss": 0.207,
1139
+ "rewards/accuracies": 0.75,
1140
+ "rewards/chosen": -0.2690204679965973,
1141
+ "rewards/margins": 0.2505555748939514,
1142
+ "rewards/rejected": -0.5195759534835815,
1143
+ "step": 800
1144
+ },
1145
+ {
1146
+ "epoch": 0.86,
1147
+ "learning_rate": 2.748146993385484e-07,
1148
+ "logits/chosen": -0.3143889307975769,
1149
+ "logits/rejected": -0.2759067118167877,
1150
+ "logps/chosen": -651.569580078125,
1151
+ "logps/rejected": -1108.028076171875,
1152
+ "loss": 0.1587,
1153
+ "rewards/accuracies": 0.800000011920929,
1154
+ "rewards/chosen": -0.2932509779930115,
1155
+ "rewards/margins": 0.2749115526676178,
1156
+ "rewards/rejected": -0.5681625604629517,
1157
+ "step": 810
1158
+ },
1159
+ {
1160
+ "epoch": 0.87,
1161
+ "learning_rate": 2.3390252423108077e-07,
1162
+ "logits/chosen": -0.26839888095855713,
1163
+ "logits/rejected": -0.23119351267814636,
1164
+ "logps/chosen": -665.0179443359375,
1165
+ "logps/rejected": -1112.646728515625,
1166
+ "loss": 0.1658,
1167
+ "rewards/accuracies": 0.8687499761581421,
1168
+ "rewards/chosen": -0.28339672088623047,
1169
+ "rewards/margins": 0.3101740777492523,
1170
+ "rewards/rejected": -0.5935708284378052,
1171
+ "step": 820
1172
+ },
1173
+ {
1174
+ "epoch": 0.89,
1175
+ "learning_rate": 1.961371764995243e-07,
1176
+ "logits/chosen": -0.2762928605079651,
1177
+ "logits/rejected": -0.28436392545700073,
1178
+ "logps/chosen": -631.9949951171875,
1179
+ "logps/rejected": -1104.309326171875,
1180
+ "loss": 0.1821,
1181
+ "rewards/accuracies": 0.862500011920929,
1182
+ "rewards/chosen": -0.2872810959815979,
1183
+ "rewards/margins": 0.303080290555954,
1184
+ "rewards/rejected": -0.5903614163398743,
1185
+ "step": 830
1186
+ },
1187
+ {
1188
+ "epoch": 0.9,
1189
+ "learning_rate": 1.61571099179261e-07,
1190
+ "logits/chosen": -0.2735849916934967,
1191
+ "logits/rejected": -0.2402714043855667,
1192
+ "logps/chosen": -652.4813842773438,
1193
+ "logps/rejected": -1141.485107421875,
1194
+ "loss": 0.1638,
1195
+ "rewards/accuracies": 0.8687499761581421,
1196
+ "rewards/chosen": -0.2600463032722473,
1197
+ "rewards/margins": 0.34138140082359314,
1198
+ "rewards/rejected": -0.6014277338981628,
1199
+ "step": 840
1200
+ },
1201
+ {
1202
+ "epoch": 0.91,
1203
+ "learning_rate": 1.3025229262312367e-07,
1204
+ "logits/chosen": -0.20563116669654846,
1205
+ "logits/rejected": -0.25282037258148193,
1206
+ "logps/chosen": -693.9833984375,
1207
+ "logps/rejected": -1084.752197265625,
1208
+ "loss": 0.183,
1209
+ "rewards/accuracies": 0.84375,
1210
+ "rewards/chosen": -0.2829940617084503,
1211
+ "rewards/margins": 0.27976807951927185,
1212
+ "rewards/rejected": -0.5627621412277222,
1213
+ "step": 850
1214
+ },
1215
+ {
1216
+ "epoch": 0.92,
1217
+ "learning_rate": 1.0222424784546853e-07,
1218
+ "logits/chosen": -0.22862930595874786,
1219
+ "logits/rejected": -0.21136240661144257,
1220
+ "logps/chosen": -638.4495239257812,
1221
+ "logps/rejected": -1061.037109375,
1222
+ "loss": 0.1807,
1223
+ "rewards/accuracies": 0.8062499761581421,
1224
+ "rewards/chosen": -0.2733309268951416,
1225
+ "rewards/margins": 0.28016480803489685,
1226
+ "rewards/rejected": -0.5534957647323608,
1227
+ "step": 860
1228
+ },
1229
+ {
1230
+ "epoch": 0.93,
1231
+ "learning_rate": 7.752588612816553e-08,
1232
+ "logits/chosen": -0.2907347083091736,
1233
+ "logits/rejected": -0.2525694966316223,
1234
+ "logps/chosen": -626.2022705078125,
1235
+ "logps/rejected": -999.1546630859375,
1236
+ "loss": 0.19,
1237
+ "rewards/accuracies": 0.8374999761581421,
1238
+ "rewards/chosen": -0.2700883746147156,
1239
+ "rewards/margins": 0.2750420868396759,
1240
+ "rewards/rejected": -0.5451304316520691,
1241
+ "step": 870
1242
+ },
1243
+ {
1244
+ "epoch": 0.94,
1245
+ "learning_rate": 5.619150497236991e-08,
1246
+ "logits/chosen": -0.2824448347091675,
1247
+ "logits/rejected": -0.21472208201885223,
1248
+ "logps/chosen": -694.7796630859375,
1249
+ "logps/rejected": -1101.357666015625,
1250
+ "loss": 0.1808,
1251
+ "rewards/accuracies": 0.8187500238418579,
1252
+ "rewards/chosen": -0.30949831008911133,
1253
+ "rewards/margins": 0.29203012585639954,
1254
+ "rewards/rejected": -0.601528525352478,
1255
+ "step": 880
1256
+ },
1257
+ {
1258
+ "epoch": 0.95,
1259
+ "learning_rate": 3.825073047112743e-08,
1260
+ "logits/chosen": -0.2339903563261032,
1261
+ "logits/rejected": -0.25414004921913147,
1262
+ "logps/chosen": -677.6926879882812,
1263
+ "logps/rejected": -1110.830810546875,
1264
+ "loss": 0.175,
1265
+ "rewards/accuracies": 0.8812500238418579,
1266
+ "rewards/chosen": -0.2908310890197754,
1267
+ "rewards/margins": 0.3078031539916992,
1268
+ "rewards/rejected": -0.5986341834068298,
1269
+ "step": 890
1270
+ },
1271
+ {
1272
+ "epoch": 0.96,
1273
+ "learning_rate": 2.372847616895685e-08,
1274
+ "logits/chosen": -0.2827141582965851,
1275
+ "logits/rejected": -0.24739205837249756,
1276
+ "logps/chosen": -659.048583984375,
1277
+ "logps/rejected": -1077.357177734375,
1278
+ "loss": 0.1941,
1279
+ "rewards/accuracies": 0.793749988079071,
1280
+ "rewards/chosen": -0.28590255975723267,
1281
+ "rewards/margins": 0.280953586101532,
1282
+ "rewards/rejected": -0.5668561458587646,
1283
+ "step": 900
1284
+ },
1285
+ {
1286
+ "epoch": 0.97,
1287
+ "learning_rate": 1.264490846553279e-08,
1288
+ "logits/chosen": -0.29719191789627075,
1289
+ "logits/rejected": -0.23341615498065948,
1290
+ "logps/chosen": -676.8151245117188,
1291
+ "logps/rejected": -1061.142822265625,
1292
+ "loss": 0.1835,
1293
+ "rewards/accuracies": 0.8187500238418579,
1294
+ "rewards/chosen": -0.2869683802127838,
1295
+ "rewards/margins": 0.2846153676509857,
1296
+ "rewards/rejected": -0.57158362865448,
1297
+ "step": 910
1298
+ },
1299
+ {
1300
+ "epoch": 0.98,
1301
+ "learning_rate": 5.015418611516165e-09,
1302
+ "logits/chosen": -0.22863967716693878,
1303
+ "logits/rejected": -0.20865638554096222,
1304
+ "logps/chosen": -656.0875854492188,
1305
+ "logps/rejected": -1003.3201904296875,
1306
+ "loss": 0.1869,
1307
+ "rewards/accuracies": 0.800000011920929,
1308
+ "rewards/chosen": -0.2743567228317261,
1309
+ "rewards/margins": 0.260437935590744,
1310
+ "rewards/rejected": -0.5347946286201477,
1311
+ "step": 920
1312
+ },
1313
+ {
1314
+ "epoch": 0.99,
1315
+ "learning_rate": 8.506013354186993e-10,
1316
+ "logits/chosen": -0.22728657722473145,
1317
+ "logits/rejected": -0.21279653906822205,
1318
+ "logps/chosen": -687.8726806640625,
1319
+ "logps/rejected": -1077.039306640625,
1320
+ "loss": 0.1753,
1321
+ "rewards/accuracies": 0.824999988079071,
1322
+ "rewards/chosen": -0.28496915102005005,
1323
+ "rewards/margins": 0.2890021502971649,
1324
+ "rewards/rejected": -0.5739713907241821,
1325
+ "step": 930
1326
+ },
1327
+ {
1328
+ "epoch": 1.0,
1329
+ "step": 937,
1330
+ "total_flos": 0.0,
1331
+ "train_loss": 0.08644322651809155,
1332
+ "train_runtime": 5283.9744,
1333
+ "train_samples_per_second": 5.678,
1334
+ "train_steps_per_second": 0.177
1335
+ }
1336
+ ],
1337
+ "logging_steps": 10,
1338
+ "max_steps": 937,
1339
+ "num_input_tokens_seen": 0,
1340
+ "num_train_epochs": 1,
1341
+ "save_steps": 100,
1342
+ "total_flos": 0.0,
1343
+ "train_batch_size": 4,
1344
+ "trial_name": null,
1345
+ "trial_params": null
1346
+ }
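
trainer_state.json holds the full DPO log history: loss, learning rate, and the implicit reward statistics (rewards/chosen, rewards/rejected, rewards/margins), which in TRL's DPO formulation are beta-scaled log-probability ratios of the policy against the reference model. A sketch, assuming pandas is available locally, of loading that history as a table for inspection or plotting:

```python
# Illustrative: flatten the log history into a DataFrame (assumes a local trainer_state.json).
import json

import pandas as pd

with open("trainer_state.json") as f:
    state = json.load(f)

history = pd.DataFrame(state["log_history"])
steps = history[history["loss"].notna()]     # the final summary row has no per-step loss

print(steps[["step", "loss", "rewards/accuracies", "rewards/margins"]].tail())
print("final reward margin:", steps["rewards/margins"].iloc[-1])
```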