chansung committed on
Commit
88160fb
1 Parent(s): b797f84

Model save

README.md ADDED
@@ -0,0 +1,78 @@
1
+ ---
2
+ license: gemma
3
+ library_name: peft
4
+ tags:
5
+ - trl
6
+ - sft
7
+ - generated_from_trainer
8
+ base_model: google/gemma-2b
9
+ datasets:
10
+ - generator
11
+ model-index:
12
+ - name: gemma2b-summarize-gpt4o-32k
13
+ results: []
14
+ ---
15
+
16
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
17
+ should probably proofread and complete it, then remove this comment. -->
18
+
19
+ # gemma2b-summarize-gpt4o-32k
20
+
21
+ This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
22
+ It achieves the following results on the evaluation set:
23
+ - Loss: 2.5618
24
+
25
+ ## Model description
26
+
27
+ More information needed
28
+
29
+ ## Intended uses & limitations
30
+
31
+ More information needed
32
+
33
+ ## Training and evaluation data
34
+
35
+ More information needed
36
+
37
+ ## Training procedure
38
+
39
+ ### Training hyperparameters
40
+
41
+ The following hyperparameters were used during training:
42
+ - learning_rate: 0.0002
43
+ - train_batch_size: 8
44
+ - eval_batch_size: 8
45
+ - seed: 42
46
+ - distributed_type: multi-GPU
47
+ - num_devices: 3
48
+ - gradient_accumulation_steps: 2
49
+ - total_train_batch_size: 48
50
+ - total_eval_batch_size: 24
51
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
52
+ - lr_scheduler_type: cosine
53
+ - lr_scheduler_warmup_ratio: 0.1
54
+ - num_epochs: 10
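The two "total" batch sizes above follow directly from the per-device settings: train uses 8 per device × 3 GPUs × 2 accumulation steps, while eval has no gradient accumulation. A quick arithmetic sanity check:

```python
# Per-device settings from the hyperparameter list above.
train_batch_size = 8               # per device
eval_batch_size = 8                # per device
num_devices = 3
gradient_accumulation_steps = 2

# Effective batch sizes implied by those settings.
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices  # no accumulation at eval time

print(total_train_batch_size, total_eval_batch_size)  # 48 24
```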
55
+
56
+ ### Training results
57
+
58
+ | Training Loss | Epoch | Step | Validation Loss |
59
+ |:-------------:|:-----:|:----:|:---------------:|
60
+ | 1.4358 | 1.0 | 73 | 2.5470 |
61
+ | 1.2092 | 2.0 | 146 | 2.5064 |
62
+ | 1.1501 | 3.0 | 219 | 2.5000 |
63
+ | 1.0995 | 4.0 | 292 | 2.5064 |
64
+ | 1.0809 | 5.0 | 365 | 2.5177 |
65
+ | 1.0583 | 6.0 | 438 | 2.5419 |
66
+ | 1.04 | 7.0 | 511 | 2.5488 |
67
+ | 1.0248 | 8.0 | 584 | 2.5574 |
68
+ | 1.021 | 9.0 | 657 | 2.5614 |
69
+ | 1.0179 | 10.0 | 730 | 2.5618 |
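Validation loss bottoms out at epoch 3 and then climbs while training loss keeps falling, i.e. the adapter starts to overfit after roughly three epochs. A small sketch that picks the best epoch from the table above:

```python
# (epoch, validation loss) pairs copied from the results table above.
val_losses = {1: 2.5470, 2: 2.5064, 3: 2.5000, 4: 2.5064, 5: 2.5177,
              6: 2.5419, 7: 2.5488, 8: 2.5574, 9: 2.5614, 10: 2.5618}

best_epoch = min(val_losses, key=val_losses.get)
print(best_epoch, val_losses[best_epoch])  # 3 2.5
```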
70
+
71
+
72
+ ### Framework versions
73
+
74
+ - PEFT 0.11.1
75
+ - Transformers 4.41.2
76
+ - Pytorch 2.3.0+cu121
77
+ - Datasets 2.19.2
78
+ - Tokenizers 0.19.1
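Because this is a PEFT adapter, the checkpoint is loaded on top of the `google/gemma-2b` base model rather than standalone. A minimal usage sketch, assuming the adapter is published under the Hub id `chansung/gemma2b-summarize-gpt4o-32k` (the imports are deferred so nothing is downloaded until the function is called):

```python
BASE_MODEL_ID = "google/gemma-2b"
ADAPTER_ID = "chansung/gemma2b-summarize-gpt4o-32k"  # assumed Hub repo id for this adapter

def load_summarizer():
    # Deferred imports: transformers and peft are only needed at load time.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach adapter weights
    return tokenizer, model
```

Note that the base model is gated behind the Gemma license, so `from_pretrained` requires accepting it and authenticating with the Hub first.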
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:fdfebf905de8f760478e090b7ca22dec253b047be84ee6dcc24229ecf10cb01f
3
  size 19644912
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cf020752b7d1b1382c9b71928e7ee85e7e8c494fe1f52c525d9e3eae0aa3c788
3
  size 19644912
all_results.json ADDED
@@ -0,0 +1,9 @@
1
+ {
2
+ "epoch": 10.0,
3
+ "total_flos": 4.287825372721971e+17,
4
+ "train_loss": 1.1969801510850044,
5
+ "train_runtime": 3891.9077,
6
+ "train_samples": 32305,
7
+ "train_samples_per_second": 9.003,
8
+ "train_steps_per_second": 0.188
9
+ }
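The throughput figures are internally consistent: 730 optimizer steps × the total train batch size of 48 gives 35,040 samples processed, and dividing by the runtime reproduces both reported rates. A quick check:

```python
steps = 730
total_train_batch_size = 48   # 8 per device x 3 GPUs x 2 accumulation steps
train_runtime = 3891.9077     # seconds

samples_per_second = steps * total_train_batch_size / train_runtime
steps_per_second = steps / train_runtime

print(round(samples_per_second, 3), round(steps_per_second, 3))  # 9.003 0.188
```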
runs/Jun05_14-45-19_user-HP-Z8-Fury-G5-Workstation-Desktop-PC/events.out.tfevents.1717566340.user-HP-Z8-Fury-G5-Workstation-Desktop-PC.26332.0 CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:916ed1fb4f8f73436ed0ea938fe9576bc244ca4a695a3f83c387b713b4af1703
3
- size 37678
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:43e4fcd25cd566ab9892786c2497c8b3c2721b4c747d9df5153003e068768803
3
+ size 39569
train_results.json ADDED
@@ -0,0 +1,9 @@
1
+ {
2
+ "epoch": 10.0,
3
+ "total_flos": 4.287825372721971e+17,
4
+ "train_loss": 1.1969801510850044,
5
+ "train_runtime": 3891.9077,
6
+ "train_samples": 32305,
7
+ "train_samples_per_second": 9.003,
8
+ "train_steps_per_second": 0.188
9
+ }
trainer_state.json ADDED
@@ -0,0 +1,1151 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 10.0,
5
+ "eval_steps": 500,
6
+ "global_step": 730,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0136986301369863,
13
+ "grad_norm": 3.5625,
14
+ "learning_rate": 2.7397260273972604e-06,
15
+ "loss": 3.0538,
16
+ "step": 1
17
+ },
18
+ {
19
+ "epoch": 0.0684931506849315,
20
+ "grad_norm": 2.296875,
21
+ "learning_rate": 1.3698630136986302e-05,
22
+ "loss": 3.0804,
23
+ "step": 5
24
+ },
25
+ {
26
+ "epoch": 0.136986301369863,
27
+ "grad_norm": 2.59375,
28
+ "learning_rate": 2.7397260273972603e-05,
29
+ "loss": 3.036,
30
+ "step": 10
31
+ },
32
+ {
33
+ "epoch": 0.2054794520547945,
34
+ "grad_norm": 2.1875,
35
+ "learning_rate": 4.1095890410958905e-05,
36
+ "loss": 3.0119,
37
+ "step": 15
38
+ },
39
+ {
40
+ "epoch": 0.273972602739726,
41
+ "grad_norm": 1.84375,
42
+ "learning_rate": 5.479452054794521e-05,
43
+ "loss": 2.7939,
44
+ "step": 20
45
+ },
46
+ {
47
+ "epoch": 0.3424657534246575,
48
+ "grad_norm": 4.4375,
49
+ "learning_rate": 6.84931506849315e-05,
50
+ "loss": 2.5661,
51
+ "step": 25
52
+ },
53
+ {
54
+ "epoch": 0.410958904109589,
55
+ "grad_norm": 7.25,
56
+ "learning_rate": 8.219178082191781e-05,
57
+ "loss": 2.3942,
58
+ "step": 30
59
+ },
60
+ {
61
+ "epoch": 0.4794520547945205,
62
+ "grad_norm": 2.28125,
63
+ "learning_rate": 9.58904109589041e-05,
64
+ "loss": 2.2555,
65
+ "step": 35
66
+ },
67
+ {
68
+ "epoch": 0.547945205479452,
69
+ "grad_norm": 0.94140625,
70
+ "learning_rate": 0.00010958904109589041,
71
+ "loss": 2.1131,
72
+ "step": 40
73
+ },
74
+ {
75
+ "epoch": 0.6164383561643836,
76
+ "grad_norm": 0.6875,
77
+ "learning_rate": 0.0001232876712328767,
78
+ "loss": 1.9336,
79
+ "step": 45
80
+ },
81
+ {
82
+ "epoch": 0.684931506849315,
83
+ "grad_norm": 0.462890625,
84
+ "learning_rate": 0.000136986301369863,
85
+ "loss": 1.7804,
86
+ "step": 50
87
+ },
88
+ {
89
+ "epoch": 0.7534246575342466,
90
+ "grad_norm": 0.39453125,
91
+ "learning_rate": 0.00015068493150684933,
92
+ "loss": 1.6514,
93
+ "step": 55
94
+ },
95
+ {
96
+ "epoch": 0.821917808219178,
97
+ "grad_norm": 0.4453125,
98
+ "learning_rate": 0.00016438356164383562,
99
+ "loss": 1.5682,
100
+ "step": 60
101
+ },
102
+ {
103
+ "epoch": 0.8904109589041096,
104
+ "grad_norm": 0.39453125,
105
+ "learning_rate": 0.00017808219178082192,
106
+ "loss": 1.5013,
107
+ "step": 65
108
+ },
109
+ {
110
+ "epoch": 0.958904109589041,
111
+ "grad_norm": 0.4375,
112
+ "learning_rate": 0.0001917808219178082,
113
+ "loss": 1.4358,
114
+ "step": 70
115
+ },
116
+ {
117
+ "epoch": 1.0,
118
+ "eval_loss": 2.546966075897217,
119
+ "eval_runtime": 0.552,
120
+ "eval_samples_per_second": 18.116,
121
+ "eval_steps_per_second": 1.812,
122
+ "step": 73
123
+ },
124
+ {
125
+ "epoch": 1.0273972602739727,
126
+ "grad_norm": 0.2412109375,
127
+ "learning_rate": 0.00019999542705801296,
128
+ "loss": 1.3872,
129
+ "step": 75
130
+ },
131
+ {
132
+ "epoch": 1.095890410958904,
133
+ "grad_norm": 0.326171875,
134
+ "learning_rate": 0.00019994398626371643,
135
+ "loss": 1.3485,
136
+ "step": 80
137
+ },
138
+ {
139
+ "epoch": 1.1643835616438356,
140
+ "grad_norm": 0.478515625,
141
+ "learning_rate": 0.0001998354179989585,
142
+ "loss": 1.3316,
143
+ "step": 85
144
+ },
145
+ {
146
+ "epoch": 1.2328767123287672,
147
+ "grad_norm": 1.1015625,
148
+ "learning_rate": 0.00019966978432080316,
149
+ "loss": 1.3049,
150
+ "step": 90
151
+ },
152
+ {
153
+ "epoch": 1.3013698630136985,
154
+ "grad_norm": 0.458984375,
155
+ "learning_rate": 0.00019944717990461207,
156
+ "loss": 1.2894,
157
+ "step": 95
158
+ },
159
+ {
160
+ "epoch": 1.36986301369863,
161
+ "grad_norm": 0.427734375,
162
+ "learning_rate": 0.000199167731989929,
163
+ "loss": 1.2703,
164
+ "step": 100
165
+ },
166
+ {
167
+ "epoch": 1.4383561643835616,
168
+ "grad_norm": 0.2890625,
169
+ "learning_rate": 0.00019883160030775016,
170
+ "loss": 1.2547,
171
+ "step": 105
172
+ },
173
+ {
174
+ "epoch": 1.5068493150684932,
175
+ "grad_norm": 0.291015625,
176
+ "learning_rate": 0.00019843897698922284,
177
+ "loss": 1.2613,
178
+ "step": 110
179
+ },
180
+ {
181
+ "epoch": 1.5753424657534247,
182
+ "grad_norm": 0.2734375,
183
+ "learning_rate": 0.0001979900864558242,
184
+ "loss": 1.2388,
185
+ "step": 115
186
+ },
187
+ {
188
+ "epoch": 1.643835616438356,
189
+ "grad_norm": 0.275390625,
190
+ "learning_rate": 0.00019748518529108316,
191
+ "loss": 1.2528,
192
+ "step": 120
193
+ },
194
+ {
195
+ "epoch": 1.7123287671232876,
196
+ "grad_norm": 0.419921875,
197
+ "learning_rate": 0.00019692456209391846,
198
+ "loss": 1.2373,
199
+ "step": 125
200
+ },
201
+ {
202
+ "epoch": 1.7808219178082192,
203
+ "grad_norm": 0.8046875,
204
+ "learning_rate": 0.00019630853731367713,
205
+ "loss": 1.2241,
206
+ "step": 130
207
+ },
208
+ {
209
+ "epoch": 1.8493150684931505,
210
+ "grad_norm": 0.33984375,
211
+ "learning_rate": 0.0001956374630669672,
212
+ "loss": 1.2263,
213
+ "step": 135
214
+ },
215
+ {
216
+ "epoch": 1.9178082191780823,
217
+ "grad_norm": 0.287109375,
218
+ "learning_rate": 0.00019491172293638968,
219
+ "loss": 1.212,
220
+ "step": 140
221
+ },
222
+ {
223
+ "epoch": 1.9863013698630136,
224
+ "grad_norm": 0.2734375,
225
+ "learning_rate": 0.00019413173175128473,
226
+ "loss": 1.2092,
227
+ "step": 145
228
+ },
229
+ {
230
+ "epoch": 2.0,
231
+ "eval_loss": 2.5063788890838623,
232
+ "eval_runtime": 0.5512,
233
+ "eval_samples_per_second": 18.144,
234
+ "eval_steps_per_second": 1.814,
235
+ "step": 146
236
+ },
237
+ {
238
+ "epoch": 2.0547945205479454,
239
+ "grad_norm": 0.380859375,
240
+ "learning_rate": 0.00019329793535061723,
241
+ "loss": 1.1791,
242
+ "step": 150
243
+ },
244
+ {
245
+ "epoch": 2.1232876712328768,
246
+ "grad_norm": 0.458984375,
247
+ "learning_rate": 0.00019241081032813772,
248
+ "loss": 1.1745,
249
+ "step": 155
250
+ },
251
+ {
252
+ "epoch": 2.191780821917808,
253
+ "grad_norm": 0.380859375,
254
+ "learning_rate": 0.0001914708637599636,
255
+ "loss": 1.1783,
256
+ "step": 160
257
+ },
258
+ {
259
+ "epoch": 2.26027397260274,
260
+ "grad_norm": 0.2734375,
261
+ "learning_rate": 0.00019047863291473717,
262
+ "loss": 1.156,
263
+ "step": 165
264
+ },
265
+ {
266
+ "epoch": 2.328767123287671,
267
+ "grad_norm": 0.5625,
268
+ "learning_rate": 0.0001894346849465257,
269
+ "loss": 1.1642,
270
+ "step": 170
271
+ },
272
+ {
273
+ "epoch": 2.3972602739726026,
274
+ "grad_norm": 0.65625,
275
+ "learning_rate": 0.00018833961657063885,
276
+ "loss": 1.176,
277
+ "step": 175
278
+ },
279
+ {
280
+ "epoch": 2.4657534246575343,
281
+ "grad_norm": 0.640625,
282
+ "learning_rate": 0.00018719405372254948,
283
+ "loss": 1.16,
284
+ "step": 180
285
+ },
286
+ {
287
+ "epoch": 2.5342465753424657,
288
+ "grad_norm": 0.400390625,
289
+ "learning_rate": 0.00018599865120011192,
290
+ "loss": 1.1653,
291
+ "step": 185
292
+ },
293
+ {
294
+ "epoch": 2.602739726027397,
295
+ "grad_norm": 0.3125,
296
+ "learning_rate": 0.00018475409228928312,
297
+ "loss": 1.1616,
298
+ "step": 190
299
+ },
300
+ {
301
+ "epoch": 2.671232876712329,
302
+ "grad_norm": 0.6796875,
303
+ "learning_rate": 0.00018346108837355972,
304
+ "loss": 1.1549,
305
+ "step": 195
306
+ },
307
+ {
308
+ "epoch": 2.73972602739726,
309
+ "grad_norm": 0.54296875,
310
+ "learning_rate": 0.00018212037852735486,
311
+ "loss": 1.1518,
312
+ "step": 200
313
+ },
314
+ {
315
+ "epoch": 2.808219178082192,
316
+ "grad_norm": 0.52734375,
317
+ "learning_rate": 0.00018073272909354727,
318
+ "loss": 1.1516,
319
+ "step": 205
320
+ },
321
+ {
322
+ "epoch": 2.8767123287671232,
323
+ "grad_norm": 0.375,
324
+ "learning_rate": 0.00017929893324544332,
325
+ "loss": 1.154,
326
+ "step": 210
327
+ },
328
+ {
329
+ "epoch": 2.9452054794520546,
330
+ "grad_norm": 0.44921875,
331
+ "learning_rate": 0.00017781981053340337,
332
+ "loss": 1.1501,
333
+ "step": 215
334
+ },
335
+ {
336
+ "epoch": 3.0,
337
+ "eval_loss": 2.4999959468841553,
338
+ "eval_runtime": 0.5406,
339
+ "eval_samples_per_second": 18.499,
340
+ "eval_steps_per_second": 1.85,
341
+ "step": 219
342
+ },
343
+ {
344
+ "epoch": 3.0136986301369864,
345
+ "grad_norm": 0.353515625,
346
+ "learning_rate": 0.00017629620641639103,
347
+ "loss": 1.1368,
348
+ "step": 220
349
+ },
350
+ {
351
+ "epoch": 3.0821917808219177,
352
+ "grad_norm": 0.337890625,
353
+ "learning_rate": 0.00017472899177871297,
354
+ "loss": 1.118,
355
+ "step": 225
356
+ },
357
+ {
358
+ "epoch": 3.1506849315068495,
359
+ "grad_norm": 0.353515625,
360
+ "learning_rate": 0.00017311906243222614,
361
+ "loss": 1.1122,
362
+ "step": 230
363
+ },
364
+ {
365
+ "epoch": 3.219178082191781,
366
+ "grad_norm": 0.34765625,
367
+ "learning_rate": 0.00017146733860429612,
368
+ "loss": 1.1205,
369
+ "step": 235
370
+ },
371
+ {
372
+ "epoch": 3.287671232876712,
373
+ "grad_norm": 0.5625,
374
+ "learning_rate": 0.00016977476441179992,
375
+ "loss": 1.1197,
376
+ "step": 240
377
+ },
378
+ {
379
+ "epoch": 3.356164383561644,
380
+ "grad_norm": 0.796875,
381
+ "learning_rate": 0.0001680423073214737,
382
+ "loss": 1.1155,
383
+ "step": 245
384
+ },
385
+ {
386
+ "epoch": 3.4246575342465753,
387
+ "grad_norm": 0.47265625,
388
+ "learning_rate": 0.00016627095759691362,
389
+ "loss": 1.1255,
390
+ "step": 250
391
+ },
392
+ {
393
+ "epoch": 3.493150684931507,
394
+ "grad_norm": 0.451171875,
395
+ "learning_rate": 0.00016446172773254629,
396
+ "loss": 1.1113,
397
+ "step": 255
398
+ },
399
+ {
400
+ "epoch": 3.5616438356164384,
401
+ "grad_norm": 0.486328125,
402
+ "learning_rate": 0.0001626156518748922,
403
+ "loss": 1.1073,
404
+ "step": 260
405
+ },
406
+ {
407
+ "epoch": 3.6301369863013697,
408
+ "grad_norm": 1.046875,
409
+ "learning_rate": 0.0001607337852314527,
410
+ "loss": 1.1155,
411
+ "step": 265
412
+ },
413
+ {
414
+ "epoch": 3.6986301369863015,
415
+ "grad_norm": 0.6171875,
416
+ "learning_rate": 0.00015881720346755905,
417
+ "loss": 1.1239,
418
+ "step": 270
419
+ },
420
+ {
421
+ "epoch": 3.767123287671233,
422
+ "grad_norm": 0.416015625,
423
+ "learning_rate": 0.00015686700209152738,
424
+ "loss": 1.1185,
425
+ "step": 275
426
+ },
427
+ {
428
+ "epoch": 3.8356164383561646,
429
+ "grad_norm": 0.380859375,
430
+ "learning_rate": 0.00015488429582847192,
431
+ "loss": 1.1125,
432
+ "step": 280
433
+ },
434
+ {
435
+ "epoch": 3.904109589041096,
436
+ "grad_norm": 0.51953125,
437
+ "learning_rate": 0.0001528702179831338,
438
+ "loss": 1.0925,
439
+ "step": 285
440
+ },
441
+ {
442
+ "epoch": 3.9726027397260273,
443
+ "grad_norm": 0.3671875,
444
+ "learning_rate": 0.00015082591979208976,
445
+ "loss": 1.0995,
446
+ "step": 290
447
+ },
448
+ {
449
+ "epoch": 4.0,
450
+ "eval_loss": 2.506415843963623,
451
+ "eval_runtime": 0.5555,
452
+ "eval_samples_per_second": 18.001,
453
+ "eval_steps_per_second": 1.8,
454
+ "step": 292
455
+ },
456
+ {
457
+ "epoch": 4.041095890410959,
458
+ "grad_norm": 0.515625,
459
+ "learning_rate": 0.00014875256976571135,
460
+ "loss": 1.0993,
461
+ "step": 295
462
+ },
463
+ {
464
+ "epoch": 4.109589041095891,
465
+ "grad_norm": 0.375,
466
+ "learning_rate": 0.00014665135302025035,
467
+ "loss": 1.0789,
468
+ "step": 300
469
+ },
470
+ {
471
+ "epoch": 4.178082191780822,
472
+ "grad_norm": 0.3984375,
473
+ "learning_rate": 0.00014452347060043237,
474
+ "loss": 1.0724,
475
+ "step": 305
476
+ },
477
+ {
478
+ "epoch": 4.2465753424657535,
479
+ "grad_norm": 0.447265625,
480
+ "learning_rate": 0.0001423701387929459,
481
+ "loss": 1.0874,
482
+ "step": 310
483
+ },
484
+ {
485
+ "epoch": 4.315068493150685,
486
+ "grad_norm": 0.357421875,
487
+ "learning_rate": 0.00014019258843121893,
488
+ "loss": 1.0913,
489
+ "step": 315
490
+ },
491
+ {
492
+ "epoch": 4.383561643835616,
493
+ "grad_norm": 0.35546875,
494
+ "learning_rate": 0.00013799206419188103,
495
+ "loss": 1.085,
496
+ "step": 320
497
+ },
498
+ {
499
+ "epoch": 4.4520547945205475,
500
+ "grad_norm": 0.46875,
501
+ "learning_rate": 0.0001357698238833126,
502
+ "loss": 1.0733,
503
+ "step": 325
504
+ },
505
+ {
506
+ "epoch": 4.52054794520548,
507
+ "grad_norm": 0.44140625,
508
+ "learning_rate": 0.00013352713772668765,
509
+ "loss": 1.0799,
510
+ "step": 330
511
+ },
512
+ {
513
+ "epoch": 4.589041095890411,
514
+ "grad_norm": 0.412109375,
515
+ "learning_rate": 0.00013126528762992247,
516
+ "loss": 1.0774,
517
+ "step": 335
518
+ },
519
+ {
520
+ "epoch": 4.657534246575342,
521
+ "grad_norm": 0.34375,
522
+ "learning_rate": 0.00012898556645494325,
523
+ "loss": 1.0742,
524
+ "step": 340
525
+ },
526
+ {
527
+ "epoch": 4.726027397260274,
528
+ "grad_norm": 0.408203125,
529
+ "learning_rate": 0.0001266892772786929,
530
+ "loss": 1.0806,
531
+ "step": 345
532
+ },
533
+ {
534
+ "epoch": 4.794520547945205,
535
+ "grad_norm": 0.38671875,
536
+ "learning_rate": 0.00012437773264829897,
537
+ "loss": 1.0892,
538
+ "step": 350
539
+ },
540
+ {
541
+ "epoch": 4.863013698630137,
542
+ "grad_norm": 0.482421875,
543
+ "learning_rate": 0.00012205225383082843,
544
+ "loss": 1.0805,
545
+ "step": 355
546
+ },
547
+ {
548
+ "epoch": 4.931506849315069,
549
+ "grad_norm": 0.3828125,
550
+ "learning_rate": 0.00011971417005805818,
551
+ "loss": 1.0786,
552
+ "step": 360
553
+ },
554
+ {
555
+ "epoch": 5.0,
556
+ "grad_norm": 0.37890625,
557
+ "learning_rate": 0.00011736481776669306,
558
+ "loss": 1.0809,
559
+ "step": 365
560
+ },
561
+ {
562
+ "epoch": 5.0,
563
+ "eval_loss": 2.5177359580993652,
564
+ "eval_runtime": 0.545,
565
+ "eval_samples_per_second": 18.35,
566
+ "eval_steps_per_second": 1.835,
567
+ "step": 365
568
+ },
569
+ {
570
+ "epoch": 5.068493150684931,
571
+ "grad_norm": 0.388671875,
572
+ "learning_rate": 0.00011500553983446527,
573
+ "loss": 1.0503,
574
+ "step": 370
575
+ },
576
+ {
577
+ "epoch": 5.136986301369863,
578
+ "grad_norm": 0.38671875,
579
+ "learning_rate": 0.00011263768481255264,
580
+ "loss": 1.0578,
581
+ "step": 375
582
+ },
583
+ {
584
+ "epoch": 5.205479452054795,
585
+ "grad_norm": 0.41015625,
586
+ "learning_rate": 0.00011026260615475333,
587
+ "loss": 1.0601,
588
+ "step": 380
589
+ },
590
+ {
591
+ "epoch": 5.273972602739726,
592
+ "grad_norm": 0.40625,
593
+ "learning_rate": 0.00010788166144385888,
594
+ "loss": 1.0519,
595
+ "step": 385
596
+ },
597
+ {
598
+ "epoch": 5.342465753424658,
599
+ "grad_norm": 0.515625,
600
+ "learning_rate": 0.0001054962116156667,
601
+ "loss": 1.0555,
602
+ "step": 390
603
+ },
604
+ {
605
+ "epoch": 5.410958904109589,
606
+ "grad_norm": 0.56640625,
607
+ "learning_rate": 0.0001031076201810762,
608
+ "loss": 1.0673,
609
+ "step": 395
610
+ },
611
+ {
612
+ "epoch": 5.47945205479452,
613
+ "grad_norm": 0.48046875,
614
+ "learning_rate": 0.00010071725244671282,
615
+ "loss": 1.0495,
616
+ "step": 400
617
+ },
618
+ {
619
+ "epoch": 5.5479452054794525,
620
+ "grad_norm": 0.478515625,
621
+ "learning_rate": 9.83264747345259e-05,
622
+ "loss": 1.054,
623
+ "step": 405
624
+ },
625
+ {
626
+ "epoch": 5.616438356164384,
627
+ "grad_norm": 0.408203125,
628
+ "learning_rate": 9.593665360080599e-05,
629
+ "loss": 1.0631,
630
+ "step": 410
631
+ },
632
+ {
633
+ "epoch": 5.684931506849315,
634
+ "grad_norm": 0.376953125,
635
+ "learning_rate": 9.354915505506839e-05,
636
+ "loss": 1.0599,
637
+ "step": 415
638
+ },
639
+ {
640
+ "epoch": 5.7534246575342465,
641
+ "grad_norm": 0.373046875,
642
+ "learning_rate": 9.116534377924883e-05,
643
+ "loss": 1.0554,
644
+ "step": 420
645
+ },
646
+ {
647
+ "epoch": 5.821917808219178,
648
+ "grad_norm": 0.40625,
649
+ "learning_rate": 8.878658234765858e-05,
650
+ "loss": 1.0503,
651
+ "step": 425
652
+ },
653
+ {
654
+ "epoch": 5.890410958904109,
655
+ "grad_norm": 0.39453125,
656
+ "learning_rate": 8.641423044814374e-05,
657
+ "loss": 1.0632,
658
+ "step": 430
659
+ },
660
+ {
661
+ "epoch": 5.958904109589041,
662
+ "grad_norm": 0.431640625,
663
+ "learning_rate": 8.404964410489485e-05,
664
+ "loss": 1.0583,
665
+ "step": 435
666
+ },
667
+ {
668
+ "epoch": 6.0,
669
+ "eval_loss": 2.541933536529541,
670
+ "eval_runtime": 0.5507,
671
+ "eval_samples_per_second": 18.16,
672
+ "eval_steps_per_second": 1.816,
673
+ "step": 438
674
+ },
675
+ {
676
+ "epoch": 6.027397260273973,
677
+ "grad_norm": 0.3828125,
678
+ "learning_rate": 8.169417490335007e-05,
679
+ "loss": 1.0504,
680
+ "step": 440
681
+ },
682
+ {
683
+ "epoch": 6.095890410958904,
684
+ "grad_norm": 0.38671875,
685
+ "learning_rate": 7.934916921763628e-05,
686
+ "loss": 1.0301,
687
+ "step": 445
688
+ },
689
+ {
690
+ "epoch": 6.164383561643835,
691
+ "grad_norm": 0.3828125,
692
+ "learning_rate": 7.701596744098818e-05,
693
+ "loss": 1.0358,
694
+ "step": 450
695
+ },
696
+ {
697
+ "epoch": 6.232876712328767,
698
+ "grad_norm": 0.40625,
699
+ "learning_rate": 7.469590321958662e-05,
700
+ "loss": 1.0316,
701
+ "step": 455
702
+ },
703
+ {
704
+ "epoch": 6.301369863013699,
705
+ "grad_norm": 0.59765625,
706
+ "learning_rate": 7.239030269025311e-05,
707
+ "loss": 1.0415,
708
+ "step": 460
709
+ },
710
+ {
711
+ "epoch": 6.36986301369863,
712
+ "grad_norm": 0.482421875,
713
+ "learning_rate": 7.010048372243698e-05,
714
+ "loss": 1.0336,
715
+ "step": 465
716
+ },
717
+ {
718
+ "epoch": 6.438356164383562,
719
+ "grad_norm": 0.40625,
720
+ "learning_rate": 6.782775516492771e-05,
721
+ "loss": 1.0496,
722
+ "step": 470
723
+ },
724
+ {
725
+ "epoch": 6.506849315068493,
726
+ "grad_norm": 0.38671875,
727
+ "learning_rate": 6.5573416097724e-05,
728
+ "loss": 1.04,
729
+ "step": 475
730
+ },
731
+ {
732
+ "epoch": 6.575342465753424,
733
+ "grad_norm": 0.46875,
734
+ "learning_rate": 6.333875508948593e-05,
735
+ "loss": 1.0461,
736
+ "step": 480
737
+ },
738
+ {
739
+ "epoch": 6.6438356164383565,
740
+ "grad_norm": 0.4296875,
741
+ "learning_rate": 6.112504946099604e-05,
742
+ "loss": 1.0439,
743
+ "step": 485
744
+ },
745
+ {
746
+ "epoch": 6.712328767123288,
747
+ "grad_norm": 0.42578125,
748
+ "learning_rate": 5.8933564555049105e-05,
749
+ "loss": 1.0358,
750
+ "step": 490
751
+ },
752
+ {
753
+ "epoch": 6.780821917808219,
754
+ "grad_norm": 0.5234375,
755
+ "learning_rate": 5.6765553013188766e-05,
756
+ "loss": 1.045,
757
+ "step": 495
758
+ },
759
+ {
760
+ "epoch": 6.8493150684931505,
761
+ "grad_norm": 0.365234375,
762
+ "learning_rate": 5.462225405970401e-05,
763
+ "loss": 1.0329,
764
+ "step": 500
765
+ },
766
+ {
767
+ "epoch": 6.917808219178082,
768
+ "grad_norm": 0.388671875,
769
+ "learning_rate": 5.2504892793295e-05,
770
+ "loss": 1.0442,
771
+ "step": 505
772
+ },
773
+ {
774
+ "epoch": 6.986301369863014,
775
+ "grad_norm": 0.37109375,
776
+ "learning_rate": 5.041467948681269e-05,
777
+ "loss": 1.04,
778
+ "step": 510
779
+ },
780
+ {
781
+ "epoch": 7.0,
782
+ "eval_loss": 2.5487561225891113,
783
+ "eval_runtime": 0.5536,
784
+ "eval_samples_per_second": 18.064,
785
+ "eval_steps_per_second": 1.806,
786
+ "step": 511
787
+ },
788
+ {
789
+ "epoch": 7.054794520547945,
790
+ "grad_norm": 0.384765625,
791
+ "learning_rate": 4.835280889547351e-05,
792
+ "loss": 1.0298,
793
+ "step": 515
794
+ },
795
+ {
796
+ "epoch": 7.123287671232877,
797
+ "grad_norm": 0.376953125,
798
+ "learning_rate": 4.6320459573942856e-05,
799
+ "loss": 1.0213,
800
+ "step": 520
801
+ },
802
+ {
803
+ "epoch": 7.191780821917808,
804
+ "grad_norm": 0.390625,
805
+ "learning_rate": 4.431879320267972e-05,
806
+ "loss": 1.0334,
807
+ "step": 525
808
+ },
809
+ {
810
+ "epoch": 7.260273972602739,
811
+ "grad_norm": 0.390625,
812
+ "learning_rate": 4.2348953923925916e-05,
813
+ "loss": 1.0208,
814
+ "step": 530
815
+ },
816
+ {
817
+ "epoch": 7.328767123287671,
818
+ "grad_norm": 0.365234375,
819
+ "learning_rate": 4.041206768772022e-05,
820
+ "loss": 1.0323,
821
+ "step": 535
822
+ },
823
+ {
824
+ "epoch": 7.397260273972603,
825
+ "grad_norm": 0.41015625,
826
+ "learning_rate": 3.850924160831115e-05,
827
+ "loss": 1.0295,
828
+ "step": 540
829
+ },
830
+ {
831
+ "epoch": 7.465753424657534,
832
+ "grad_norm": 0.373046875,
833
+ "learning_rate": 3.6641563331336125e-05,
834
+ "loss": 1.0266,
835
+ "step": 545
836
+ },
837
+ {
838
+ "epoch": 7.534246575342466,
839
+ "grad_norm": 0.3828125,
840
+ "learning_rate": 3.4810100412128747e-05,
841
+ "loss": 1.0218,
842
+ "step": 550
843
+ },
844
+ {
845
+ "epoch": 7.602739726027397,
846
+ "grad_norm": 0.375,
847
+ "learning_rate": 3.3015899705509734e-05,
848
+ "loss": 1.0369,
849
+ "step": 555
850
+ },
851
+ {
852
+ "epoch": 7.671232876712329,
853
+ "grad_norm": 0.40234375,
854
+ "learning_rate": 3.125998676740987e-05,
855
+ "loss": 1.0308,
856
+ "step": 560
857
+ },
858
+ {
859
+ "epoch": 7.739726027397261,
860
+ "grad_norm": 0.3671875,
861
+ "learning_rate": 2.9543365268667867e-05,
862
+ "loss": 1.0225,
863
+ "step": 565
864
+ },
865
+ {
866
+ "epoch": 7.808219178082192,
867
+ "grad_norm": 0.36328125,
868
+ "learning_rate": 2.7867016421336776e-05,
869
+ "loss": 1.0236,
870
+ "step": 570
871
+ },
872
+ {
873
+ "epoch": 7.876712328767123,
874
+ "grad_norm": 0.3828125,
875
+ "learning_rate": 2.6231898417828603e-05,
876
+ "loss": 1.0272,
877
+ "step": 575
878
+ },
879
+ {
880
+ "epoch": 7.945205479452055,
881
+ "grad_norm": 0.37890625,
882
+ "learning_rate": 2.4638945883216235e-05,
883
+ "loss": 1.0248,
884
+ "step": 580
885
+ },
886
+ {
887
+ "epoch": 8.0,
888
+ "eval_loss": 2.557366132736206,
889
+ "eval_runtime": 0.554,
890
+ "eval_samples_per_second": 18.051,
891
+ "eval_steps_per_second": 1.805,
892
+ "step": 584
893
+ },
894
+ {
895
+ "epoch": 8.013698630136986,
896
+ "grad_norm": 0.365234375,
897
+ "learning_rate": 2.3089069341006565e-05,
898
+ "loss": 1.0197,
899
+ "step": 585
900
+ },
901
+ {
902
+ "epoch": 8.082191780821917,
903
+ "grad_norm": 0.359375,
904
+ "learning_rate": 2.1583154692689976e-05,
905
+ "loss": 1.0234,
906
+ "step": 590
907
+ },
908
+ {
909
+ "epoch": 8.150684931506849,
910
+ "grad_norm": 0.435546875,
911
+ "learning_rate": 2.0122062711363532e-05,
912
+ "loss": 1.0212,
913
+ "step": 595
914
+ },
915
+ {
916
+ "epoch": 8.219178082191782,
917
+ "grad_norm": 0.357421875,
918
+ "learning_rate": 1.8706628549717452e-05,
919
+ "loss": 1.0168,
920
+ "step": 600
921
+ },
922
+ {
923
+ "epoch": 8.287671232876713,
924
+ "grad_norm": 0.37109375,
925
+ "learning_rate": 1.7337661262666294e-05,
926
+ "loss": 1.0172,
927
+ "step": 605
928
+ },
929
+ {
930
+ "epoch": 8.356164383561644,
931
+ "grad_norm": 0.373046875,
932
+ "learning_rate": 1.601594334489702e-05,
933
+ "loss": 1.0167,
934
+ "step": 610
935
+ },
936
+ {
937
+ "epoch": 8.424657534246576,
938
+ "grad_norm": 0.3828125,
939
+ "learning_rate": 1.474223028359939e-05,
940
+ "loss": 1.0271,
941
+ "step": 615
942
+ },
943
+ {
944
+ "epoch": 8.493150684931507,
945
+ "grad_norm": 0.369140625,
946
+ "learning_rate": 1.3517250126632986e-05,
947
+ "loss": 1.0233,
948
+ "step": 620
949
+ },
950
+ {
951
+ "epoch": 8.561643835616438,
952
+ "grad_norm": 0.3671875,
953
+ "learning_rate": 1.2341703066379074e-05,
954
+ "loss": 1.0209,
955
+ "step": 625
956
+ },
957
+ {
958
+ "epoch": 8.63013698630137,
959
+ "grad_norm": 0.376953125,
960
+ "learning_rate": 1.1216261039514087e-05,
961
+ "loss": 1.0143,
962
+ "step": 630
963
+ },
964
+ {
965
+ "epoch": 8.698630136986301,
966
+ "grad_norm": 0.380859375,
967
+ "learning_rate": 1.0141567342934132e-05,
968
+ "loss": 1.0231,
969
+ "step": 635
970
+ },
971
+ {
972
+ "epoch": 8.767123287671232,
973
+ "grad_norm": 0.35546875,
974
+ "learning_rate": 9.118236266049707e-06,
975
+ "loss": 1.014,
976
+ "step": 640
977
+ },
978
+ {
979
+ "epoch": 8.835616438356164,
980
+ "grad_norm": 0.375,
981
+ "learning_rate": 8.146852739661105e-06,
982
+ "loss": 1.016,
983
+ "step": 645
984
+ },
985
+ {
986
+ "epoch": 8.904109589041095,
987
+ "grad_norm": 0.373046875,
988
+ "learning_rate": 7.2279720016148244e-06,
989
+ "loss": 1.0274,
990
+ "step": 650
991
+ },
992
+ {
993
+ "epoch": 8.972602739726028,
994
+ "grad_norm": 0.392578125,
995
+ "learning_rate": 6.36211927943271e-06,
996
+ "loss": 1.021,
997
+ "step": 655
998
+ },
999
+ {
1000
+ "epoch": 9.0,
1001
+ "eval_loss": 2.5613903999328613,
1002
+ "eval_runtime": 0.5421,
1003
+ "eval_samples_per_second": 18.447,
1004
+ "eval_steps_per_second": 1.845,
1005
+ "step": 657
1006
+ },
1007
+ {
1008
+ "epoch": 9.04109589041096,
1009
+ "grad_norm": 0.361328125,
1010
+ "learning_rate": 5.549789490094304e-06,
1011
+ "loss": 1.0166,
1012
+ "step": 660
1013
+ },
1014
+ {
1015
+ "epoch": 9.10958904109589,
1016
+ "grad_norm": 0.365234375,
1017
+ "learning_rate": 4.79144695714504e-06,
1018
+ "loss": 1.0251,
1019
+ "step": 665
1020
+ },
1021
+ {
1022
+ "epoch": 9.178082191780822,
1023
+ "grad_norm": 0.390625,
1024
+ "learning_rate": 4.087525145291204e-06,
1025
+ "loss": 1.0213,
1026
+ "step": 670
1027
+ },
1028
+ {
1029
+ "epoch": 9.246575342465754,
1030
+ "grad_norm": 0.404296875,
1031
+ "learning_rate": 3.4384264126337328e-06,
1032
+ "loss": 1.0181,
1033
+ "step": 675
1034
+ },
1035
+ {
1036
+ "epoch": 9.315068493150685,
1037
+ "grad_norm": 0.357421875,
1038
+ "learning_rate": 2.8445217806824077e-06,
1039
+ "loss": 1.0202,
1040
+ "step": 680
1041
+ },
1042
+ {
1043
+ "epoch": 9.383561643835616,
1044
+ "grad_norm": 0.361328125,
1045
+ "learning_rate": 2.30615072228183e-06,
1046
+ "loss": 1.0139,
1047
+ "step": 685
1048
+ },
1049
+ {
1050
+ "epoch": 9.452054794520548,
1051
+ "grad_norm": 0.359375,
1052
+ "learning_rate": 1.8236209675705274e-06,
1053
+ "loss": 1.0292,
1054
+ "step": 690
1055
+ },
1056
+ {
1057
+ "epoch": 9.520547945205479,
1058
+ "grad_norm": 0.36328125,
1059
+ "learning_rate": 1.397208328083921e-06,
1060
+ "loss": 1.016,
1061
+ "step": 695
1062
+ },
1063
+ {
1064
+ "epoch": 9.58904109589041,
1065
+ "grad_norm": 0.35546875,
1066
+ "learning_rate": 1.0271565391018922e-06,
1067
+ "loss": 1.0206,
1068
+ "step": 700
1069
+ },
1070
+ {
1071
+ "epoch": 9.657534246575342,
1072
+ "grad_norm": 0.369140625,
1073
+ "learning_rate": 7.136771203310245e-07,
1074
+ "loss": 1.0181,
1075
+ "step": 705
1076
+ },
1077
+ {
1078
+ "epoch": 9.726027397260275,
1079
+ "grad_norm": 0.359375,
1080
+ "learning_rate": 4.569492550008603e-07,
1081
+ "loss": 1.0123,
1082
+ "step": 710
1083
+ },
1084
+ {
1085
+ "epoch": 9.794520547945206,
1086
+ "grad_norm": 0.3515625,
1087
+ "learning_rate": 2.5711968744382974e-07,
1088
+ "loss": 1.0101,
1089
+ "step": 715
1090
+ },
1091
+ {
1092
+ "epoch": 9.863013698630137,
1093
+ "grad_norm": 0.3671875,
1094
+ "learning_rate": 1.143026392168789e-07,
1095
+ "loss": 1.0103,
1096
+ "step": 720
1097
+ },
1098
+ {
1099
+ "epoch": 9.931506849315069,
1100
+ "grad_norm": 0.365234375,
1101
+ "learning_rate": 2.8579743813006432e-08,
1102
+ "loss": 1.0282,
1103
+ "step": 725
1104
+ },
1105
+ {
1106
+ "epoch": 10.0,
1107
+ "grad_norm": 0.36328125,
1108
+ "learning_rate": 0.0,
1109
+ "loss": 1.0179,
1110
+ "step": 730
1111
+ },
1112
+ {
1113
+ "epoch": 10.0,
1114
+ "eval_loss": 2.5617620944976807,
1115
+ "eval_runtime": 0.5453,
1116
+ "eval_samples_per_second": 18.339,
1117
+ "eval_steps_per_second": 1.834,
1118
+ "step": 730
1119
+ },
1120
+ {
1121
+ "epoch": 10.0,
1122
+ "step": 730,
1123
+ "total_flos": 4.287825372721971e+17,
1124
+ "train_loss": 1.1969801510850044,
1125
+ "train_runtime": 3891.9077,
1126
+ "train_samples_per_second": 9.003,
1127
+ "train_steps_per_second": 0.188
1128
+ }
1129
+ ],
1130
+ "logging_steps": 5,
1131
+ "max_steps": 730,
1132
+ "num_input_tokens_seen": 0,
1133
+ "num_train_epochs": 10,
1134
+ "save_steps": 100,
1135
+ "stateful_callbacks": {
1136
+ "TrainerControl": {
1137
+ "args": {
1138
+ "should_epoch_stop": false,
1139
+ "should_evaluate": false,
1140
+ "should_log": false,
1141
+ "should_save": true,
1142
+ "should_training_stop": false
1143
+ },
1144
+ "attributes": {}
1145
+ }
1146
+ },
1147
+ "total_flos": 4.287825372721971e+17,
1148
+ "train_batch_size": 8,
1149
+ "trial_name": null,
1150
+ "trial_params": null
1151
+ }
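The logged learning rates follow linear warmup over the first 73 steps (warmup_ratio 0.1 × 730 steps) and then a half-cosine decay to zero, matching the behavior of transformers' `get_cosine_schedule_with_warmup`. A sketch, written from that assumption, that reproduces the values in `log_history`:

```python
import math

PEAK_LR = 2e-4
TOTAL_STEPS = 730
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)  # 73

def lr_at(step):
    """Learning rate at an optimizer step: linear warmup, then cosine decay."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1 + math.cos(math.pi * progress))

print(lr_at(1))    # ~2.74e-06, cf. the first log entry
print(lr_at(730))  # 0.0 at the final step, cf. the last training entry
```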