rshrott committed
Commit 4ed4a4a
1 Parent(s): b182a1d

🍻 cheers

README.md CHANGED
@@ -2,6 +2,7 @@
 license: apache-2.0
 base_model: google/vit-base-patch16-224-in21k
 tags:
+- image-classification
 - generated_from_trainer
 metrics:
 - accuracy
@@ -15,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # colab20240326ryan
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.8880
-- Accuracy: 0.6617
+- Loss: 0.7993
+- Accuracy: 0.6763
 
 ## Model description
 
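The updated card describes an image-classification checkpoint fine-tuned from ViT. As a minimal usage sketch (assuming the checkpoint is published under a repo id such as `rshrott/colab20240326ryan`; substitute the actual model id):

```python
# Minimal sketch: classify one image with the fine-tuned ViT checkpoint.
# The repo id below is an assumption; replace it with the actual model id.
from transformers import pipeline

classifier = pipeline("image-classification", model="rshrott/colab20240326ryan")

# Accepts a local file path, a URL, or a PIL.Image.
predictions = classifier("example.jpg")
print(predictions)  # e.g. [{"label": ..., "score": ...}, ...]
```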
all_results.json ADDED
@@ -0,0 +1,13 @@
+{
+    "epoch": 1.16,
+    "eval_accuracy": 0.6763005780346821,
+    "eval_loss": 0.7993046045303345,
+    "eval_runtime": 131.2807,
+    "eval_samples_per_second": 30.309,
+    "eval_steps_per_second": 3.793,
+    "total_flos": 3.099325741767844e+18,
+    "train_loss": 0.8266718128204346,
+    "train_runtime": 5452.5845,
+    "train_samples_per_second": 25.29,
+    "train_steps_per_second": 1.581
+}
eval_results.json ADDED
@@ -0,0 +1,8 @@
+{
+    "epoch": 1.16,
+    "eval_accuracy": 0.6763005780346821,
+    "eval_loss": 0.7993046045303345,
+    "eval_runtime": 131.2807,
+    "eval_samples_per_second": 30.309,
+    "eval_steps_per_second": 3.793
+}
runs/Mar26_17-14-14_9a05b6a6bd10/events.out.tfevents.1711478858.9a05b6a6bd10.1267.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db733004f23a3e5108189aae576cae6f9f485467bd789c8b2534ab9e06e33618
+size 411
train_results.json ADDED
@@ -0,0 +1,8 @@
+{
+    "epoch": 1.16,
+    "total_flos": 3.099325741767844e+18,
+    "train_loss": 0.8266718128204346,
+    "train_runtime": 5452.5845,
+    "train_samples_per_second": 25.29,
+    "train_steps_per_second": 1.581
+}
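The `*_results.json` files above follow the usual layout that `transformers.Trainer` writes through `save_metrics()` at the end of a run, while `trainer_state.json` below is the output of `save_state()`. A minimal sketch for reading the combined metrics back (assuming the files sit in the current working directory):

```python
# Minimal sketch: read back the metrics committed in all_results.json.
import json

with open("all_results.json") as f:
    metrics = json.load(f)

print(f"eval accuracy: {metrics['eval_accuracy']:.4f}")           # ~0.6763
print(f"eval loss:     {metrics['eval_loss']:.4f}")               # ~0.7993
print(f"train runtime: {metrics['train_runtime'] / 60:.1f} min")  # ~90.9 min
```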
trainer_state.json ADDED
@@ -0,0 +1,2005 @@
1
+ {
2
+ "best_metric": 0.7993046045303345,
3
+ "best_model_checkpoint": "./colab20240326ryan/checkpoint-2100",
4
+ "epoch": 1.160092807424594,
5
+ "eval_steps": 100,
6
+ "global_step": 2500,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "grad_norm": 1.9214508533477783,
14
+ "learning_rate": 0.0001997679814385151,
15
+ "loss": 1.7231,
16
+ "step": 10
17
+ },
18
+ {
19
+ "epoch": 0.01,
20
+ "grad_norm": 1.908971905708313,
21
+ "learning_rate": 0.00019953596287703018,
22
+ "loss": 1.5448,
23
+ "step": 20
24
+ },
25
+ {
26
+ "epoch": 0.01,
27
+ "grad_norm": 1.5993173122406006,
28
+ "learning_rate": 0.00019930394431554523,
29
+ "loss": 1.3519,
30
+ "step": 30
31
+ },
32
+ {
33
+ "epoch": 0.02,
34
+ "grad_norm": 4.219737529754639,
35
+ "learning_rate": 0.00019907192575406032,
36
+ "loss": 1.4004,
37
+ "step": 40
38
+ },
39
+ {
40
+ "epoch": 0.02,
41
+ "grad_norm": 2.771422863006592,
42
+ "learning_rate": 0.00019883990719257543,
43
+ "loss": 1.2249,
44
+ "step": 50
45
+ },
46
+ {
47
+ "epoch": 0.03,
48
+ "grad_norm": 2.5087692737579346,
49
+ "learning_rate": 0.0001986078886310905,
50
+ "loss": 1.3035,
51
+ "step": 60
52
+ },
53
+ {
54
+ "epoch": 0.03,
55
+ "grad_norm": 2.224257707595825,
56
+ "learning_rate": 0.0001983758700696056,
57
+ "loss": 1.2159,
58
+ "step": 70
59
+ },
60
+ {
61
+ "epoch": 0.04,
62
+ "grad_norm": 2.9843618869781494,
63
+ "learning_rate": 0.00019814385150812065,
64
+ "loss": 1.0327,
65
+ "step": 80
66
+ },
67
+ {
68
+ "epoch": 0.04,
69
+ "grad_norm": 1.905066967010498,
70
+ "learning_rate": 0.00019791183294663573,
71
+ "loss": 1.0393,
72
+ "step": 90
73
+ },
74
+ {
75
+ "epoch": 0.05,
76
+ "grad_norm": 4.216238021850586,
77
+ "learning_rate": 0.00019767981438515082,
78
+ "loss": 1.1654,
79
+ "step": 100
80
+ },
81
+ {
82
+ "epoch": 0.05,
83
+ "eval_accuracy": 0.5813018346318171,
84
+ "eval_loss": 1.081552505493164,
85
+ "eval_runtime": 142.0814,
86
+ "eval_samples_per_second": 28.005,
87
+ "eval_steps_per_second": 3.505,
88
+ "step": 100
89
+ },
90
+ {
91
+ "epoch": 0.05,
92
+ "grad_norm": 2.5341720581054688,
93
+ "learning_rate": 0.0001974477958236659,
94
+ "loss": 1.162,
95
+ "step": 110
96
+ },
97
+ {
98
+ "epoch": 0.06,
99
+ "grad_norm": 3.2532262802124023,
100
+ "learning_rate": 0.00019721577726218098,
101
+ "loss": 1.04,
102
+ "step": 120
103
+ },
104
+ {
105
+ "epoch": 0.06,
106
+ "grad_norm": 2.41017746925354,
107
+ "learning_rate": 0.00019698375870069607,
108
+ "loss": 1.0566,
109
+ "step": 130
110
+ },
111
+ {
112
+ "epoch": 0.06,
113
+ "grad_norm": 1.7879230976104736,
114
+ "learning_rate": 0.00019675174013921115,
115
+ "loss": 1.0105,
116
+ "step": 140
117
+ },
118
+ {
119
+ "epoch": 0.07,
120
+ "grad_norm": 3.4111428260803223,
121
+ "learning_rate": 0.00019651972157772623,
122
+ "loss": 1.0504,
123
+ "step": 150
124
+ },
125
+ {
126
+ "epoch": 0.07,
127
+ "grad_norm": 3.4261136054992676,
128
+ "learning_rate": 0.00019628770301624132,
129
+ "loss": 1.0128,
130
+ "step": 160
131
+ },
132
+ {
133
+ "epoch": 0.08,
134
+ "grad_norm": 4.207861423492432,
135
+ "learning_rate": 0.0001960556844547564,
136
+ "loss": 0.9814,
137
+ "step": 170
138
+ },
139
+ {
140
+ "epoch": 0.08,
141
+ "grad_norm": 3.604964256286621,
142
+ "learning_rate": 0.00019582366589327148,
143
+ "loss": 1.17,
144
+ "step": 180
145
+ },
146
+ {
147
+ "epoch": 0.09,
148
+ "grad_norm": 2.710505485534668,
149
+ "learning_rate": 0.00019559164733178654,
150
+ "loss": 1.0255,
151
+ "step": 190
152
+ },
153
+ {
154
+ "epoch": 0.09,
155
+ "grad_norm": 4.121289253234863,
156
+ "learning_rate": 0.00019535962877030162,
157
+ "loss": 1.1321,
158
+ "step": 200
159
+ },
160
+ {
161
+ "epoch": 0.09,
162
+ "eval_accuracy": 0.5996481528022116,
163
+ "eval_loss": 0.9905579686164856,
164
+ "eval_runtime": 135.9688,
165
+ "eval_samples_per_second": 29.264,
166
+ "eval_steps_per_second": 3.663,
167
+ "step": 200
168
+ },
169
+ {
170
+ "epoch": 0.1,
171
+ "grad_norm": 2.4283132553100586,
172
+ "learning_rate": 0.0001951276102088167,
173
+ "loss": 1.0087,
174
+ "step": 210
175
+ },
176
+ {
177
+ "epoch": 0.1,
178
+ "grad_norm": 2.141366958618164,
179
+ "learning_rate": 0.0001948955916473318,
180
+ "loss": 0.8722,
181
+ "step": 220
182
+ },
183
+ {
184
+ "epoch": 0.11,
185
+ "grad_norm": 3.6067655086517334,
186
+ "learning_rate": 0.00019466357308584687,
187
+ "loss": 1.0173,
188
+ "step": 230
189
+ },
190
+ {
191
+ "epoch": 0.11,
192
+ "grad_norm": 2.5523955821990967,
193
+ "learning_rate": 0.00019445475638051046,
194
+ "loss": 0.9303,
195
+ "step": 240
196
+ },
197
+ {
198
+ "epoch": 0.12,
199
+ "grad_norm": 2.983736753463745,
200
+ "learning_rate": 0.00019422273781902555,
201
+ "loss": 0.9035,
202
+ "step": 250
203
+ },
204
+ {
205
+ "epoch": 0.12,
206
+ "grad_norm": 3.1925017833709717,
207
+ "learning_rate": 0.00019399071925754063,
208
+ "loss": 0.9329,
209
+ "step": 260
210
+ },
211
+ {
212
+ "epoch": 0.13,
213
+ "grad_norm": 4.603178977966309,
214
+ "learning_rate": 0.00019375870069605569,
215
+ "loss": 0.9351,
216
+ "step": 270
217
+ },
218
+ {
219
+ "epoch": 0.13,
220
+ "grad_norm": 3.129456043243408,
221
+ "learning_rate": 0.00019352668213457077,
222
+ "loss": 1.0367,
223
+ "step": 280
224
+ },
225
+ {
226
+ "epoch": 0.13,
227
+ "grad_norm": 3.650508403778076,
228
+ "learning_rate": 0.00019329466357308585,
229
+ "loss": 0.9677,
230
+ "step": 290
231
+ },
232
+ {
233
+ "epoch": 0.14,
234
+ "grad_norm": 2.0717406272888184,
235
+ "learning_rate": 0.00019306264501160094,
236
+ "loss": 0.9389,
237
+ "step": 300
238
+ },
239
+ {
240
+ "epoch": 0.14,
241
+ "eval_accuracy": 0.625031414928374,
242
+ "eval_loss": 0.9222464561462402,
243
+ "eval_runtime": 133.436,
244
+ "eval_samples_per_second": 29.82,
245
+ "eval_steps_per_second": 3.732,
246
+ "step": 300
247
+ },
248
+ {
249
+ "epoch": 0.14,
250
+ "grad_norm": 3.079808473587036,
251
+ "learning_rate": 0.00019283062645011602,
252
+ "loss": 0.9204,
253
+ "step": 310
254
+ },
255
+ {
256
+ "epoch": 0.15,
257
+ "grad_norm": 3.8033320903778076,
258
+ "learning_rate": 0.0001925986078886311,
259
+ "loss": 0.9505,
260
+ "step": 320
261
+ },
262
+ {
263
+ "epoch": 0.15,
264
+ "grad_norm": 3.029008150100708,
265
+ "learning_rate": 0.0001923665893271462,
266
+ "loss": 0.9715,
267
+ "step": 330
268
+ },
269
+ {
270
+ "epoch": 0.16,
271
+ "grad_norm": 2.543546199798584,
272
+ "learning_rate": 0.00019213457076566127,
273
+ "loss": 0.9928,
274
+ "step": 340
275
+ },
276
+ {
277
+ "epoch": 0.16,
278
+ "grad_norm": 1.7682580947875977,
279
+ "learning_rate": 0.00019190255220417635,
280
+ "loss": 1.0367,
281
+ "step": 350
282
+ },
283
+ {
284
+ "epoch": 0.17,
285
+ "grad_norm": 2.006638526916504,
286
+ "learning_rate": 0.00019167053364269144,
287
+ "loss": 0.932,
288
+ "step": 360
289
+ },
290
+ {
291
+ "epoch": 0.17,
292
+ "grad_norm": 2.4894397258758545,
293
+ "learning_rate": 0.00019143851508120652,
294
+ "loss": 0.8321,
295
+ "step": 370
296
+ },
297
+ {
298
+ "epoch": 0.18,
299
+ "grad_norm": 3.1492834091186523,
300
+ "learning_rate": 0.00019120649651972158,
301
+ "loss": 1.0013,
302
+ "step": 380
303
+ },
304
+ {
305
+ "epoch": 0.18,
306
+ "grad_norm": 4.911842346191406,
307
+ "learning_rate": 0.00019097447795823666,
308
+ "loss": 0.987,
309
+ "step": 390
310
+ },
311
+ {
312
+ "epoch": 0.19,
313
+ "grad_norm": 2.6950294971466064,
314
+ "learning_rate": 0.00019074245939675174,
315
+ "loss": 0.816,
316
+ "step": 400
317
+ },
318
+ {
319
+ "epoch": 0.19,
320
+ "eval_accuracy": 0.5740135712490575,
321
+ "eval_loss": 1.0586621761322021,
322
+ "eval_runtime": 131.2597,
323
+ "eval_samples_per_second": 30.314,
324
+ "eval_steps_per_second": 3.794,
325
+ "step": 400
326
+ },
327
+ {
328
+ "epoch": 0.19,
329
+ "grad_norm": 2.0810515880584717,
330
+ "learning_rate": 0.00019051044083526683,
331
+ "loss": 1.0748,
332
+ "step": 410
333
+ },
334
+ {
335
+ "epoch": 0.19,
336
+ "grad_norm": 3.0597758293151855,
337
+ "learning_rate": 0.0001902784222737819,
338
+ "loss": 0.851,
339
+ "step": 420
340
+ },
341
+ {
342
+ "epoch": 0.2,
343
+ "grad_norm": 2.7524383068084717,
344
+ "learning_rate": 0.000190046403712297,
345
+ "loss": 0.9123,
346
+ "step": 430
347
+ },
348
+ {
349
+ "epoch": 0.2,
350
+ "grad_norm": 4.255000591278076,
351
+ "learning_rate": 0.00018981438515081208,
352
+ "loss": 1.0082,
353
+ "step": 440
354
+ },
355
+ {
356
+ "epoch": 0.21,
357
+ "grad_norm": 2.834663152694702,
358
+ "learning_rate": 0.00018958236658932716,
359
+ "loss": 1.0721,
360
+ "step": 450
361
+ },
362
+ {
363
+ "epoch": 0.21,
364
+ "grad_norm": 2.59566593170166,
365
+ "learning_rate": 0.00018935034802784224,
366
+ "loss": 0.9448,
367
+ "step": 460
368
+ },
369
+ {
370
+ "epoch": 0.22,
371
+ "grad_norm": 2.0671868324279785,
372
+ "learning_rate": 0.00018911832946635733,
373
+ "loss": 0.823,
374
+ "step": 470
375
+ },
376
+ {
377
+ "epoch": 0.22,
378
+ "grad_norm": 2.022857189178467,
379
+ "learning_rate": 0.00018888631090487238,
380
+ "loss": 0.9282,
381
+ "step": 480
382
+ },
383
+ {
384
+ "epoch": 0.23,
385
+ "grad_norm": 1.6197035312652588,
386
+ "learning_rate": 0.00018865429234338747,
387
+ "loss": 0.8492,
388
+ "step": 490
389
+ },
390
+ {
391
+ "epoch": 0.23,
392
+ "grad_norm": 2.2592787742614746,
393
+ "learning_rate": 0.00018842227378190255,
394
+ "loss": 0.7273,
395
+ "step": 500
396
+ },
397
+ {
398
+ "epoch": 0.23,
399
+ "eval_accuracy": 0.6267906509173159,
400
+ "eval_loss": 0.918483555316925,
401
+ "eval_runtime": 131.2522,
402
+ "eval_samples_per_second": 30.316,
403
+ "eval_steps_per_second": 3.794,
404
+ "step": 500
405
+ },
406
+ {
407
+ "epoch": 0.24,
408
+ "grad_norm": 3.200505256652832,
409
+ "learning_rate": 0.00018819025522041763,
410
+ "loss": 0.9205,
411
+ "step": 510
412
+ },
413
+ {
414
+ "epoch": 0.24,
415
+ "grad_norm": 2.9970719814300537,
416
+ "learning_rate": 0.00018795823665893272,
417
+ "loss": 0.8762,
418
+ "step": 520
419
+ },
420
+ {
421
+ "epoch": 0.25,
422
+ "grad_norm": 1.8891489505767822,
423
+ "learning_rate": 0.00018772621809744783,
424
+ "loss": 0.9051,
425
+ "step": 530
426
+ },
427
+ {
428
+ "epoch": 0.25,
429
+ "grad_norm": 2.4764907360076904,
430
+ "learning_rate": 0.00018749419953596288,
431
+ "loss": 0.9494,
432
+ "step": 540
433
+ },
434
+ {
435
+ "epoch": 0.26,
436
+ "grad_norm": 2.9991800785064697,
437
+ "learning_rate": 0.00018726218097447797,
438
+ "loss": 0.9639,
439
+ "step": 550
440
+ },
441
+ {
442
+ "epoch": 0.26,
443
+ "grad_norm": 2.954806327819824,
444
+ "learning_rate": 0.00018703016241299305,
445
+ "loss": 0.9069,
446
+ "step": 560
447
+ },
448
+ {
449
+ "epoch": 0.26,
450
+ "grad_norm": 3.1720399856567383,
451
+ "learning_rate": 0.00018679814385150813,
452
+ "loss": 0.84,
453
+ "step": 570
454
+ },
455
+ {
456
+ "epoch": 0.27,
457
+ "grad_norm": 1.5639662742614746,
458
+ "learning_rate": 0.00018656612529002322,
459
+ "loss": 0.8589,
460
+ "step": 580
461
+ },
462
+ {
463
+ "epoch": 0.27,
464
+ "grad_norm": 3.4077460765838623,
465
+ "learning_rate": 0.00018633410672853827,
466
+ "loss": 0.8529,
467
+ "step": 590
468
+ },
469
+ {
470
+ "epoch": 0.28,
471
+ "grad_norm": 3.089357852935791,
472
+ "learning_rate": 0.00018610208816705336,
473
+ "loss": 0.8282,
474
+ "step": 600
475
+ },
476
+ {
477
+ "epoch": 0.28,
478
+ "eval_accuracy": 0.6293038451872329,
479
+ "eval_loss": 0.9175940155982971,
480
+ "eval_runtime": 132.213,
481
+ "eval_samples_per_second": 30.095,
482
+ "eval_steps_per_second": 3.767,
483
+ "step": 600
484
+ },
485
+ {
486
+ "epoch": 0.28,
487
+ "grad_norm": 2.7197611331939697,
488
+ "learning_rate": 0.00018587006960556844,
489
+ "loss": 0.8377,
490
+ "step": 610
491
+ },
492
+ {
493
+ "epoch": 0.29,
494
+ "grad_norm": 2.838669538497925,
495
+ "learning_rate": 0.00018563805104408355,
496
+ "loss": 0.88,
497
+ "step": 620
498
+ },
499
+ {
500
+ "epoch": 0.29,
501
+ "grad_norm": 4.2069902420043945,
502
+ "learning_rate": 0.00018540603248259864,
503
+ "loss": 0.8175,
504
+ "step": 630
505
+ },
506
+ {
507
+ "epoch": 0.3,
508
+ "grad_norm": 3.0792770385742188,
509
+ "learning_rate": 0.0001851740139211137,
510
+ "loss": 0.8921,
511
+ "step": 640
512
+ },
513
+ {
514
+ "epoch": 0.3,
515
+ "grad_norm": 3.4577174186706543,
516
+ "learning_rate": 0.00018494199535962877,
517
+ "loss": 0.8612,
518
+ "step": 650
519
+ },
520
+ {
521
+ "epoch": 0.31,
522
+ "grad_norm": 3.424455165863037,
523
+ "learning_rate": 0.00018470997679814386,
524
+ "loss": 0.9775,
525
+ "step": 660
526
+ },
527
+ {
528
+ "epoch": 0.31,
529
+ "grad_norm": 2.300741672515869,
530
+ "learning_rate": 0.00018447795823665894,
531
+ "loss": 1.0909,
532
+ "step": 670
533
+ },
534
+ {
535
+ "epoch": 0.32,
536
+ "grad_norm": 1.8668731451034546,
537
+ "learning_rate": 0.00018424593967517403,
538
+ "loss": 0.8205,
539
+ "step": 680
540
+ },
541
+ {
542
+ "epoch": 0.32,
543
+ "grad_norm": 3.170844793319702,
544
+ "learning_rate": 0.0001840139211136891,
545
+ "loss": 0.8701,
546
+ "step": 690
547
+ },
548
+ {
549
+ "epoch": 0.32,
550
+ "grad_norm": 2.8682425022125244,
551
+ "learning_rate": 0.00018378190255220417,
552
+ "loss": 0.8,
553
+ "step": 700
554
+ },
555
+ {
556
+ "epoch": 0.32,
557
+ "eval_accuracy": 0.6272932897712993,
558
+ "eval_loss": 0.9006840586662292,
559
+ "eval_runtime": 129.3938,
560
+ "eval_samples_per_second": 30.751,
561
+ "eval_steps_per_second": 3.849,
562
+ "step": 700
563
+ },
564
+ {
565
+ "epoch": 0.33,
566
+ "grad_norm": 3.769883871078491,
567
+ "learning_rate": 0.00018354988399071928,
568
+ "loss": 0.9765,
569
+ "step": 710
570
+ },
571
+ {
572
+ "epoch": 0.33,
573
+ "grad_norm": 3.9122543334960938,
574
+ "learning_rate": 0.00018331786542923436,
575
+ "loss": 0.9744,
576
+ "step": 720
577
+ },
578
+ {
579
+ "epoch": 0.34,
580
+ "grad_norm": 3.644559860229492,
581
+ "learning_rate": 0.00018308584686774944,
582
+ "loss": 0.8877,
583
+ "step": 730
584
+ },
585
+ {
586
+ "epoch": 0.34,
587
+ "grad_norm": 3.476562976837158,
588
+ "learning_rate": 0.00018285382830626453,
589
+ "loss": 0.867,
590
+ "step": 740
591
+ },
592
+ {
593
+ "epoch": 0.35,
594
+ "grad_norm": 3.0982306003570557,
595
+ "learning_rate": 0.00018262180974477958,
596
+ "loss": 0.9965,
597
+ "step": 750
598
+ },
599
+ {
600
+ "epoch": 0.35,
601
+ "grad_norm": 2.395843505859375,
602
+ "learning_rate": 0.00018238979118329467,
603
+ "loss": 0.8635,
604
+ "step": 760
605
+ },
606
+ {
607
+ "epoch": 0.36,
608
+ "grad_norm": 3.6630685329437256,
609
+ "learning_rate": 0.00018215777262180975,
610
+ "loss": 0.8562,
611
+ "step": 770
612
+ },
613
+ {
614
+ "epoch": 0.36,
615
+ "grad_norm": 1.064682960510254,
616
+ "learning_rate": 0.00018192575406032483,
617
+ "loss": 0.7875,
618
+ "step": 780
619
+ },
620
+ {
621
+ "epoch": 0.37,
622
+ "grad_norm": 3.3986759185791016,
623
+ "learning_rate": 0.00018169373549883992,
624
+ "loss": 0.8583,
625
+ "step": 790
626
+ },
627
+ {
628
+ "epoch": 0.37,
629
+ "grad_norm": 3.0697574615478516,
630
+ "learning_rate": 0.000181461716937355,
631
+ "loss": 0.8777,
632
+ "step": 800
633
+ },
634
+ {
635
+ "epoch": 0.37,
636
+ "eval_accuracy": 0.6202563458155316,
637
+ "eval_loss": 0.9337747693061829,
638
+ "eval_runtime": 131.2508,
639
+ "eval_samples_per_second": 30.316,
640
+ "eval_steps_per_second": 3.794,
641
+ "step": 800
642
+ },
643
+ {
644
+ "epoch": 0.38,
645
+ "grad_norm": 1.8958779573440552,
646
+ "learning_rate": 0.00018122969837587008,
647
+ "loss": 0.9388,
648
+ "step": 810
649
+ },
650
+ {
651
+ "epoch": 0.38,
652
+ "grad_norm": 1.3388631343841553,
653
+ "learning_rate": 0.00018099767981438517,
654
+ "loss": 0.8248,
655
+ "step": 820
656
+ },
657
+ {
658
+ "epoch": 0.39,
659
+ "grad_norm": 4.1161298751831055,
660
+ "learning_rate": 0.00018076566125290025,
661
+ "loss": 0.8008,
662
+ "step": 830
663
+ },
664
+ {
665
+ "epoch": 0.39,
666
+ "grad_norm": 1.5049020051956177,
667
+ "learning_rate": 0.00018053364269141533,
668
+ "loss": 0.8932,
669
+ "step": 840
670
+ },
671
+ {
672
+ "epoch": 0.39,
673
+ "grad_norm": 2.5027551651000977,
674
+ "learning_rate": 0.00018030162412993042,
675
+ "loss": 0.7656,
676
+ "step": 850
677
+ },
678
+ {
679
+ "epoch": 0.4,
680
+ "grad_norm": 2.0704867839813232,
681
+ "learning_rate": 0.00018006960556844547,
682
+ "loss": 0.8206,
683
+ "step": 860
684
+ },
685
+ {
686
+ "epoch": 0.4,
687
+ "grad_norm": 3.6864800453186035,
688
+ "learning_rate": 0.00017983758700696056,
689
+ "loss": 0.8875,
690
+ "step": 870
691
+ },
692
+ {
693
+ "epoch": 0.41,
694
+ "grad_norm": 3.732292890548706,
695
+ "learning_rate": 0.00017960556844547564,
696
+ "loss": 0.9411,
697
+ "step": 880
698
+ },
699
+ {
700
+ "epoch": 0.41,
701
+ "grad_norm": 3.7439608573913574,
702
+ "learning_rate": 0.00017937354988399072,
703
+ "loss": 0.904,
704
+ "step": 890
705
+ },
706
+ {
707
+ "epoch": 0.42,
708
+ "grad_norm": 2.159684181213379,
709
+ "learning_rate": 0.0001791415313225058,
710
+ "loss": 0.7142,
711
+ "step": 900
712
+ },
713
+ {
714
+ "epoch": 0.42,
715
+ "eval_accuracy": 0.614727318421714,
716
+ "eval_loss": 0.9442586302757263,
717
+ "eval_runtime": 130.2393,
718
+ "eval_samples_per_second": 30.551,
719
+ "eval_steps_per_second": 3.824,
720
+ "step": 900
721
+ },
722
+ {
723
+ "epoch": 0.42,
724
+ "grad_norm": 2.202846050262451,
725
+ "learning_rate": 0.0001789095127610209,
726
+ "loss": 0.8798,
727
+ "step": 910
728
+ },
729
+ {
730
+ "epoch": 0.43,
731
+ "grad_norm": 2.4513931274414062,
732
+ "learning_rate": 0.00017867749419953597,
733
+ "loss": 0.8558,
734
+ "step": 920
735
+ },
736
+ {
737
+ "epoch": 0.43,
738
+ "grad_norm": 2.3939168453216553,
739
+ "learning_rate": 0.00017844547563805106,
740
+ "loss": 0.917,
741
+ "step": 930
742
+ },
743
+ {
744
+ "epoch": 0.44,
745
+ "grad_norm": 2.8509578704833984,
746
+ "learning_rate": 0.00017821345707656614,
747
+ "loss": 0.8373,
748
+ "step": 940
749
+ },
750
+ {
751
+ "epoch": 0.44,
752
+ "grad_norm": 1.6446682214736938,
753
+ "learning_rate": 0.00017798143851508122,
754
+ "loss": 0.7641,
755
+ "step": 950
756
+ },
757
+ {
758
+ "epoch": 0.45,
759
+ "grad_norm": 2.431823968887329,
760
+ "learning_rate": 0.00017774941995359628,
761
+ "loss": 0.8709,
762
+ "step": 960
763
+ },
764
+ {
765
+ "epoch": 0.45,
766
+ "grad_norm": 2.1793901920318604,
767
+ "learning_rate": 0.00017751740139211136,
768
+ "loss": 0.7838,
769
+ "step": 970
770
+ },
771
+ {
772
+ "epoch": 0.45,
773
+ "grad_norm": 2.03481125831604,
774
+ "learning_rate": 0.00017728538283062645,
775
+ "loss": 0.844,
776
+ "step": 980
777
+ },
778
+ {
779
+ "epoch": 0.46,
780
+ "grad_norm": 2.9181933403015137,
781
+ "learning_rate": 0.00017705336426914153,
782
+ "loss": 0.9402,
783
+ "step": 990
784
+ },
785
+ {
786
+ "epoch": 0.46,
787
+ "grad_norm": 2.2899041175842285,
788
+ "learning_rate": 0.00017682134570765661,
789
+ "loss": 0.8452,
790
+ "step": 1000
791
+ },
792
+ {
793
+ "epoch": 0.46,
794
+ "eval_accuracy": 0.6282985674792662,
795
+ "eval_loss": 0.8846696615219116,
796
+ "eval_runtime": 130.7794,
797
+ "eval_samples_per_second": 30.425,
798
+ "eval_steps_per_second": 3.808,
799
+ "step": 1000
800
+ },
801
+ {
802
+ "epoch": 0.47,
803
+ "grad_norm": 3.7156896591186523,
804
+ "learning_rate": 0.00017658932714617172,
805
+ "loss": 0.8338,
806
+ "step": 1010
807
+ },
808
+ {
809
+ "epoch": 0.47,
810
+ "grad_norm": 1.9189355373382568,
811
+ "learning_rate": 0.00017635730858468678,
812
+ "loss": 0.9293,
813
+ "step": 1020
814
+ },
815
+ {
816
+ "epoch": 0.48,
817
+ "grad_norm": 3.5769336223602295,
818
+ "learning_rate": 0.00017612529002320186,
819
+ "loss": 0.9528,
820
+ "step": 1030
821
+ },
822
+ {
823
+ "epoch": 0.48,
824
+ "grad_norm": 3.103059768676758,
825
+ "learning_rate": 0.00017589327146171695,
826
+ "loss": 0.8122,
827
+ "step": 1040
828
+ },
829
+ {
830
+ "epoch": 0.49,
831
+ "grad_norm": 1.972256064414978,
832
+ "learning_rate": 0.00017566125290023203,
833
+ "loss": 0.9213,
834
+ "step": 1050
835
+ },
836
+ {
837
+ "epoch": 0.49,
838
+ "grad_norm": 2.265113592147827,
839
+ "learning_rate": 0.00017542923433874711,
840
+ "loss": 0.9075,
841
+ "step": 1060
842
+ },
843
+ {
844
+ "epoch": 0.5,
845
+ "grad_norm": 2.6354522705078125,
846
+ "learning_rate": 0.00017519721577726217,
847
+ "loss": 0.8548,
848
+ "step": 1070
849
+ },
850
+ {
851
+ "epoch": 0.5,
852
+ "grad_norm": 4.182709217071533,
853
+ "learning_rate": 0.00017496519721577725,
854
+ "loss": 0.7877,
855
+ "step": 1080
856
+ },
857
+ {
858
+ "epoch": 0.51,
859
+ "grad_norm": 2.5550811290740967,
860
+ "learning_rate": 0.00017473317865429236,
861
+ "loss": 0.9916,
862
+ "step": 1090
863
+ },
864
+ {
865
+ "epoch": 0.51,
866
+ "grad_norm": 2.5702245235443115,
867
+ "learning_rate": 0.00017450116009280745,
868
+ "loss": 0.845,
869
+ "step": 1100
870
+ },
871
+ {
872
+ "epoch": 0.51,
873
+ "eval_accuracy": 0.6622266901231465,
874
+ "eval_loss": 0.8412047624588013,
875
+ "eval_runtime": 129.4336,
876
+ "eval_samples_per_second": 30.742,
877
+ "eval_steps_per_second": 3.848,
878
+ "step": 1100
879
+ },
880
+ {
881
+ "epoch": 0.52,
882
+ "grad_norm": 3.416830539703369,
883
+ "learning_rate": 0.00017426914153132253,
884
+ "loss": 0.6856,
885
+ "step": 1110
886
+ },
887
+ {
888
+ "epoch": 0.52,
889
+ "grad_norm": 1.638490915298462,
890
+ "learning_rate": 0.0001740371229698376,
891
+ "loss": 0.7501,
892
+ "step": 1120
893
+ },
894
+ {
895
+ "epoch": 0.52,
896
+ "grad_norm": 4.172976016998291,
897
+ "learning_rate": 0.00017380510440835267,
898
+ "loss": 0.8201,
899
+ "step": 1130
900
+ },
901
+ {
902
+ "epoch": 0.53,
903
+ "grad_norm": 2.498607873916626,
904
+ "learning_rate": 0.00017357308584686775,
905
+ "loss": 0.7571,
906
+ "step": 1140
907
+ },
908
+ {
909
+ "epoch": 0.53,
910
+ "grad_norm": 5.480504035949707,
911
+ "learning_rate": 0.00017334106728538284,
912
+ "loss": 0.7706,
913
+ "step": 1150
914
+ },
915
+ {
916
+ "epoch": 0.54,
917
+ "grad_norm": 3.2535948753356934,
918
+ "learning_rate": 0.00017310904872389792,
919
+ "loss": 0.8646,
920
+ "step": 1160
921
+ },
922
+ {
923
+ "epoch": 0.54,
924
+ "grad_norm": 4.1205878257751465,
925
+ "learning_rate": 0.000172877030162413,
926
+ "loss": 0.9275,
927
+ "step": 1170
928
+ },
929
+ {
930
+ "epoch": 0.55,
931
+ "grad_norm": 3.1862285137176514,
932
+ "learning_rate": 0.0001726450116009281,
933
+ "loss": 0.7335,
934
+ "step": 1180
935
+ },
936
+ {
937
+ "epoch": 0.55,
938
+ "grad_norm": 2.7202231884002686,
939
+ "learning_rate": 0.00017241299303944317,
940
+ "loss": 0.7428,
941
+ "step": 1190
942
+ },
943
+ {
944
+ "epoch": 0.56,
945
+ "grad_norm": 2.3965518474578857,
946
+ "learning_rate": 0.00017218097447795826,
947
+ "loss": 0.9167,
948
+ "step": 1200
949
+ },
950
+ {
951
+ "epoch": 0.56,
952
+ "eval_accuracy": 0.6526765518974617,
953
+ "eval_loss": 0.87410569190979,
954
+ "eval_runtime": 130.959,
955
+ "eval_samples_per_second": 30.384,
956
+ "eval_steps_per_second": 3.803,
957
+ "step": 1200
958
+ },
959
+ {
960
+ "epoch": 0.56,
961
+ "grad_norm": 3.0782244205474854,
962
+ "learning_rate": 0.00017194895591647334,
963
+ "loss": 0.8603,
964
+ "step": 1210
965
+ },
966
+ {
967
+ "epoch": 0.57,
968
+ "grad_norm": 2.4333736896514893,
969
+ "learning_rate": 0.00017171693735498842,
970
+ "loss": 0.779,
971
+ "step": 1220
972
+ },
973
+ {
974
+ "epoch": 0.57,
975
+ "grad_norm": 3.9308993816375732,
976
+ "learning_rate": 0.00017148491879350348,
977
+ "loss": 0.7695,
978
+ "step": 1230
979
+ },
980
+ {
981
+ "epoch": 0.58,
982
+ "grad_norm": 2.4168589115142822,
983
+ "learning_rate": 0.00017125290023201856,
984
+ "loss": 0.8655,
985
+ "step": 1240
986
+ },
987
+ {
988
+ "epoch": 0.58,
989
+ "grad_norm": 3.680983304977417,
990
+ "learning_rate": 0.00017102088167053365,
991
+ "loss": 0.7862,
992
+ "step": 1250
993
+ },
994
+ {
995
+ "epoch": 0.58,
996
+ "grad_norm": 3.8315815925598145,
997
+ "learning_rate": 0.00017078886310904873,
998
+ "loss": 0.7972,
999
+ "step": 1260
1000
+ },
1001
+ {
1002
+ "epoch": 0.59,
1003
+ "grad_norm": 1.910196304321289,
1004
+ "learning_rate": 0.0001705568445475638,
1005
+ "loss": 0.7454,
1006
+ "step": 1270
1007
+ },
1008
+ {
1009
+ "epoch": 0.59,
1010
+ "grad_norm": 1.9004689455032349,
1011
+ "learning_rate": 0.0001703248259860789,
1012
+ "loss": 0.7709,
1013
+ "step": 1280
1014
+ },
1015
+ {
1016
+ "epoch": 0.6,
1017
+ "grad_norm": 3.2291324138641357,
1018
+ "learning_rate": 0.00017009280742459398,
1019
+ "loss": 0.7085,
1020
+ "step": 1290
1021
+ },
1022
+ {
1023
+ "epoch": 0.6,
1024
+ "grad_norm": 2.6493847370147705,
1025
+ "learning_rate": 0.00016986078886310906,
1026
+ "loss": 0.8226,
1027
+ "step": 1300
1028
+ },
1029
+ {
1030
+ "epoch": 0.6,
1031
+ "eval_accuracy": 0.6659964815280222,
1032
+ "eval_loss": 0.8283097743988037,
1033
+ "eval_runtime": 130.9921,
1034
+ "eval_samples_per_second": 30.376,
1035
+ "eval_steps_per_second": 3.802,
1036
+ "step": 1300
1037
+ },
1038
+ {
1039
+ "epoch": 0.61,
1040
+ "grad_norm": 4.123025417327881,
1041
+ "learning_rate": 0.00016962877030162415,
1042
+ "loss": 0.9109,
1043
+ "step": 1310
1044
+ },
1045
+ {
1046
+ "epoch": 0.61,
1047
+ "grad_norm": 3.537853479385376,
1048
+ "learning_rate": 0.00016939675174013923,
1049
+ "loss": 0.9124,
1050
+ "step": 1320
1051
+ },
1052
+ {
1053
+ "epoch": 0.62,
1054
+ "grad_norm": 2.515120506286621,
1055
+ "learning_rate": 0.0001691647331786543,
1056
+ "loss": 0.8357,
1057
+ "step": 1330
1058
+ },
1059
+ {
1060
+ "epoch": 0.62,
1061
+ "grad_norm": 1.7295467853546143,
1062
+ "learning_rate": 0.00016893271461716937,
1063
+ "loss": 0.7425,
1064
+ "step": 1340
1065
+ },
1066
+ {
1067
+ "epoch": 0.63,
1068
+ "grad_norm": 2.3161306381225586,
1069
+ "learning_rate": 0.00016870069605568445,
1070
+ "loss": 0.7518,
1071
+ "step": 1350
1072
+ },
1073
+ {
1074
+ "epoch": 0.63,
1075
+ "grad_norm": 2.593114137649536,
1076
+ "learning_rate": 0.00016846867749419954,
1077
+ "loss": 0.816,
1078
+ "step": 1360
1079
+ },
1080
+ {
1081
+ "epoch": 0.64,
1082
+ "grad_norm": 2.4234368801116943,
1083
+ "learning_rate": 0.00016823665893271462,
1084
+ "loss": 0.8256,
1085
+ "step": 1370
1086
+ },
1087
+ {
1088
+ "epoch": 0.64,
1089
+ "grad_norm": 2.0647542476654053,
1090
+ "learning_rate": 0.0001680046403712297,
1091
+ "loss": 0.9176,
1092
+ "step": 1380
1093
+ },
1094
+ {
1095
+ "epoch": 0.65,
1096
+ "grad_norm": 1.5590307712554932,
1097
+ "learning_rate": 0.00016777262180974479,
1098
+ "loss": 0.7476,
1099
+ "step": 1390
1100
+ },
1101
+ {
1102
+ "epoch": 0.65,
1103
+ "grad_norm": 2.5730812549591064,
1104
+ "learning_rate": 0.00016754060324825987,
1105
+ "loss": 0.7738,
1106
+ "step": 1400
1107
+ },
1108
+ {
1109
+ "epoch": 0.65,
1110
+ "eval_accuracy": 0.6401105805478764,
1111
+ "eval_loss": 0.8641374111175537,
1112
+ "eval_runtime": 131.4185,
1113
+ "eval_samples_per_second": 30.277,
1114
+ "eval_steps_per_second": 3.789,
1115
+ "step": 1400
1116
+ },
1117
+ {
1118
+ "epoch": 0.65,
1119
+ "grad_norm": 3.080822467803955,
1120
+ "learning_rate": 0.00016730858468677495,
1121
+ "loss": 0.8131,
1122
+ "step": 1410
1123
+ },
1124
+ {
1125
+ "epoch": 0.66,
1126
+ "grad_norm": 3.1145131587982178,
1127
+ "learning_rate": 0.00016707656612529004,
1128
+ "loss": 0.8048,
1129
+ "step": 1420
1130
+ },
1131
+ {
1132
+ "epoch": 0.66,
1133
+ "grad_norm": 2.4306788444519043,
1134
+ "learning_rate": 0.00016684454756380512,
1135
+ "loss": 0.7241,
1136
+ "step": 1430
1137
+ },
1138
+ {
1139
+ "epoch": 0.67,
1140
+ "grad_norm": 3.0480475425720215,
1141
+ "learning_rate": 0.00016661252900232018,
1142
+ "loss": 0.6803,
1143
+ "step": 1440
1144
+ },
1145
+ {
1146
+ "epoch": 0.67,
1147
+ "grad_norm": 2.5454821586608887,
1148
+ "learning_rate": 0.00016638051044083526,
1149
+ "loss": 0.773,
1150
+ "step": 1450
1151
+ },
1152
+ {
1153
+ "epoch": 0.68,
1154
+ "grad_norm": 2.2483272552490234,
1155
+ "learning_rate": 0.00016614849187935034,
1156
+ "loss": 0.8333,
1157
+ "step": 1460
1158
+ },
1159
+ {
1160
+ "epoch": 0.68,
1161
+ "grad_norm": 1.9373365640640259,
1162
+ "learning_rate": 0.00016591647331786543,
1163
+ "loss": 0.7289,
1164
+ "step": 1470
1165
+ },
1166
+ {
1167
+ "epoch": 0.69,
1168
+ "grad_norm": 2.8379623889923096,
1169
+ "learning_rate": 0.00016568445475638054,
1170
+ "loss": 0.7733,
1171
+ "step": 1480
1172
+ },
1173
+ {
1174
+ "epoch": 0.69,
1175
+ "grad_norm": 2.349510431289673,
1176
+ "learning_rate": 0.00016545243619489562,
1177
+ "loss": 0.8449,
1178
+ "step": 1490
1179
+ },
1180
+ {
1181
+ "epoch": 0.7,
1182
+ "grad_norm": 4.029337406158447,
1183
+ "learning_rate": 0.00016522041763341068,
1184
+ "loss": 0.8427,
1185
+ "step": 1500
1186
+ },
1187
+ {
1188
+ "epoch": 0.7,
1189
+ "eval_accuracy": 0.6725307866298065,
1190
+ "eval_loss": 0.803027331829071,
1191
+ "eval_runtime": 131.7174,
1192
+ "eval_samples_per_second": 30.209,
1193
+ "eval_steps_per_second": 3.781,
1194
+ "step": 1500
1195
+ },
1196
+ {
1197
+ "epoch": 0.7,
1198
+ "grad_norm": 2.8301985263824463,
1199
+ "learning_rate": 0.00016498839907192576,
1200
+ "loss": 0.7437,
1201
+ "step": 1510
1202
+ },
1203
+ {
1204
+ "epoch": 0.71,
1205
+ "grad_norm": 2.7581472396850586,
1206
+ "learning_rate": 0.00016475638051044084,
1207
+ "loss": 0.7888,
1208
+ "step": 1520
1209
+ },
1210
+ {
1211
+ "epoch": 0.71,
1212
+ "grad_norm": 2.044255256652832,
1213
+ "learning_rate": 0.00016452436194895593,
1214
+ "loss": 0.7,
1215
+ "step": 1530
1216
+ },
1217
+ {
1218
+ "epoch": 0.71,
1219
+ "grad_norm": 4.427280426025391,
1220
+ "learning_rate": 0.000164292343387471,
1221
+ "loss": 0.8854,
1222
+ "step": 1540
1223
+ },
1224
+ {
1225
+ "epoch": 0.72,
1226
+ "grad_norm": 3.044015884399414,
1227
+ "learning_rate": 0.00016406032482598607,
1228
+ "loss": 0.677,
1229
+ "step": 1550
1230
+ },
1231
+ {
1232
+ "epoch": 0.72,
1233
+ "grad_norm": 2.954887628555298,
1234
+ "learning_rate": 0.00016382830626450115,
1235
+ "loss": 0.7198,
1236
+ "step": 1560
1237
+ },
1238
+ {
1239
+ "epoch": 0.73,
1240
+ "grad_norm": 2.2452878952026367,
1241
+ "learning_rate": 0.00016359628770301626,
1242
+ "loss": 0.7495,
1243
+ "step": 1570
1244
+ },
1245
+ {
1246
+ "epoch": 0.73,
1247
+ "grad_norm": 2.0875964164733887,
1248
+ "learning_rate": 0.00016336426914153134,
1249
+ "loss": 0.8106,
1250
+ "step": 1580
1251
+ },
1252
+ {
1253
+ "epoch": 0.74,
1254
+ "grad_norm": 1.8363144397735596,
1255
+ "learning_rate": 0.00016313225058004643,
1256
+ "loss": 0.6737,
1257
+ "step": 1590
1258
+ },
1259
+ {
1260
+ "epoch": 0.74,
1261
+ "grad_norm": 3.34063982963562,
1262
+ "learning_rate": 0.00016290023201856148,
1263
+ "loss": 0.6783,
1264
+ "step": 1600
1265
+ },
1266
+ {
1267
+ "epoch": 0.74,
1268
+ "eval_accuracy": 0.6564463433023373,
1269
+ "eval_loss": 0.8367487192153931,
1270
+ "eval_runtime": 129.8582,
1271
+ "eval_samples_per_second": 30.641,
1272
+ "eval_steps_per_second": 3.835,
1273
+ "step": 1600
1274
+ },
1275
+ {
1276
+ "epoch": 0.75,
1277
+ "grad_norm": 4.197628974914551,
1278
+ "learning_rate": 0.00016266821345707657,
1279
+ "loss": 0.7794,
1280
+ "step": 1610
1281
+ },
1282
+ {
1283
+ "epoch": 0.75,
1284
+ "grad_norm": 2.9976580142974854,
1285
+ "learning_rate": 0.00016243619489559165,
1286
+ "loss": 0.832,
1287
+ "step": 1620
1288
+ },
1289
+ {
1290
+ "epoch": 0.76,
1291
+ "grad_norm": 2.8508596420288086,
1292
+ "learning_rate": 0.00016220417633410673,
1293
+ "loss": 0.86,
1294
+ "step": 1630
1295
+ },
1296
+ {
1297
+ "epoch": 0.76,
1298
+ "grad_norm": 2.7021024227142334,
1299
+ "learning_rate": 0.00016197215777262182,
1300
+ "loss": 0.7531,
1301
+ "step": 1640
1302
+ },
1303
+ {
1304
+ "epoch": 0.77,
1305
+ "grad_norm": 2.3222107887268066,
1306
+ "learning_rate": 0.0001617401392111369,
1307
+ "loss": 0.7338,
1308
+ "step": 1650
1309
+ },
1310
+ {
1311
+ "epoch": 0.77,
1312
+ "grad_norm": 2.1219635009765625,
1313
+ "learning_rate": 0.00016150812064965198,
1314
+ "loss": 0.8965,
1315
+ "step": 1660
1316
+ },
1317
+ {
1318
+ "epoch": 0.77,
1319
+ "grad_norm": 11.041630744934082,
1320
+ "learning_rate": 0.00016127610208816707,
1321
+ "loss": 0.7939,
1322
+ "step": 1670
1323
+ },
1324
+ {
1325
+ "epoch": 0.78,
1326
+ "grad_norm": 2.4006307125091553,
1327
+ "learning_rate": 0.00016104408352668215,
1328
+ "loss": 0.736,
1329
+ "step": 1680
1330
+ },
1331
+ {
1332
+ "epoch": 0.78,
1333
+ "grad_norm": 2.535405158996582,
1334
+ "learning_rate": 0.00016081206496519723,
1335
+ "loss": 0.7982,
1336
+ "step": 1690
1337
+ },
1338
+ {
1339
+ "epoch": 0.79,
1340
+ "grad_norm": 2.8077518939971924,
1341
+ "learning_rate": 0.00016058004640371232,
1342
+ "loss": 0.7856,
1343
+ "step": 1700
1344
+ },
1345
+ {
1346
+ "epoch": 0.79,
1347
+ "eval_accuracy": 0.6051771801960292,
1348
+ "eval_loss": 0.9696215391159058,
1349
+ "eval_runtime": 130.9284,
1350
+ "eval_samples_per_second": 30.391,
1351
+ "eval_steps_per_second": 3.804,
1352
+ "step": 1700
1353
+ },
1354
+ {
1355
+ "epoch": 0.79,
1356
+ "grad_norm": 2.17937970161438,
1357
+ "learning_rate": 0.00016034802784222737,
1358
+ "loss": 0.8074,
1359
+ "step": 1710
1360
+ },
1361
+ {
1362
+ "epoch": 0.8,
1363
+ "grad_norm": 3.2899444103240967,
1364
+ "learning_rate": 0.00016011600928074246,
1365
+ "loss": 0.8416,
1366
+ "step": 1720
1367
+ },
1368
+ {
1369
+ "epoch": 0.8,
1370
+ "grad_norm": 3.247441530227661,
1371
+ "learning_rate": 0.00015988399071925754,
1372
+ "loss": 0.8302,
1373
+ "step": 1730
1374
+ },
1375
+ {
1376
+ "epoch": 0.81,
1377
+ "grad_norm": 2.508978843688965,
1378
+ "learning_rate": 0.00015965197215777262,
1379
+ "loss": 0.7192,
1380
+ "step": 1740
1381
+ },
1382
+ {
1383
+ "epoch": 0.81,
1384
+ "grad_norm": 3.634054183959961,
1385
+ "learning_rate": 0.0001594199535962877,
1386
+ "loss": 0.7919,
1387
+ "step": 1750
1388
+ },
1389
+ {
1390
+ "epoch": 0.82,
1391
+ "grad_norm": 2.7715981006622314,
1392
+ "learning_rate": 0.0001591879350348028,
1393
+ "loss": 0.703,
1394
+ "step": 1760
1395
+ },
1396
+ {
1397
+ "epoch": 0.82,
1398
+ "grad_norm": 1.9510867595672607,
1399
+ "learning_rate": 0.00015895591647331787,
1400
+ "loss": 0.7246,
1401
+ "step": 1770
1402
+ },
1403
+ {
1404
+ "epoch": 0.83,
1405
+ "grad_norm": 2.5826807022094727,
1406
+ "learning_rate": 0.00015872389791183296,
1407
+ "loss": 0.8291,
1408
+ "step": 1780
1409
+ },
1410
+ {
1411
+ "epoch": 0.83,
1412
+ "grad_norm": 2.8682587146759033,
1413
+ "learning_rate": 0.00015849187935034804,
1414
+ "loss": 0.7284,
1415
+ "step": 1790
1416
+ },
1417
+ {
1418
+ "epoch": 0.84,
1419
+ "grad_norm": 2.7725648880004883,
1420
+ "learning_rate": 0.00015825986078886313,
1421
+ "loss": 0.7356,
1422
+ "step": 1800
1423
+ },
1424
+ {
1425
+ "epoch": 0.84,
1426
+ "eval_accuracy": 0.6516712741894949,
1427
+ "eval_loss": 0.857125461101532,
1428
+ "eval_runtime": 130.6056,
1429
+ "eval_samples_per_second": 30.466,
1430
+ "eval_steps_per_second": 3.813,
1431
+ "step": 1800
1432
+ },
1433
+ {
1434
+ "epoch": 0.84,
1435
+ "grad_norm": 1.756324291229248,
1436
+ "learning_rate": 0.0001580278422273782,
1437
+ "loss": 0.8463,
1438
+ "step": 1810
1439
+ },
1440
+ {
1441
+ "epoch": 0.84,
1442
+ "grad_norm": 1.613060712814331,
1443
+ "learning_rate": 0.00015779582366589326,
1444
+ "loss": 0.7517,
1445
+ "step": 1820
1446
+ },
1447
+ {
1448
+ "epoch": 0.85,
1449
+ "grad_norm": 3.3475732803344727,
1450
+ "learning_rate": 0.00015756380510440835,
1451
+ "loss": 0.835,
1452
+ "step": 1830
1453
+ },
1454
+ {
1455
+ "epoch": 0.85,
1456
+ "grad_norm": 2.118978977203369,
1457
+ "learning_rate": 0.00015733178654292343,
1458
+ "loss": 0.8143,
1459
+ "step": 1840
1460
+ },
1461
+ {
1462
+ "epoch": 0.86,
1463
+ "grad_norm": 2.3323171138763428,
1464
+ "learning_rate": 0.00015709976798143852,
1465
+ "loss": 0.8139,
1466
+ "step": 1850
1467
+ },
1468
+ {
1469
+ "epoch": 0.86,
1470
+ "grad_norm": 2.580026865005493,
1471
+ "learning_rate": 0.00015686774941995363,
1472
+ "loss": 0.8665,
1473
+ "step": 1860
1474
+ },
1475
+ {
1476
+ "epoch": 0.87,
1477
+ "grad_norm": 2.8367908000946045,
1478
+ "learning_rate": 0.00015663573085846868,
1479
+ "loss": 0.7187,
1480
+ "step": 1870
1481
+ },
1482
+ {
1483
+ "epoch": 0.87,
1484
+ "grad_norm": 3.431257724761963,
1485
+ "learning_rate": 0.00015640371229698377,
1486
+ "loss": 0.8616,
1487
+ "step": 1880
1488
+ },
1489
+ {
1490
+ "epoch": 0.88,
1491
+ "grad_norm": 3.8367366790771484,
1492
+ "learning_rate": 0.00015617169373549885,
1493
+ "loss": 0.7892,
1494
+ "step": 1890
1495
+ },
1496
+ {
1497
+ "epoch": 0.88,
1498
+ "grad_norm": 2.648777723312378,
1499
+ "learning_rate": 0.00015593967517401393,
1500
+ "loss": 0.9186,
1501
+ "step": 1900
1502
+ },
1503
+ {
1504
+ "epoch": 0.88,
1505
+ "eval_accuracy": 0.6675043980899723,
1506
+ "eval_loss": 0.8260459899902344,
1507
+ "eval_runtime": 131.0193,
1508
+ "eval_samples_per_second": 30.37,
1509
+ "eval_steps_per_second": 3.801,
1510
+ "step": 1900
1511
+ },
1512
+ {
1513
+ "epoch": 0.89,
1514
+ "grad_norm": 2.5578160285949707,
1515
+ "learning_rate": 0.00015570765661252902,
1516
+ "loss": 0.6849,
1517
+ "step": 1910
1518
+ },
1519
+ {
1520
+ "epoch": 0.89,
1521
+ "grad_norm": 2.5033838748931885,
1522
+ "learning_rate": 0.00015547563805104407,
1523
+ "loss": 0.6708,
1524
+ "step": 1920
1525
+ },
1526
+ {
1527
+ "epoch": 0.9,
1528
+ "grad_norm": 2.074505090713501,
1529
+ "learning_rate": 0.00015524361948955916,
1530
+ "loss": 0.717,
1531
+ "step": 1930
1532
+ },
1533
+ {
1534
+ "epoch": 0.9,
1535
+ "grad_norm": 2.335425853729248,
1536
+ "learning_rate": 0.00015501160092807424,
1537
+ "loss": 0.9028,
1538
+ "step": 1940
1539
+ },
1540
+ {
1541
+ "epoch": 0.9,
1542
+ "grad_norm": 3.3634660243988037,
1543
+ "learning_rate": 0.00015477958236658935,
1544
+ "loss": 0.6975,
1545
+ "step": 1950
1546
+ },
1547
+ {
1548
+ "epoch": 0.91,
1549
+ "grad_norm": 2.022599697113037,
1550
+ "learning_rate": 0.00015454756380510443,
1551
+ "loss": 0.6303,
1552
+ "step": 1960
1553
+ },
1554
+ {
1555
+ "epoch": 0.91,
1556
+ "grad_norm": 4.197246551513672,
1557
+ "learning_rate": 0.0001543155452436195,
1558
+ "loss": 0.7147,
1559
+ "step": 1970
1560
+ },
1561
+ {
1562
+ "epoch": 0.92,
1563
+ "grad_norm": 3.748758554458618,
1564
+ "learning_rate": 0.00015408352668213457,
1565
+ "loss": 0.7944,
1566
+ "step": 1980
1567
+ },
1568
+ {
1569
+ "epoch": 0.92,
1570
+ "grad_norm": 2.029123067855835,
1571
+ "learning_rate": 0.00015385150812064966,
1572
+ "loss": 0.7363,
1573
+ "step": 1990
1574
+ },
1575
+ {
1576
+ "epoch": 0.93,
1577
+ "grad_norm": 2.3515162467956543,
1578
+ "learning_rate": 0.00015361948955916474,
1579
+ "loss": 0.8218,
1580
+ "step": 2000
1581
+ },
1582
+ {
1583
+ "epoch": 0.93,
1584
+ "eval_accuracy": 0.654938426740387,
1585
+ "eval_loss": 0.8351722359657288,
1586
+ "eval_runtime": 129.1873,
1587
+ "eval_samples_per_second": 30.8,
1588
+ "eval_steps_per_second": 3.855,
1589
+ "step": 2000
1590
+ },
1591
+ {
1592
+ "epoch": 0.93,
1593
+ "grad_norm": 3.3627982139587402,
1594
+ "learning_rate": 0.00015338747099767982,
1595
+ "loss": 0.7137,
1596
+ "step": 2010
1597
+ },
1598
+ {
1599
+ "epoch": 0.94,
1600
+ "grad_norm": 3.0946731567382812,
1601
+ "learning_rate": 0.0001531554524361949,
1602
+ "loss": 0.8327,
1603
+ "step": 2020
1604
+ },
1605
+ {
1606
+ "epoch": 0.94,
1607
+ "grad_norm": 1.9171329736709595,
1608
+ "learning_rate": 0.00015292343387470996,
1609
+ "loss": 0.7805,
1610
+ "step": 2030
1611
+ },
1612
+ {
1613
+ "epoch": 0.95,
1614
+ "grad_norm": 3.749093532562256,
1615
+ "learning_rate": 0.00015269141531322507,
1616
+ "loss": 0.8531,
1617
+ "step": 2040
1618
+ },
1619
+ {
1620
+ "epoch": 0.95,
1621
+ "grad_norm": 2.960636615753174,
1622
+ "learning_rate": 0.00015245939675174016,
1623
+ "loss": 0.7838,
1624
+ "step": 2050
1625
+ },
1626
+ {
1627
+ "epoch": 0.96,
1628
+ "grad_norm": 2.5994982719421387,
1629
+ "learning_rate": 0.00015222737819025524,
1630
+ "loss": 0.6932,
1631
+ "step": 2060
1632
+ },
1633
+ {
1634
+ "epoch": 0.96,
1635
+ "grad_norm": 2.6657791137695312,
1636
+ "learning_rate": 0.00015199535962877032,
1637
+ "loss": 0.731,
1638
+ "step": 2070
1639
+ },
1640
+ {
1641
+ "epoch": 0.97,
1642
+ "grad_norm": 1.7149091958999634,
1643
+ "learning_rate": 0.00015176334106728538,
1644
+ "loss": 0.8234,
1645
+ "step": 2080
1646
+ },
1647
+ {
1648
+ "epoch": 0.97,
1649
+ "grad_norm": 1.5878645181655884,
1650
+ "learning_rate": 0.00015153132250580046,
1651
+ "loss": 0.7098,
1652
+ "step": 2090
1653
+ },
1654
+ {
1655
+ "epoch": 0.97,
1656
+ "grad_norm": 1.8897655010223389,
1657
+ "learning_rate": 0.00015129930394431555,
1658
+ "loss": 0.6245,
1659
+ "step": 2100
1660
+ },
1661
+ {
1662
+ "epoch": 0.97,
1663
+ "eval_accuracy": 0.6763005780346821,
1664
+ "eval_loss": 0.7993046045303345,
1665
+ "eval_runtime": 133.106,
1666
+ "eval_samples_per_second": 29.893,
1667
+ "eval_steps_per_second": 3.741,
1668
+ "step": 2100
1669
+ },
1670
+ {
1671
+ "epoch": 0.98,
1672
+ "grad_norm": 3.2624616622924805,
1673
+ "learning_rate": 0.00015106728538283063,
1674
+ "loss": 0.8209,
1675
+ "step": 2110
1676
+ },
1677
+ {
1678
+ "epoch": 0.98,
1679
+ "grad_norm": 2.469926595687866,
1680
+ "learning_rate": 0.00015083526682134571,
1681
+ "loss": 0.7127,
1682
+ "step": 2120
1683
+ },
1684
+ {
1685
+ "epoch": 0.99,
1686
+ "grad_norm": 3.8582072257995605,
1687
+ "learning_rate": 0.0001506032482598608,
1688
+ "loss": 0.655,
1689
+ "step": 2130
1690
+ },
1691
+ {
1692
+ "epoch": 0.99,
1693
+ "grad_norm": 3.1348557472229004,
1694
+ "learning_rate": 0.00015037122969837588,
1695
+ "loss": 0.8291,
1696
+ "step": 2140
1697
+ },
1698
+ {
1699
+ "epoch": 1.0,
1700
+ "grad_norm": 3.1626625061035156,
1701
+ "learning_rate": 0.00015013921113689096,
1702
+ "loss": 0.8373,
1703
+ "step": 2150
1704
+ },
1705
+ {
1706
+ "epoch": 1.0,
1707
+ "grad_norm": 1.9470633268356323,
1708
+ "learning_rate": 0.00014990719257540605,
1709
+ "loss": 0.6225,
1710
+ "step": 2160
1711
+ },
1712
+ {
1713
+ "epoch": 1.01,
1714
+ "grad_norm": 2.336871862411499,
1715
+ "learning_rate": 0.00014967517401392113,
1716
+ "loss": 0.5857,
1717
+ "step": 2170
1718
+ },
1719
+ {
1720
+ "epoch": 1.01,
1721
+ "grad_norm": 1.737004280090332,
1722
+ "learning_rate": 0.00014944315545243621,
1723
+ "loss": 0.5935,
1724
+ "step": 2180
1725
+ },
1726
+ {
1727
+ "epoch": 1.02,
1728
+ "grad_norm": 2.336336612701416,
1729
+ "learning_rate": 0.00014921113689095127,
1730
+ "loss": 0.5824,
1731
+ "step": 2190
1732
+ },
1733
+ {
1734
+ "epoch": 1.02,
1735
+ "grad_norm": 2.338193655014038,
1736
+ "learning_rate": 0.00014897911832946635,
1737
+ "loss": 0.4945,
1738
+ "step": 2200
1739
+ },
1740
+ {
1741
+ "epoch": 1.02,
1742
+ "eval_accuracy": 0.6589595375722543,
1743
+ "eval_loss": 0.8315911889076233,
1744
+ "eval_runtime": 132.4435,
1745
+ "eval_samples_per_second": 30.043,
1746
+ "eval_steps_per_second": 3.76,
1747
+ "step": 2200
1748
+ },
1749
+ {
1750
+ "epoch": 1.03,
1751
+ "grad_norm": 2.667480707168579,
1752
+ "learning_rate": 0.00014874709976798144,
1753
+ "loss": 0.6186,
1754
+ "step": 2210
1755
+ },
1756
+ {
1757
+ "epoch": 1.03,
1758
+ "grad_norm": 2.019312858581543,
1759
+ "learning_rate": 0.00014851508120649652,
1760
+ "loss": 0.5037,
1761
+ "step": 2220
1762
+ },
1763
+ {
1764
+ "epoch": 1.03,
1765
+ "grad_norm": 2.4240450859069824,
1766
+ "learning_rate": 0.0001482830626450116,
1767
+ "loss": 0.5781,
1768
+ "step": 2230
1769
+ },
1770
+ {
1771
+ "epoch": 1.04,
1772
+ "grad_norm": 2.2333035469055176,
1773
+ "learning_rate": 0.0001480510440835267,
1774
+ "loss": 0.4923,
1775
+ "step": 2240
1776
+ },
1777
+ {
1778
+ "epoch": 1.04,
1779
+ "grad_norm": 2.023408889770508,
1780
+ "learning_rate": 0.00014781902552204177,
1781
+ "loss": 0.5855,
1782
+ "step": 2250
1783
+ },
1784
+ {
1785
+ "epoch": 1.05,
1786
+ "grad_norm": 2.4406158924102783,
1787
+ "learning_rate": 0.00014758700696055685,
1788
+ "loss": 0.5579,
1789
+ "step": 2260
1790
+ },
1791
+ {
1792
+ "epoch": 1.05,
1793
+ "grad_norm": 3.5192463397979736,
1794
+ "learning_rate": 0.00014735498839907194,
1795
+ "loss": 0.6066,
1796
+ "step": 2270
1797
+ },
1798
+ {
1799
+ "epoch": 1.06,
1800
+ "grad_norm": 4.174234390258789,
1801
+ "learning_rate": 0.00014712296983758702,
1802
+ "loss": 0.6238,
1803
+ "step": 2280
1804
+ },
1805
+ {
1806
+ "epoch": 1.06,
1807
+ "grad_norm": 2.916022539138794,
1808
+ "learning_rate": 0.00014689095127610208,
1809
+ "loss": 0.5475,
1810
+ "step": 2290
1811
+ },
1812
+ {
1813
+ "epoch": 1.07,
1814
+ "grad_norm": 3.1607933044433594,
1815
+ "learning_rate": 0.00014665893271461716,
1816
+ "loss": 0.6064,
1817
+ "step": 2300
1818
+ },
1819
+ {
1820
+ "epoch": 1.07,
1821
+ "eval_accuracy": 0.6680070369439558,
1822
+ "eval_loss": 0.8378371596336365,
1823
+ "eval_runtime": 132.7481,
1824
+ "eval_samples_per_second": 29.974,
1825
+ "eval_steps_per_second": 3.751,
1826
+ "step": 2300
1827
+ },
1828
+ {
1829
+ "epoch": 1.07,
1830
+ "grad_norm": 1.8557933568954468,
1831
+ "learning_rate": 0.00014642691415313224,
1832
+ "loss": 0.6037,
1833
+ "step": 2310
1834
+ },
1835
+ {
1836
+ "epoch": 1.08,
1837
+ "grad_norm": 4.142065048217773,
1838
+ "learning_rate": 0.00014619489559164733,
1839
+ "loss": 0.6552,
1840
+ "step": 2320
1841
+ },
1842
+ {
1843
+ "epoch": 1.08,
1844
+ "grad_norm": 3.622699499130249,
1845
+ "learning_rate": 0.00014596287703016244,
1846
+ "loss": 0.4755,
1847
+ "step": 2330
1848
+ },
1849
+ {
1850
+ "epoch": 1.09,
1851
+ "grad_norm": 3.9584805965423584,
1852
+ "learning_rate": 0.00014573085846867752,
1853
+ "loss": 0.677,
1854
+ "step": 2340
1855
+ },
1856
+ {
1857
+ "epoch": 1.09,
1858
+ "grad_norm": 2.498189926147461,
1859
+ "learning_rate": 0.00014549883990719258,
1860
+ "loss": 0.5519,
1861
+ "step": 2350
1862
+ },
1863
+ {
1864
+ "epoch": 1.1,
1865
+ "grad_norm": 5.097834587097168,
1866
+ "learning_rate": 0.00014526682134570766,
1867
+ "loss": 0.6068,
1868
+ "step": 2360
1869
+ },
1870
+ {
1871
+ "epoch": 1.1,
1872
+ "grad_norm": 3.7660293579101562,
1873
+ "learning_rate": 0.00014503480278422275,
1874
+ "loss": 0.5356,
1875
+ "step": 2370
1876
+ },
1877
+ {
1878
+ "epoch": 1.1,
1879
+ "grad_norm": 3.423243999481201,
1880
+ "learning_rate": 0.00014480278422273783,
1881
+ "loss": 0.6954,
1882
+ "step": 2380
1883
+ },
1884
+ {
1885
+ "epoch": 1.11,
1886
+ "grad_norm": 3.099900484085083,
1887
+ "learning_rate": 0.0001445707656612529,
1888
+ "loss": 0.4953,
1889
+ "step": 2390
1890
+ },
1891
+ {
1892
+ "epoch": 1.11,
1893
+ "grad_norm": 4.438780784606934,
1894
+ "learning_rate": 0.00014433874709976797,
1895
+ "loss": 0.638,
1896
+ "step": 2400
1897
+ },
1898
+ {
1899
+ "epoch": 1.11,
1900
+ "eval_accuracy": 0.6828348831364665,
1901
+ "eval_loss": 0.8223534822463989,
1902
+ "eval_runtime": 130.5252,
1903
+ "eval_samples_per_second": 30.485,
1904
+ "eval_steps_per_second": 3.815,
1905
+ "step": 2400
1906
+ },
1907
+ {
1908
+ "epoch": 1.12,
1909
+ "grad_norm": 2.960069417953491,
1910
+ "learning_rate": 0.00014410672853828305,
1911
+ "loss": 0.5635,
1912
+ "step": 2410
1913
+ },
1914
+ {
1915
+ "epoch": 1.12,
1916
+ "grad_norm": 1.4343048334121704,
1917
+ "learning_rate": 0.00014387470997679816,
1918
+ "loss": 0.5937,
1919
+ "step": 2420
1920
+ },
1921
+ {
1922
+ "epoch": 1.13,
1923
+ "grad_norm": 1.8916206359863281,
1924
+ "learning_rate": 0.00014364269141531325,
1925
+ "loss": 0.5543,
1926
+ "step": 2430
1927
+ },
1928
+ {
1929
+ "epoch": 1.13,
1930
+ "grad_norm": 2.315703868865967,
1931
+ "learning_rate": 0.00014341067285382833,
1932
+ "loss": 0.4934,
1933
+ "step": 2440
1934
+ },
1935
+ {
1936
+ "epoch": 1.14,
1937
+ "grad_norm": 2.3009326457977295,
1938
+ "learning_rate": 0.00014317865429234339,
1939
+ "loss": 0.5487,
1940
+ "step": 2450
1941
+ },
1942
+ {
1943
+ "epoch": 1.14,
1944
+ "grad_norm": 2.4745514392852783,
1945
+ "learning_rate": 0.00014294663573085847,
1946
+ "loss": 0.6988,
1947
+ "step": 2460
1948
+ },
1949
+ {
1950
+ "epoch": 1.15,
1951
+ "grad_norm": 2.8443727493286133,
1952
+ "learning_rate": 0.00014271461716937355,
1953
+ "loss": 0.5079,
1954
+ "step": 2470
1955
+ },
1956
+ {
1957
+ "epoch": 1.15,
1958
+ "grad_norm": 3.124251127243042,
1959
+ "learning_rate": 0.00014248259860788864,
1960
+ "loss": 0.5499,
1961
+ "step": 2480
1962
+ },
1963
+ {
1964
+ "epoch": 1.16,
1965
+ "grad_norm": 3.0896270275115967,
1966
+ "learning_rate": 0.00014225058004640372,
1967
+ "loss": 0.6026,
1968
+ "step": 2490
1969
+ },
1970
+ {
1971
+ "epoch": 1.16,
1972
+ "grad_norm": 2.8856897354125977,
1973
+ "learning_rate": 0.0001420185614849188,
1974
+ "loss": 0.6253,
1975
+ "step": 2500
1976
+ },
1977
+ {
1978
+ "epoch": 1.16,
1979
+ "eval_accuracy": 0.6617240512691631,
1980
+ "eval_loss": 0.8880072236061096,
1981
+ "eval_runtime": 129.7205,
1982
+ "eval_samples_per_second": 30.674,
1983
+ "eval_steps_per_second": 3.839,
1984
+ "step": 2500
1985
+ },
1986
+ {
1987
+ "epoch": 1.16,
1988
+ "step": 2500,
1989
+ "total_flos": 3.099325741767844e+18,
1990
+ "train_loss": 0.8266718128204346,
1991
+ "train_runtime": 5452.5845,
1992
+ "train_samples_per_second": 25.29,
1993
+ "train_steps_per_second": 1.581
1994
+ }
1995
+ ],
1996
+ "logging_steps": 10,
1997
+ "max_steps": 8620,
1998
+ "num_input_tokens_seen": 0,
1999
+ "num_train_epochs": 4,
2000
+ "save_steps": 100,
2001
+ "total_flos": 3.099325741767844e+18,
2002
+ "train_batch_size": 16,
2003
+ "trial_name": null,
2004
+ "trial_params": null
2005
+ }
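The `log_history` above interleaves a training-loss record every 10 steps (`logging_steps`) with an evaluation record every 100 steps (`eval_steps`). A minimal sketch for pulling the evaluation curve back out of this file (assuming it is read from the local working directory):

```python
# Minimal sketch: extract the evaluation records from trainer_state.json.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

# Evaluation records are the log entries that carry an "eval_loss" key.
eval_log = [e for e in state["log_history"] if "eval_loss" in e]

best = min(eval_log, key=lambda e: e["eval_loss"])
print(f"best eval_loss {best['eval_loss']:.4f} "
      f"(accuracy {best['eval_accuracy']:.4f}) at step {best['step']}")
# -> step 2100, matching "best_metric" and "best_model_checkpoint" above
```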