erbacher committed on
Commit 33adfe6
1 Parent(s): f2b03f6

Model save

Files changed (5)
  1. README.md +71 -0
  2. all_results.json +13 -0
  3. eval_results.json +8 -0
  4. train_results.json +8 -0
  5. trainer_state.json +2172 -0
README.md ADDED
@@ -0,0 +1,71 @@
+ ---
+ license: mit
+ base_model: HuggingFaceH4/zephyr-7b-beta
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: zephyr-convsearch-7b
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-convsearch-7b
+
+ This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4475
+
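The card above stops at the evaluation loss and does not include a usage snippet. A minimal loading sketch with Transformers is given below; the repository id `erbacher/zephyr-convsearch-7b` is inferred from the commit author and model name and is an assumption, as is the presence of Zephyr's chat template in the saved tokenizer.

```python
# Minimal sketch (not from the model card): load the fine-tuned checkpoint and generate.
# The repo id and the availability of a chat template are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "erbacher/zephyr-convsearch-7b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```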
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 32
+ - total_train_batch_size: 512
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 10
+
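For readers who want to reproduce a comparable setup, here is a minimal sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`. It is not the training script behind this commit: model and dataset loading, the 2-GPU launch, and the precision setting are assumptions or placeholders.

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
# Dataset, model, and the multi-GPU launch (torchrun/accelerate) are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-convsearch-7b",
    learning_rate=2e-5,
    per_device_train_batch_size=8,    # train_batch_size
    per_device_eval_batch_size=8,     # eval_batch_size
    gradient_accumulation_steps=32,   # 8 per device x 2 GPUs x 32 = 512 effective
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    evaluation_strategy="epoch",      # approximates the per-epoch eval rows below
    logging_steps=1,                  # matches logging_steps in trainer_state.json
    bf16=True,                        # assumption; precision is not stated in the card
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```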
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.5796 | 0.62 | 34 | 0.5780 |
+ | 0.5119 | 1.62 | 68 | 0.5122 |
+ | 0.4799 | 2.63 | 103 | 0.4831 |
+ | 0.4712 | 3.62 | 137 | 0.4706 |
+ | 0.4615 | 4.63 | 172 | 0.4631 |
+ | 0.4584 | 5.62 | 206 | 0.4574 |
+ | 0.4524 | 6.63 | 241 | 0.4529 |
+ | 0.4507 | 7.62 | 275 | 0.4502 |
+ | 0.4478 | 8.63 | 310 | 0.4480 |
+ | 0.4467 | 9.62 | 344 | 0.4475 |
+
+
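The validation loss in the table above drops quickly over the first three epochs and is nearly flat by the end of training. A small plotting sketch is included below; the values are copied from the table, and matplotlib is assumed to be available.

```python
# Plot the validation-loss trend from the training-results table above.
# Values are copied verbatim from the table; matplotlib is assumed installed.
import matplotlib.pyplot as plt

steps = [34, 68, 103, 137, 172, 206, 241, 275, 310, 344]
val_loss = [0.5780, 0.5122, 0.4831, 0.4706, 0.4631,
            0.4574, 0.4529, 0.4502, 0.4480, 0.4475]

plt.plot(steps, val_loss, marker="o")
plt.xlabel("Step")
plt.ylabel("Validation loss")
plt.title("zephyr-convsearch-7b validation loss")
plt.tight_layout()
plt.savefig("validation_loss.png")
```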
+ ### Framework versions
+
+ - Transformers 4.35.0
+ - Pytorch 2.1.1+cu118
+ - Datasets 2.14.6
+ - Tokenizers 0.14.1
all_results.json ADDED
@@ -0,0 +1,13 @@
+ {
+     "epoch": 9.62,
+     "eval_loss": 0.44749271869659424,
+     "eval_runtime": 23.1279,
+     "eval_samples": 200,
+     "eval_samples_per_second": 8.648,
+     "eval_steps_per_second": 0.562,
+     "train_loss": 0.4900048969443454,
+     "train_runtime": 108147.3514,
+     "train_samples": 27870,
+     "train_samples_per_second": 2.577,
+     "train_steps_per_second": 0.005
+ }
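The figures in `all_results.json` are internally consistent: 108147 seconds of training is roughly 30 hours, and 27870 samples over the 10 planned epochs divided by the runtime reproduces the reported 2.577 samples per second. A small sketch of that check follows; the local filename is an assumption.

```python
# Quick consistency check on all_results.json:
# train_samples * planned epochs / train_runtime should reproduce train_samples_per_second.
import json

with open("all_results.json") as f:   # path is an assumption
    results = json.load(f)

epochs_planned = 10                   # num_epochs from the model card
throughput = results["train_samples"] * epochs_planned / results["train_runtime"]

print(f"train hours: {results['train_runtime'] / 3600:.1f}")        # ~30.0 h
print(f"train samples/sec: {throughput:.3f}")                        # ~2.577, matches the report
print(f"eval samples/sec: {results['eval_samples'] / results['eval_runtime']:.3f}")  # ~8.648
```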
eval_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 9.62,
+     "eval_loss": 0.44749271869659424,
+     "eval_runtime": 23.1279,
+     "eval_samples": 200,
+     "eval_samples_per_second": 8.648,
+     "eval_steps_per_second": 0.562
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 9.62,
+     "train_loss": 0.4900048969443454,
+     "train_runtime": 108147.3514,
+     "train_samples": 27870,
+     "train_samples_per_second": 2.577,
+     "train_steps_per_second": 0.005
+ }
trainer_state.json ADDED
@@ -0,0 +1,2172 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 9.620551090700344,
5
+ "eval_steps": 500,
6
+ "global_step": 344,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.02,
13
+ "learning_rate": 1.9999830768577445e-05,
14
+ "loss": 0.7381,
15
+ "step": 1
16
+ },
17
+ {
18
+ "epoch": 0.04,
19
+ "learning_rate": 1.9999323080037623e-05,
20
+ "loss": 0.7407,
21
+ "step": 2
22
+ },
23
+ {
24
+ "epoch": 0.06,
25
+ "learning_rate": 1.9998476951563914e-05,
26
+ "loss": 0.7359,
27
+ "step": 3
28
+ },
29
+ {
30
+ "epoch": 0.07,
31
+ "learning_rate": 1.999729241179462e-05,
32
+ "loss": 0.7298,
33
+ "step": 4
34
+ },
35
+ {
36
+ "epoch": 0.09,
37
+ "learning_rate": 1.9995769500822007e-05,
38
+ "loss": 0.7305,
39
+ "step": 5
40
+ },
41
+ {
42
+ "epoch": 0.11,
43
+ "learning_rate": 1.999390827019096e-05,
44
+ "loss": 0.7124,
45
+ "step": 6
46
+ },
47
+ {
48
+ "epoch": 0.13,
49
+ "learning_rate": 1.9991708782897214e-05,
50
+ "loss": 0.7076,
51
+ "step": 7
52
+ },
53
+ {
54
+ "epoch": 0.15,
55
+ "learning_rate": 1.998917111338525e-05,
56
+ "loss": 0.6927,
57
+ "step": 8
58
+ },
59
+ {
60
+ "epoch": 0.17,
61
+ "learning_rate": 1.9986295347545738e-05,
62
+ "loss": 0.6858,
63
+ "step": 9
64
+ },
65
+ {
66
+ "epoch": 0.18,
67
+ "learning_rate": 1.9983081582712684e-05,
68
+ "loss": 0.6812,
69
+ "step": 10
70
+ },
71
+ {
72
+ "epoch": 0.2,
73
+ "learning_rate": 1.9979529927660076e-05,
74
+ "loss": 0.6788,
75
+ "step": 11
76
+ },
77
+ {
78
+ "epoch": 0.22,
79
+ "learning_rate": 1.9975640502598243e-05,
80
+ "loss": 0.6633,
81
+ "step": 12
82
+ },
83
+ {
84
+ "epoch": 0.24,
85
+ "learning_rate": 1.9971413439169777e-05,
86
+ "loss": 0.6589,
87
+ "step": 13
88
+ },
89
+ {
90
+ "epoch": 0.26,
91
+ "learning_rate": 1.996684888044506e-05,
92
+ "loss": 0.6471,
93
+ "step": 14
94
+ },
95
+ {
96
+ "epoch": 0.28,
97
+ "learning_rate": 1.9961946980917457e-05,
98
+ "loss": 0.6474,
99
+ "step": 15
100
+ },
101
+ {
102
+ "epoch": 0.29,
103
+ "learning_rate": 1.9956707906498046e-05,
104
+ "loss": 0.6419,
105
+ "step": 16
106
+ },
107
+ {
108
+ "epoch": 0.31,
109
+ "learning_rate": 1.9951131834510034e-05,
110
+ "loss": 0.6345,
111
+ "step": 17
112
+ },
113
+ {
114
+ "epoch": 0.33,
115
+ "learning_rate": 1.9945218953682736e-05,
116
+ "loss": 0.635,
117
+ "step": 18
118
+ },
119
+ {
120
+ "epoch": 0.35,
121
+ "learning_rate": 1.99389694641452e-05,
122
+ "loss": 0.6309,
123
+ "step": 19
124
+ },
125
+ {
126
+ "epoch": 0.37,
127
+ "learning_rate": 1.9932383577419432e-05,
128
+ "loss": 0.619,
129
+ "step": 20
130
+ },
131
+ {
132
+ "epoch": 0.39,
133
+ "learning_rate": 1.9925461516413224e-05,
134
+ "loss": 0.6184,
135
+ "step": 21
136
+ },
137
+ {
138
+ "epoch": 0.4,
139
+ "learning_rate": 1.9918203515412616e-05,
140
+ "loss": 0.6127,
141
+ "step": 22
142
+ },
143
+ {
144
+ "epoch": 0.42,
145
+ "learning_rate": 1.9910609820073986e-05,
146
+ "loss": 0.6108,
147
+ "step": 23
148
+ },
149
+ {
150
+ "epoch": 0.44,
151
+ "learning_rate": 1.9902680687415704e-05,
152
+ "loss": 0.6138,
153
+ "step": 24
154
+ },
155
+ {
156
+ "epoch": 0.46,
157
+ "learning_rate": 1.9894416385809444e-05,
158
+ "loss": 0.6039,
159
+ "step": 25
160
+ },
161
+ {
162
+ "epoch": 0.48,
163
+ "learning_rate": 1.9885817194971116e-05,
164
+ "loss": 0.6123,
165
+ "step": 26
166
+ },
167
+ {
168
+ "epoch": 0.5,
169
+ "learning_rate": 1.9876883405951378e-05,
170
+ "loss": 0.5973,
171
+ "step": 27
172
+ },
173
+ {
174
+ "epoch": 0.51,
175
+ "learning_rate": 1.9867615321125796e-05,
176
+ "loss": 0.6002,
177
+ "step": 28
178
+ },
179
+ {
180
+ "epoch": 0.53,
181
+ "learning_rate": 1.9858013254184597e-05,
182
+ "loss": 0.5908,
183
+ "step": 29
184
+ },
185
+ {
186
+ "epoch": 0.55,
187
+ "learning_rate": 1.9848077530122083e-05,
188
+ "loss": 0.596,
189
+ "step": 30
190
+ },
191
+ {
192
+ "epoch": 0.57,
193
+ "learning_rate": 1.983780848522559e-05,
194
+ "loss": 0.5886,
195
+ "step": 31
196
+ },
197
+ {
198
+ "epoch": 0.59,
199
+ "learning_rate": 1.9827206467064133e-05,
200
+ "loss": 0.5908,
201
+ "step": 32
202
+ },
203
+ {
204
+ "epoch": 0.61,
205
+ "learning_rate": 1.9816271834476642e-05,
206
+ "loss": 0.5821,
207
+ "step": 33
208
+ },
209
+ {
210
+ "epoch": 0.62,
211
+ "learning_rate": 1.9805004957559795e-05,
212
+ "loss": 0.5796,
213
+ "step": 34
214
+ },
215
+ {
216
+ "epoch": 0.62,
217
+ "eval_loss": 0.5779749751091003,
218
+ "eval_runtime": 24.4017,
219
+ "eval_samples_per_second": 8.196,
220
+ "eval_steps_per_second": 0.533,
221
+ "step": 34
222
+ },
223
+ {
224
+ "epoch": 1.01,
225
+ "learning_rate": 1.9793406217655516e-05,
226
+ "loss": 0.5719,
227
+ "step": 35
228
+ },
229
+ {
230
+ "epoch": 1.03,
231
+ "learning_rate": 1.9781476007338058e-05,
232
+ "loss": 0.5771,
233
+ "step": 36
234
+ },
235
+ {
236
+ "epoch": 1.05,
237
+ "learning_rate": 1.976921473040071e-05,
238
+ "loss": 0.5709,
239
+ "step": 37
240
+ },
241
+ {
242
+ "epoch": 1.06,
243
+ "learning_rate": 1.9756622801842144e-05,
244
+ "loss": 0.567,
245
+ "step": 38
246
+ },
247
+ {
248
+ "epoch": 1.08,
249
+ "learning_rate": 1.9743700647852356e-05,
250
+ "loss": 0.5746,
251
+ "step": 39
252
+ },
253
+ {
254
+ "epoch": 1.1,
255
+ "learning_rate": 1.973044870579824e-05,
256
+ "loss": 0.5651,
257
+ "step": 40
258
+ },
259
+ {
260
+ "epoch": 1.12,
261
+ "learning_rate": 1.9716867424208805e-05,
262
+ "loss": 0.5621,
263
+ "step": 41
264
+ },
265
+ {
266
+ "epoch": 1.14,
267
+ "learning_rate": 1.9702957262759964e-05,
268
+ "loss": 0.5581,
269
+ "step": 42
270
+ },
271
+ {
272
+ "epoch": 1.16,
273
+ "learning_rate": 1.9688718692259007e-05,
274
+ "loss": 0.5555,
275
+ "step": 43
276
+ },
277
+ {
278
+ "epoch": 1.18,
279
+ "learning_rate": 1.967415219462864e-05,
280
+ "loss": 0.5512,
281
+ "step": 44
282
+ },
283
+ {
284
+ "epoch": 1.19,
285
+ "learning_rate": 1.9659258262890683e-05,
286
+ "loss": 0.554,
287
+ "step": 45
288
+ },
289
+ {
290
+ "epoch": 1.21,
291
+ "learning_rate": 1.964403740114939e-05,
292
+ "loss": 0.5507,
293
+ "step": 46
294
+ },
295
+ {
296
+ "epoch": 1.23,
297
+ "learning_rate": 1.962849012457438e-05,
298
+ "loss": 0.5482,
299
+ "step": 47
300
+ },
301
+ {
302
+ "epoch": 1.25,
303
+ "learning_rate": 1.961261695938319e-05,
304
+ "loss": 0.5439,
305
+ "step": 48
306
+ },
307
+ {
308
+ "epoch": 1.27,
309
+ "learning_rate": 1.9596418442823495e-05,
310
+ "loss": 0.5387,
311
+ "step": 49
312
+ },
313
+ {
314
+ "epoch": 1.29,
315
+ "learning_rate": 1.957989512315489e-05,
316
+ "loss": 0.5434,
317
+ "step": 50
318
+ },
319
+ {
320
+ "epoch": 1.3,
321
+ "learning_rate": 1.9563047559630356e-05,
322
+ "loss": 0.5388,
323
+ "step": 51
324
+ },
325
+ {
326
+ "epoch": 1.32,
327
+ "learning_rate": 1.954587632247732e-05,
328
+ "loss": 0.5385,
329
+ "step": 52
330
+ },
331
+ {
332
+ "epoch": 1.34,
333
+ "learning_rate": 1.9528381992878362e-05,
334
+ "loss": 0.5364,
335
+ "step": 53
336
+ },
337
+ {
338
+ "epoch": 1.36,
339
+ "learning_rate": 1.9510565162951538e-05,
340
+ "loss": 0.535,
341
+ "step": 54
342
+ },
343
+ {
344
+ "epoch": 1.38,
345
+ "learning_rate": 1.949242643573034e-05,
346
+ "loss": 0.5253,
347
+ "step": 55
348
+ },
349
+ {
350
+ "epoch": 1.4,
351
+ "learning_rate": 1.9473966425143292e-05,
352
+ "loss": 0.5278,
353
+ "step": 56
354
+ },
355
+ {
356
+ "epoch": 1.41,
357
+ "learning_rate": 1.945518575599317e-05,
358
+ "loss": 0.5316,
359
+ "step": 57
360
+ },
361
+ {
362
+ "epoch": 1.43,
363
+ "learning_rate": 1.9436085063935837e-05,
364
+ "loss": 0.5267,
365
+ "step": 58
366
+ },
367
+ {
368
+ "epoch": 1.45,
369
+ "learning_rate": 1.9416664995458756e-05,
370
+ "loss": 0.5265,
371
+ "step": 59
372
+ },
373
+ {
374
+ "epoch": 1.47,
375
+ "learning_rate": 1.9396926207859085e-05,
376
+ "loss": 0.5235,
377
+ "step": 60
378
+ },
379
+ {
380
+ "epoch": 1.49,
381
+ "learning_rate": 1.937686936922145e-05,
382
+ "loss": 0.5217,
383
+ "step": 61
384
+ },
385
+ {
386
+ "epoch": 1.51,
387
+ "learning_rate": 1.9356495158395317e-05,
388
+ "loss": 0.5243,
389
+ "step": 62
390
+ },
391
+ {
392
+ "epoch": 1.52,
393
+ "learning_rate": 1.9335804264972018e-05,
394
+ "loss": 0.5189,
395
+ "step": 63
396
+ },
397
+ {
398
+ "epoch": 1.54,
399
+ "learning_rate": 1.9314797389261426e-05,
400
+ "loss": 0.5193,
401
+ "step": 64
402
+ },
403
+ {
404
+ "epoch": 1.56,
405
+ "learning_rate": 1.9293475242268224e-05,
406
+ "loss": 0.5182,
407
+ "step": 65
408
+ },
409
+ {
410
+ "epoch": 1.58,
411
+ "learning_rate": 1.9271838545667876e-05,
412
+ "loss": 0.5222,
413
+ "step": 66
414
+ },
415
+ {
416
+ "epoch": 1.6,
417
+ "learning_rate": 1.924988803178216e-05,
418
+ "loss": 0.516,
419
+ "step": 67
420
+ },
421
+ {
422
+ "epoch": 1.62,
423
+ "learning_rate": 1.9227624443554425e-05,
424
+ "loss": 0.5119,
425
+ "step": 68
426
+ },
427
+ {
428
+ "epoch": 1.62,
429
+ "eval_loss": 0.5121633410453796,
430
+ "eval_runtime": 23.8105,
431
+ "eval_samples_per_second": 8.4,
432
+ "eval_steps_per_second": 0.546,
433
+ "step": 68
434
+ },
435
+ {
436
+ "epoch": 2.0,
437
+ "learning_rate": 1.9205048534524405e-05,
438
+ "loss": 0.5095,
439
+ "step": 69
440
+ },
441
+ {
442
+ "epoch": 2.02,
443
+ "learning_rate": 1.9182161068802742e-05,
444
+ "loss": 0.5091,
445
+ "step": 70
446
+ },
447
+ {
448
+ "epoch": 2.04,
449
+ "learning_rate": 1.9158962821045113e-05,
450
+ "loss": 0.5086,
451
+ "step": 71
452
+ },
453
+ {
454
+ "epoch": 2.06,
455
+ "learning_rate": 1.913545457642601e-05,
456
+ "loss": 0.507,
457
+ "step": 72
458
+ },
459
+ {
460
+ "epoch": 2.07,
461
+ "learning_rate": 1.9111637130612172e-05,
462
+ "loss": 0.5096,
463
+ "step": 73
464
+ },
465
+ {
466
+ "epoch": 2.09,
467
+ "learning_rate": 1.9087511289735646e-05,
468
+ "loss": 0.5124,
469
+ "step": 74
470
+ },
471
+ {
472
+ "epoch": 2.11,
473
+ "learning_rate": 1.9063077870366504e-05,
474
+ "loss": 0.5013,
475
+ "step": 75
476
+ },
477
+ {
478
+ "epoch": 2.13,
479
+ "learning_rate": 1.9038337699485207e-05,
480
+ "loss": 0.5048,
481
+ "step": 76
482
+ },
483
+ {
484
+ "epoch": 2.15,
485
+ "learning_rate": 1.9013291614454622e-05,
486
+ "loss": 0.4988,
487
+ "step": 77
488
+ },
489
+ {
490
+ "epoch": 2.17,
491
+ "learning_rate": 1.8987940462991673e-05,
492
+ "loss": 0.502,
493
+ "step": 78
494
+ },
495
+ {
496
+ "epoch": 2.18,
497
+ "learning_rate": 1.8962285103138637e-05,
498
+ "loss": 0.4969,
499
+ "step": 79
500
+ },
501
+ {
502
+ "epoch": 2.2,
503
+ "learning_rate": 1.8936326403234125e-05,
504
+ "loss": 0.503,
505
+ "step": 80
506
+ },
507
+ {
508
+ "epoch": 2.22,
509
+ "learning_rate": 1.891006524188368e-05,
510
+ "loss": 0.4978,
511
+ "step": 81
512
+ },
513
+ {
514
+ "epoch": 2.24,
515
+ "learning_rate": 1.8883502507930044e-05,
516
+ "loss": 0.4954,
517
+ "step": 82
518
+ },
519
+ {
520
+ "epoch": 2.26,
521
+ "learning_rate": 1.8856639100423062e-05,
522
+ "loss": 0.4939,
523
+ "step": 83
524
+ },
525
+ {
526
+ "epoch": 2.28,
527
+ "learning_rate": 1.8829475928589272e-05,
528
+ "loss": 0.4942,
529
+ "step": 84
530
+ },
531
+ {
532
+ "epoch": 2.3,
533
+ "learning_rate": 1.880201391180111e-05,
534
+ "loss": 0.4933,
535
+ "step": 85
536
+ },
537
+ {
538
+ "epoch": 2.31,
539
+ "learning_rate": 1.877425397954582e-05,
540
+ "loss": 0.4946,
541
+ "step": 86
542
+ },
543
+ {
544
+ "epoch": 2.33,
545
+ "learning_rate": 1.874619707139396e-05,
546
+ "loss": 0.4939,
547
+ "step": 87
548
+ },
549
+ {
550
+ "epoch": 2.35,
551
+ "learning_rate": 1.8717844136967626e-05,
552
+ "loss": 0.4922,
553
+ "step": 88
554
+ },
555
+ {
556
+ "epoch": 2.37,
557
+ "learning_rate": 1.8689196135908303e-05,
558
+ "loss": 0.4925,
559
+ "step": 89
560
+ },
561
+ {
562
+ "epoch": 2.39,
563
+ "learning_rate": 1.866025403784439e-05,
564
+ "loss": 0.4902,
565
+ "step": 90
566
+ },
567
+ {
568
+ "epoch": 2.41,
569
+ "learning_rate": 1.8631018822358363e-05,
570
+ "loss": 0.4866,
571
+ "step": 91
572
+ },
573
+ {
574
+ "epoch": 2.42,
575
+ "learning_rate": 1.860149147895366e-05,
576
+ "loss": 0.4937,
577
+ "step": 92
578
+ },
579
+ {
580
+ "epoch": 2.44,
581
+ "learning_rate": 1.8571673007021124e-05,
582
+ "loss": 0.4897,
583
+ "step": 93
584
+ },
585
+ {
586
+ "epoch": 2.46,
587
+ "learning_rate": 1.854156441580526e-05,
588
+ "loss": 0.4879,
589
+ "step": 94
590
+ },
591
+ {
592
+ "epoch": 2.48,
593
+ "learning_rate": 1.8511166724369997e-05,
594
+ "loss": 0.4888,
595
+ "step": 95
596
+ },
597
+ {
598
+ "epoch": 2.5,
599
+ "learning_rate": 1.848048096156426e-05,
600
+ "loss": 0.4908,
601
+ "step": 96
602
+ },
603
+ {
604
+ "epoch": 2.52,
605
+ "learning_rate": 1.8449508165987106e-05,
606
+ "loss": 0.4856,
607
+ "step": 97
608
+ },
609
+ {
610
+ "epoch": 2.53,
611
+ "learning_rate": 1.8418249385952575e-05,
612
+ "loss": 0.4842,
613
+ "step": 98
614
+ },
615
+ {
616
+ "epoch": 2.55,
617
+ "learning_rate": 1.8386705679454243e-05,
618
+ "loss": 0.4877,
619
+ "step": 99
620
+ },
621
+ {
622
+ "epoch": 2.57,
623
+ "learning_rate": 1.8354878114129368e-05,
624
+ "loss": 0.491,
625
+ "step": 100
626
+ },
627
+ {
628
+ "epoch": 2.59,
629
+ "learning_rate": 1.832276776722278e-05,
630
+ "loss": 0.4877,
631
+ "step": 101
632
+ },
633
+ {
634
+ "epoch": 2.61,
635
+ "learning_rate": 1.8290375725550417e-05,
636
+ "loss": 0.4863,
637
+ "step": 102
638
+ },
639
+ {
640
+ "epoch": 2.63,
641
+ "learning_rate": 1.8257703085462542e-05,
642
+ "loss": 0.4799,
643
+ "step": 103
644
+ },
645
+ {
646
+ "epoch": 2.63,
647
+ "eval_loss": 0.4830611050128937,
648
+ "eval_runtime": 23.8066,
649
+ "eval_samples_per_second": 8.401,
650
+ "eval_steps_per_second": 0.546,
651
+ "step": 103
652
+ },
653
+ {
654
+ "epoch": 3.01,
655
+ "learning_rate": 1.8224750952806626e-05,
656
+ "loss": 0.4797,
657
+ "step": 104
658
+ },
659
+ {
660
+ "epoch": 3.03,
661
+ "learning_rate": 1.819152044288992e-05,
662
+ "loss": 0.4833,
663
+ "step": 105
664
+ },
665
+ {
666
+ "epoch": 3.05,
667
+ "learning_rate": 1.8158012680441723e-05,
668
+ "loss": 0.4833,
669
+ "step": 106
670
+ },
671
+ {
672
+ "epoch": 3.07,
673
+ "learning_rate": 1.8124228799575295e-05,
674
+ "loss": 0.4818,
675
+ "step": 107
676
+ },
677
+ {
678
+ "epoch": 3.08,
679
+ "learning_rate": 1.8090169943749477e-05,
680
+ "loss": 0.4885,
681
+ "step": 108
682
+ },
683
+ {
684
+ "epoch": 3.1,
685
+ "learning_rate": 1.8055837265729996e-05,
686
+ "loss": 0.4785,
687
+ "step": 109
688
+ },
689
+ {
690
+ "epoch": 3.12,
691
+ "learning_rate": 1.802123192755044e-05,
692
+ "loss": 0.4789,
693
+ "step": 110
694
+ },
695
+ {
696
+ "epoch": 3.14,
697
+ "learning_rate": 1.798635510047293e-05,
698
+ "loss": 0.4795,
699
+ "step": 111
700
+ },
701
+ {
702
+ "epoch": 3.16,
703
+ "learning_rate": 1.795120796494848e-05,
704
+ "loss": 0.478,
705
+ "step": 112
706
+ },
707
+ {
708
+ "epoch": 3.18,
709
+ "learning_rate": 1.7915791710577035e-05,
710
+ "loss": 0.477,
711
+ "step": 113
712
+ },
713
+ {
714
+ "epoch": 3.19,
715
+ "learning_rate": 1.788010753606722e-05,
716
+ "loss": 0.4746,
717
+ "step": 114
718
+ },
719
+ {
720
+ "epoch": 3.21,
721
+ "learning_rate": 1.784415664919576e-05,
722
+ "loss": 0.4811,
723
+ "step": 115
724
+ },
725
+ {
726
+ "epoch": 3.23,
727
+ "learning_rate": 1.7807940266766595e-05,
728
+ "loss": 0.477,
729
+ "step": 116
730
+ },
731
+ {
732
+ "epoch": 3.25,
733
+ "learning_rate": 1.777145961456971e-05,
734
+ "loss": 0.4744,
735
+ "step": 117
736
+ },
737
+ {
738
+ "epoch": 3.27,
739
+ "learning_rate": 1.7734715927339642e-05,
740
+ "loss": 0.4753,
741
+ "step": 118
742
+ },
743
+ {
744
+ "epoch": 3.29,
745
+ "learning_rate": 1.769771044871368e-05,
746
+ "loss": 0.4754,
747
+ "step": 119
748
+ },
749
+ {
750
+ "epoch": 3.3,
751
+ "learning_rate": 1.766044443118978e-05,
752
+ "loss": 0.4773,
753
+ "step": 120
754
+ },
755
+ {
756
+ "epoch": 3.32,
757
+ "learning_rate": 1.7622919136084183e-05,
758
+ "loss": 0.4743,
759
+ "step": 121
760
+ },
761
+ {
762
+ "epoch": 3.34,
763
+ "learning_rate": 1.7585135833488692e-05,
764
+ "loss": 0.479,
765
+ "step": 122
766
+ },
767
+ {
768
+ "epoch": 3.36,
769
+ "learning_rate": 1.7547095802227723e-05,
770
+ "loss": 0.4747,
771
+ "step": 123
772
+ },
773
+ {
774
+ "epoch": 3.38,
775
+ "learning_rate": 1.7508800329814993e-05,
776
+ "loss": 0.4763,
777
+ "step": 124
778
+ },
779
+ {
780
+ "epoch": 3.4,
781
+ "learning_rate": 1.7470250712409963e-05,
782
+ "loss": 0.47,
783
+ "step": 125
784
+ },
785
+ {
786
+ "epoch": 3.42,
787
+ "learning_rate": 1.7431448254773943e-05,
788
+ "loss": 0.4741,
789
+ "step": 126
790
+ },
791
+ {
792
+ "epoch": 3.43,
793
+ "learning_rate": 1.739239427022596e-05,
794
+ "loss": 0.4753,
795
+ "step": 127
796
+ },
797
+ {
798
+ "epoch": 3.45,
799
+ "learning_rate": 1.735309008059829e-05,
800
+ "loss": 0.4738,
801
+ "step": 128
802
+ },
803
+ {
804
+ "epoch": 3.47,
805
+ "learning_rate": 1.7313537016191706e-05,
806
+ "loss": 0.4769,
807
+ "step": 129
808
+ },
809
+ {
810
+ "epoch": 3.49,
811
+ "learning_rate": 1.7273736415730488e-05,
812
+ "loss": 0.4749,
813
+ "step": 130
814
+ },
815
+ {
816
+ "epoch": 3.51,
817
+ "learning_rate": 1.723368962631708e-05,
818
+ "loss": 0.4703,
819
+ "step": 131
820
+ },
821
+ {
822
+ "epoch": 3.53,
823
+ "learning_rate": 1.7193398003386514e-05,
824
+ "loss": 0.4722,
825
+ "step": 132
826
+ },
827
+ {
828
+ "epoch": 3.54,
829
+ "learning_rate": 1.7152862910660516e-05,
830
+ "loss": 0.474,
831
+ "step": 133
832
+ },
833
+ {
834
+ "epoch": 3.56,
835
+ "learning_rate": 1.711208572010137e-05,
836
+ "loss": 0.4721,
837
+ "step": 134
838
+ },
839
+ {
840
+ "epoch": 3.58,
841
+ "learning_rate": 1.7071067811865477e-05,
842
+ "loss": 0.4755,
843
+ "step": 135
844
+ },
845
+ {
846
+ "epoch": 3.6,
847
+ "learning_rate": 1.702981057425662e-05,
848
+ "loss": 0.4717,
849
+ "step": 136
850
+ },
851
+ {
852
+ "epoch": 3.62,
853
+ "learning_rate": 1.6988315403679e-05,
854
+ "loss": 0.4712,
855
+ "step": 137
856
+ },
857
+ {
858
+ "epoch": 3.62,
859
+ "eval_loss": 0.470627099275589,
860
+ "eval_runtime": 23.8372,
861
+ "eval_samples_per_second": 8.39,
862
+ "eval_steps_per_second": 0.545,
863
+ "step": 137
864
+ },
865
+ {
866
+ "epoch": 4.0,
867
+ "learning_rate": 1.6946583704589973e-05,
868
+ "loss": 0.4679,
869
+ "step": 138
870
+ },
871
+ {
872
+ "epoch": 4.02,
873
+ "learning_rate": 1.6904616889452497e-05,
874
+ "loss": 0.4707,
875
+ "step": 139
876
+ },
877
+ {
878
+ "epoch": 4.04,
879
+ "learning_rate": 1.686241637868734e-05,
880
+ "loss": 0.4683,
881
+ "step": 140
882
+ },
883
+ {
884
+ "epoch": 4.06,
885
+ "learning_rate": 1.6819983600624986e-05,
886
+ "loss": 0.4715,
887
+ "step": 141
888
+ },
889
+ {
890
+ "epoch": 4.08,
891
+ "learning_rate": 1.6777319991457325e-05,
892
+ "loss": 0.4721,
893
+ "step": 142
894
+ },
895
+ {
896
+ "epoch": 4.09,
897
+ "learning_rate": 1.6734426995189003e-05,
898
+ "loss": 0.4719,
899
+ "step": 143
900
+ },
901
+ {
902
+ "epoch": 4.11,
903
+ "learning_rate": 1.6691306063588583e-05,
904
+ "loss": 0.4662,
905
+ "step": 144
906
+ },
907
+ {
908
+ "epoch": 4.13,
909
+ "learning_rate": 1.6647958656139377e-05,
910
+ "loss": 0.4684,
911
+ "step": 145
912
+ },
913
+ {
914
+ "epoch": 4.15,
915
+ "learning_rate": 1.6604386239990077e-05,
916
+ "loss": 0.4671,
917
+ "step": 146
918
+ },
919
+ {
920
+ "epoch": 4.17,
921
+ "learning_rate": 1.6560590289905074e-05,
922
+ "loss": 0.4671,
923
+ "step": 147
924
+ },
925
+ {
926
+ "epoch": 4.19,
927
+ "learning_rate": 1.6516572288214555e-05,
928
+ "loss": 0.4642,
929
+ "step": 148
930
+ },
931
+ {
932
+ "epoch": 4.2,
933
+ "learning_rate": 1.6472333724764326e-05,
934
+ "loss": 0.468,
935
+ "step": 149
936
+ },
937
+ {
938
+ "epoch": 4.22,
939
+ "learning_rate": 1.6427876096865394e-05,
940
+ "loss": 0.4699,
941
+ "step": 150
942
+ },
943
+ {
944
+ "epoch": 4.24,
945
+ "learning_rate": 1.6383200909243285e-05,
946
+ "loss": 0.4646,
947
+ "step": 151
948
+ },
949
+ {
950
+ "epoch": 4.26,
951
+ "learning_rate": 1.63383096739871e-05,
952
+ "loss": 0.4626,
953
+ "step": 152
954
+ },
955
+ {
956
+ "epoch": 4.28,
957
+ "learning_rate": 1.6293203910498375e-05,
958
+ "loss": 0.4665,
959
+ "step": 153
960
+ },
961
+ {
962
+ "epoch": 4.3,
963
+ "learning_rate": 1.6247885145439602e-05,
964
+ "loss": 0.4654,
965
+ "step": 154
966
+ },
967
+ {
968
+ "epoch": 4.31,
969
+ "learning_rate": 1.6202354912682602e-05,
970
+ "loss": 0.4695,
971
+ "step": 155
972
+ },
973
+ {
974
+ "epoch": 4.33,
975
+ "learning_rate": 1.6156614753256583e-05,
976
+ "loss": 0.4646,
977
+ "step": 156
978
+ },
979
+ {
980
+ "epoch": 4.35,
981
+ "learning_rate": 1.6110666215296e-05,
982
+ "loss": 0.4686,
983
+ "step": 157
984
+ },
985
+ {
986
+ "epoch": 4.37,
987
+ "learning_rate": 1.6064510853988137e-05,
988
+ "loss": 0.4635,
989
+ "step": 158
990
+ },
991
+ {
992
+ "epoch": 4.39,
993
+ "learning_rate": 1.6018150231520486e-05,
994
+ "loss": 0.4638,
995
+ "step": 159
996
+ },
997
+ {
998
+ "epoch": 4.41,
999
+ "learning_rate": 1.5971585917027864e-05,
1000
+ "loss": 0.4666,
1001
+ "step": 160
1002
+ },
1003
+ {
1004
+ "epoch": 4.42,
1005
+ "learning_rate": 1.592481948653931e-05,
1006
+ "loss": 0.4668,
1007
+ "step": 161
1008
+ },
1009
+ {
1010
+ "epoch": 4.44,
1011
+ "learning_rate": 1.5877852522924733e-05,
1012
+ "loss": 0.4656,
1013
+ "step": 162
1014
+ },
1015
+ {
1016
+ "epoch": 4.46,
1017
+ "learning_rate": 1.5830686615841348e-05,
1018
+ "loss": 0.4652,
1019
+ "step": 163
1020
+ },
1021
+ {
1022
+ "epoch": 4.48,
1023
+ "learning_rate": 1.5783323361679865e-05,
1024
+ "loss": 0.4651,
1025
+ "step": 164
1026
+ },
1027
+ {
1028
+ "epoch": 4.5,
1029
+ "learning_rate": 1.573576436351046e-05,
1030
+ "loss": 0.4657,
1031
+ "step": 165
1032
+ },
1033
+ {
1034
+ "epoch": 4.52,
1035
+ "learning_rate": 1.568801123102852e-05,
1036
+ "loss": 0.4654,
1037
+ "step": 166
1038
+ },
1039
+ {
1040
+ "epoch": 4.54,
1041
+ "learning_rate": 1.5640065580500146e-05,
1042
+ "loss": 0.4633,
1043
+ "step": 167
1044
+ },
1045
+ {
1046
+ "epoch": 4.55,
1047
+ "learning_rate": 1.5591929034707468e-05,
1048
+ "loss": 0.4628,
1049
+ "step": 168
1050
+ },
1051
+ {
1052
+ "epoch": 4.57,
1053
+ "learning_rate": 1.5543603222893718e-05,
1054
+ "loss": 0.4691,
1055
+ "step": 169
1056
+ },
1057
+ {
1058
+ "epoch": 4.59,
1059
+ "learning_rate": 1.5495089780708062e-05,
1060
+ "loss": 0.4644,
1061
+ "step": 170
1062
+ },
1063
+ {
1064
+ "epoch": 4.61,
1065
+ "learning_rate": 1.5446390350150272e-05,
1066
+ "loss": 0.4633,
1067
+ "step": 171
1068
+ },
1069
+ {
1070
+ "epoch": 4.63,
1071
+ "learning_rate": 1.539750657951513e-05,
1072
+ "loss": 0.4615,
1073
+ "step": 172
1074
+ },
1075
+ {
1076
+ "epoch": 4.63,
1077
+ "eval_loss": 0.4630681276321411,
1078
+ "eval_runtime": 23.8164,
1079
+ "eval_samples_per_second": 8.398,
1080
+ "eval_steps_per_second": 0.546,
1081
+ "step": 172
1082
+ },
1083
+ {
1084
+ "epoch": 5.01,
1085
+ "learning_rate": 1.5348440123336647e-05,
1086
+ "loss": 0.4604,
1087
+ "step": 173
1088
+ },
1089
+ {
1090
+ "epoch": 5.03,
1091
+ "learning_rate": 1.529919264233205e-05,
1092
+ "loss": 0.4624,
1093
+ "step": 174
1094
+ },
1095
+ {
1096
+ "epoch": 5.05,
1097
+ "learning_rate": 1.5249765803345602e-05,
1098
+ "loss": 0.4629,
1099
+ "step": 175
1100
+ },
1101
+ {
1102
+ "epoch": 5.07,
1103
+ "learning_rate": 1.5200161279292154e-05,
1104
+ "loss": 0.4616,
1105
+ "step": 176
1106
+ },
1107
+ {
1108
+ "epoch": 5.09,
1109
+ "learning_rate": 1.5150380749100545e-05,
1110
+ "loss": 0.4681,
1111
+ "step": 177
1112
+ },
1113
+ {
1114
+ "epoch": 5.1,
1115
+ "learning_rate": 1.5100425897656754e-05,
1116
+ "loss": 0.4653,
1117
+ "step": 178
1118
+ },
1119
+ {
1120
+ "epoch": 5.12,
1121
+ "learning_rate": 1.5050298415746903e-05,
1122
+ "loss": 0.459,
1123
+ "step": 179
1124
+ },
1125
+ {
1126
+ "epoch": 5.14,
1127
+ "learning_rate": 1.5000000000000002e-05,
1128
+ "loss": 0.4589,
1129
+ "step": 180
1130
+ },
1131
+ {
1132
+ "epoch": 5.16,
1133
+ "learning_rate": 1.4949532352830543e-05,
1134
+ "loss": 0.4627,
1135
+ "step": 181
1136
+ },
1137
+ {
1138
+ "epoch": 5.18,
1139
+ "learning_rate": 1.4898897182380872e-05,
1140
+ "loss": 0.4579,
1141
+ "step": 182
1142
+ },
1143
+ {
1144
+ "epoch": 5.2,
1145
+ "learning_rate": 1.4848096202463373e-05,
1146
+ "loss": 0.4612,
1147
+ "step": 183
1148
+ },
1149
+ {
1150
+ "epoch": 5.21,
1151
+ "learning_rate": 1.4797131132502464e-05,
1152
+ "loss": 0.4598,
1153
+ "step": 184
1154
+ },
1155
+ {
1156
+ "epoch": 5.23,
1157
+ "learning_rate": 1.4746003697476406e-05,
1158
+ "loss": 0.4615,
1159
+ "step": 185
1160
+ },
1161
+ {
1162
+ "epoch": 5.25,
1163
+ "learning_rate": 1.469471562785891e-05,
1164
+ "loss": 0.4571,
1165
+ "step": 186
1166
+ },
1167
+ {
1168
+ "epoch": 5.27,
1169
+ "learning_rate": 1.4643268659560571e-05,
1170
+ "loss": 0.4589,
1171
+ "step": 187
1172
+ },
1173
+ {
1174
+ "epoch": 5.29,
1175
+ "learning_rate": 1.4591664533870118e-05,
1176
+ "loss": 0.4608,
1177
+ "step": 188
1178
+ },
1179
+ {
1180
+ "epoch": 5.31,
1181
+ "learning_rate": 1.4539904997395468e-05,
1182
+ "loss": 0.4606,
1183
+ "step": 189
1184
+ },
1185
+ {
1186
+ "epoch": 5.32,
1187
+ "learning_rate": 1.4487991802004625e-05,
1188
+ "loss": 0.4607,
1189
+ "step": 190
1190
+ },
1191
+ {
1192
+ "epoch": 5.34,
1193
+ "learning_rate": 1.4435926704766364e-05,
1194
+ "loss": 0.4605,
1195
+ "step": 191
1196
+ },
1197
+ {
1198
+ "epoch": 5.36,
1199
+ "learning_rate": 1.4383711467890776e-05,
1200
+ "loss": 0.4598,
1201
+ "step": 192
1202
+ },
1203
+ {
1204
+ "epoch": 5.38,
1205
+ "learning_rate": 1.4331347858669631e-05,
1206
+ "loss": 0.4588,
1207
+ "step": 193
1208
+ },
1209
+ {
1210
+ "epoch": 5.4,
1211
+ "learning_rate": 1.4278837649416543e-05,
1212
+ "loss": 0.4575,
1213
+ "step": 194
1214
+ },
1215
+ {
1216
+ "epoch": 5.42,
1217
+ "learning_rate": 1.4226182617406996e-05,
1218
+ "loss": 0.46,
1219
+ "step": 195
1220
+ },
1221
+ {
1222
+ "epoch": 5.43,
1223
+ "learning_rate": 1.417338454481818e-05,
1224
+ "loss": 0.4605,
1225
+ "step": 196
1226
+ },
1227
+ {
1228
+ "epoch": 5.45,
1229
+ "learning_rate": 1.4120445218668687e-05,
1230
+ "loss": 0.4606,
1231
+ "step": 197
1232
+ },
1233
+ {
1234
+ "epoch": 5.47,
1235
+ "learning_rate": 1.4067366430758004e-05,
1236
+ "loss": 0.4584,
1237
+ "step": 198
1238
+ },
1239
+ {
1240
+ "epoch": 5.49,
1241
+ "learning_rate": 1.4014149977605893e-05,
1242
+ "loss": 0.4623,
1243
+ "step": 199
1244
+ },
1245
+ {
1246
+ "epoch": 5.51,
1247
+ "learning_rate": 1.396079766039157e-05,
1248
+ "loss": 0.4586,
1249
+ "step": 200
1250
+ },
1251
+ {
1252
+ "epoch": 5.53,
1253
+ "learning_rate": 1.3907311284892737e-05,
1254
+ "loss": 0.457,
1255
+ "step": 201
1256
+ },
1257
+ {
1258
+ "epoch": 5.54,
1259
+ "learning_rate": 1.3853692661424485e-05,
1260
+ "loss": 0.4577,
1261
+ "step": 202
1262
+ },
1263
+ {
1264
+ "epoch": 5.56,
1265
+ "learning_rate": 1.3799943604777993e-05,
1266
+ "loss": 0.4604,
1267
+ "step": 203
1268
+ },
1269
+ {
1270
+ "epoch": 5.58,
1271
+ "learning_rate": 1.3746065934159123e-05,
1272
+ "loss": 0.4603,
1273
+ "step": 204
1274
+ },
1275
+ {
1276
+ "epoch": 5.6,
1277
+ "learning_rate": 1.3692061473126845e-05,
1278
+ "loss": 0.46,
1279
+ "step": 205
1280
+ },
1281
+ {
1282
+ "epoch": 5.62,
1283
+ "learning_rate": 1.3637932049531517e-05,
1284
+ "loss": 0.4584,
1285
+ "step": 206
1286
+ },
1287
+ {
1288
+ "epoch": 5.62,
1289
+ "eval_loss": 0.4574292004108429,
1290
+ "eval_runtime": 23.8087,
1291
+ "eval_samples_per_second": 8.4,
1292
+ "eval_steps_per_second": 0.546,
1293
+ "step": 206
1294
+ },
1295
+ {
1296
+ "epoch": 6.0,
1297
+ "learning_rate": 1.3583679495453e-05,
1298
+ "loss": 0.4533,
1299
+ "step": 207
1300
+ },
1301
+ {
1302
+ "epoch": 6.02,
1303
+ "learning_rate": 1.3529305647138689e-05,
1304
+ "loss": 0.456,
1305
+ "step": 208
1306
+ },
1307
+ {
1308
+ "epoch": 6.04,
1309
+ "learning_rate": 1.3474812344941315e-05,
1310
+ "loss": 0.4592,
1311
+ "step": 209
1312
+ },
1313
+ {
1314
+ "epoch": 6.06,
1315
+ "learning_rate": 1.342020143325669e-05,
1316
+ "loss": 0.4559,
1317
+ "step": 210
1318
+ },
1319
+ {
1320
+ "epoch": 6.08,
1321
+ "learning_rate": 1.3365474760461265e-05,
1322
+ "loss": 0.4606,
1323
+ "step": 211
1324
+ },
1325
+ {
1326
+ "epoch": 6.1,
1327
+ "learning_rate": 1.3310634178849583e-05,
1328
+ "loss": 0.4607,
1329
+ "step": 212
1330
+ },
1331
+ {
1332
+ "epoch": 6.11,
1333
+ "learning_rate": 1.3255681544571568e-05,
1334
+ "loss": 0.4557,
1335
+ "step": 213
1336
+ },
1337
+ {
1338
+ "epoch": 6.13,
1339
+ "learning_rate": 1.3200618717569716e-05,
1340
+ "loss": 0.4549,
1341
+ "step": 214
1342
+ },
1343
+ {
1344
+ "epoch": 6.15,
1345
+ "learning_rate": 1.3145447561516138e-05,
1346
+ "loss": 0.4555,
1347
+ "step": 215
1348
+ },
1349
+ {
1350
+ "epoch": 6.17,
1351
+ "learning_rate": 1.3090169943749475e-05,
1352
+ "loss": 0.4557,
1353
+ "step": 216
1354
+ },
1355
+ {
1356
+ "epoch": 6.19,
1357
+ "learning_rate": 1.3034787735211708e-05,
1358
+ "loss": 0.4536,
1359
+ "step": 217
1360
+ },
1361
+ {
1362
+ "epoch": 6.21,
1363
+ "learning_rate": 1.297930281038482e-05,
1364
+ "loss": 0.459,
1365
+ "step": 218
1366
+ },
1367
+ {
1368
+ "epoch": 6.22,
1369
+ "learning_rate": 1.2923717047227368e-05,
1370
+ "loss": 0.4543,
1371
+ "step": 219
1372
+ },
1373
+ {
1374
+ "epoch": 6.24,
1375
+ "learning_rate": 1.2868032327110904e-05,
1376
+ "loss": 0.4528,
1377
+ "step": 220
1378
+ },
1379
+ {
1380
+ "epoch": 6.26,
1381
+ "learning_rate": 1.2812250534756307e-05,
1382
+ "loss": 0.4574,
1383
+ "step": 221
1384
+ },
1385
+ {
1386
+ "epoch": 6.28,
1387
+ "learning_rate": 1.2756373558169992e-05,
1388
+ "loss": 0.4533,
1389
+ "step": 222
1390
+ },
1391
+ {
1392
+ "epoch": 6.3,
1393
+ "learning_rate": 1.270040328858001e-05,
1394
+ "loss": 0.4558,
1395
+ "step": 223
1396
+ },
1397
+ {
1398
+ "epoch": 6.32,
1399
+ "learning_rate": 1.2644341620372025e-05,
1400
+ "loss": 0.4584,
1401
+ "step": 224
1402
+ },
1403
+ {
1404
+ "epoch": 6.33,
1405
+ "learning_rate": 1.2588190451025209e-05,
1406
+ "loss": 0.4547,
1407
+ "step": 225
1408
+ },
1409
+ {
1410
+ "epoch": 6.35,
1411
+ "learning_rate": 1.253195168104802e-05,
1412
+ "loss": 0.4568,
1413
+ "step": 226
1414
+ },
1415
+ {
1416
+ "epoch": 6.37,
1417
+ "learning_rate": 1.2475627213913861e-05,
1418
+ "loss": 0.4547,
1419
+ "step": 227
1420
+ },
1421
+ {
1422
+ "epoch": 6.39,
1423
+ "learning_rate": 1.2419218955996677e-05,
1424
+ "loss": 0.4523,
1425
+ "step": 228
1426
+ },
1427
+ {
1428
+ "epoch": 6.41,
1429
+ "learning_rate": 1.2362728816506418e-05,
1430
+ "loss": 0.4556,
1431
+ "step": 229
1432
+ },
1433
+ {
1434
+ "epoch": 6.43,
1435
+ "learning_rate": 1.2306158707424402e-05,
1436
+ "loss": 0.4561,
1437
+ "step": 230
1438
+ },
1439
+ {
1440
+ "epoch": 6.44,
1441
+ "learning_rate": 1.2249510543438652e-05,
1442
+ "loss": 0.4548,
1443
+ "step": 231
1444
+ },
1445
+ {
1446
+ "epoch": 6.46,
1447
+ "learning_rate": 1.2192786241879033e-05,
1448
+ "loss": 0.4569,
1449
+ "step": 232
1450
+ },
1451
+ {
1452
+ "epoch": 6.48,
1453
+ "learning_rate": 1.2135987722652403e-05,
1454
+ "loss": 0.4561,
1455
+ "step": 233
1456
+ },
1457
+ {
1458
+ "epoch": 6.5,
1459
+ "learning_rate": 1.2079116908177592e-05,
1460
+ "loss": 0.4535,
1461
+ "step": 234
1462
+ },
1463
+ {
1464
+ "epoch": 6.52,
1465
+ "learning_rate": 1.2022175723320382e-05,
1466
+ "loss": 0.4538,
1467
+ "step": 235
1468
+ },
1469
+ {
1470
+ "epoch": 6.54,
1471
+ "learning_rate": 1.1965166095328302e-05,
1472
+ "loss": 0.454,
1473
+ "step": 236
1474
+ },
1475
+ {
1476
+ "epoch": 6.55,
1477
+ "learning_rate": 1.190808995376545e-05,
1478
+ "loss": 0.4555,
1479
+ "step": 237
1480
+ },
1481
+ {
1482
+ "epoch": 6.57,
1483
+ "learning_rate": 1.1850949230447146e-05,
1484
+ "loss": 0.4567,
1485
+ "step": 238
1486
+ },
1487
+ {
1488
+ "epoch": 6.59,
1489
+ "learning_rate": 1.1793745859374575e-05,
1490
+ "loss": 0.4571,
1491
+ "step": 239
1492
+ },
1493
+ {
1494
+ "epoch": 6.61,
1495
+ "learning_rate": 1.1736481776669307e-05,
1496
+ "loss": 0.4535,
1497
+ "step": 240
1498
+ },
1499
+ {
1500
+ "epoch": 6.63,
1501
+ "learning_rate": 1.1679158920507773e-05,
1502
+ "loss": 0.4524,
1503
+ "step": 241
1504
+ },
1505
+ {
1506
+ "epoch": 6.63,
1507
+ "eval_loss": 0.45291438698768616,
1508
+ "eval_runtime": 23.8049,
1509
+ "eval_samples_per_second": 8.402,
1510
+ "eval_steps_per_second": 0.546,
1511
+ "step": 241
1512
+ },
1513
+ {
1514
+ "epoch": 7.01,
1515
+ "learning_rate": 1.1621779231055677e-05,
1516
+ "loss": 0.4501,
1517
+ "step": 242
1518
+ },
1519
+ {
1520
+ "epoch": 7.03,
1521
+ "learning_rate": 1.156434465040231e-05,
1522
+ "loss": 0.4529,
1523
+ "step": 243
1524
+ },
1525
+ {
1526
+ "epoch": 7.05,
1527
+ "learning_rate": 1.1506857122494832e-05,
1528
+ "loss": 0.4559,
1529
+ "step": 244
1530
+ },
1531
+ {
1532
+ "epoch": 7.07,
1533
+ "learning_rate": 1.1449318593072468e-05,
1534
+ "loss": 0.4523,
1535
+ "step": 245
1536
+ },
1537
+ {
1538
+ "epoch": 7.09,
1539
+ "learning_rate": 1.1391731009600655e-05,
1540
+ "loss": 0.4572,
1541
+ "step": 246
1542
+ },
1543
+ {
1544
+ "epoch": 7.11,
1545
+ "learning_rate": 1.1334096321205129e-05,
1546
+ "loss": 0.4543,
1547
+ "step": 247
1548
+ },
1549
+ {
1550
+ "epoch": 7.12,
1551
+ "learning_rate": 1.127641647860595e-05,
1552
+ "loss": 0.452,
1553
+ "step": 248
1554
+ },
1555
+ {
1556
+ "epoch": 7.14,
1557
+ "learning_rate": 1.1218693434051475e-05,
1558
+ "loss": 0.4529,
1559
+ "step": 249
1560
+ },
1561
+ {
1562
+ "epoch": 7.16,
1563
+ "learning_rate": 1.1160929141252303e-05,
1564
+ "loss": 0.4528,
1565
+ "step": 250
1566
+ },
1567
+ {
1568
+ "epoch": 7.18,
1569
+ "learning_rate": 1.110312555531512e-05,
1570
+ "loss": 0.4505,
1571
+ "step": 251
1572
+ },
1573
+ {
1574
+ "epoch": 7.2,
1575
+ "learning_rate": 1.1045284632676535e-05,
1576
+ "loss": 0.4491,
1577
+ "step": 252
1578
+ },
1579
+ {
1580
+ "epoch": 7.22,
1581
+ "learning_rate": 1.0987408331036879e-05,
1582
+ "loss": 0.4544,
1583
+ "step": 253
1584
+ },
1585
+ {
1586
+ "epoch": 7.23,
1587
+ "learning_rate": 1.0929498609293925e-05,
1588
+ "loss": 0.4509,
1589
+ "step": 254
1590
+ },
1591
+ {
1592
+ "epoch": 7.25,
1593
+ "learning_rate": 1.0871557427476585e-05,
1594
+ "loss": 0.4517,
1595
+ "step": 255
1596
+ },
1597
+ {
1598
+ "epoch": 7.27,
1599
+ "learning_rate": 1.0813586746678584e-05,
1600
+ "loss": 0.4505,
1601
+ "step": 256
1602
+ },
1603
+ {
1604
+ "epoch": 7.29,
1605
+ "learning_rate": 1.0755588528992082e-05,
1606
+ "loss": 0.4508,
1607
+ "step": 257
1608
+ },
1609
+ {
1610
+ "epoch": 7.31,
1611
+ "learning_rate": 1.0697564737441254e-05,
1612
+ "loss": 0.4537,
1613
+ "step": 258
1614
+ },
1615
+ {
1616
+ "epoch": 7.33,
1617
+ "learning_rate": 1.0639517335915857e-05,
1618
+ "loss": 0.4543,
1619
+ "step": 259
1620
+ },
1621
+ {
1622
+ "epoch": 7.34,
1623
+ "learning_rate": 1.0581448289104759e-05,
1624
+ "loss": 0.4532,
1625
+ "step": 260
1626
+ },
1627
+ {
1628
+ "epoch": 7.36,
1629
+ "learning_rate": 1.0523359562429441e-05,
1630
+ "loss": 0.4532,
1631
+ "step": 261
1632
+ },
1633
+ {
1634
+ "epoch": 7.38,
1635
+ "learning_rate": 1.046525312197747e-05,
1636
+ "loss": 0.4507,
1637
+ "step": 262
1638
+ },
1639
+ {
1640
+ "epoch": 7.4,
1641
+ "learning_rate": 1.040713093443596e-05,
1642
+ "loss": 0.4494,
1643
+ "step": 263
1644
+ },
1645
+ {
1646
+ "epoch": 7.42,
1647
+ "learning_rate": 1.0348994967025012e-05,
1648
+ "loss": 0.4531,
1649
+ "step": 264
1650
+ },
1651
+ {
1652
+ "epoch": 7.44,
1653
+ "learning_rate": 1.0290847187431115e-05,
1654
+ "loss": 0.4525,
1655
+ "step": 265
1656
+ },
1657
+ {
1658
+ "epoch": 7.45,
1659
+ "learning_rate": 1.0232689563740563e-05,
1660
+ "loss": 0.452,
1661
+ "step": 266
1662
+ },
1663
+ {
1664
+ "epoch": 7.47,
1665
+ "learning_rate": 1.0174524064372837e-05,
1666
+ "loss": 0.4527,
1667
+ "step": 267
1668
+ },
1669
+ {
1670
+ "epoch": 7.49,
1671
+ "learning_rate": 1.0116352658013973e-05,
1672
+ "loss": 0.4532,
1673
+ "step": 268
1674
+ },
1675
+ {
1676
+ "epoch": 7.51,
1677
+ "learning_rate": 1.005817731354994e-05,
1678
+ "loss": 0.4507,
1679
+ "step": 269
1680
+ },
1681
+ {
1682
+ "epoch": 7.53,
1683
+ "learning_rate": 1e-05,
1684
+ "loss": 0.4504,
1685
+ "step": 270
1686
+ },
1687
+ {
1688
+ "epoch": 7.55,
1689
+ "learning_rate": 9.941822686450061e-06,
1690
+ "loss": 0.4536,
1691
+ "step": 271
1692
+ },
1693
+ {
1694
+ "epoch": 7.56,
1695
+ "learning_rate": 9.883647341986032e-06,
1696
+ "loss": 0.4515,
1697
+ "step": 272
1698
+ },
1699
+ {
1700
+ "epoch": 7.58,
1701
+ "learning_rate": 9.825475935627165e-06,
1702
+ "loss": 0.4537,
1703
+ "step": 273
1704
+ },
1705
+ {
1706
+ "epoch": 7.6,
1707
+ "learning_rate": 9.767310436259438e-06,
1708
+ "loss": 0.4527,
1709
+ "step": 274
1710
+ },
1711
+ {
1712
+ "epoch": 7.62,
1713
+ "learning_rate": 9.709152812568886e-06,
1714
+ "loss": 0.4507,
1715
+ "step": 275
1716
+ },
1717
+ {
1718
+ "epoch": 7.62,
1719
+ "eval_loss": 0.4502379596233368,
1720
+ "eval_runtime": 23.8128,
1721
+ "eval_samples_per_second": 8.399,
1722
+ "eval_steps_per_second": 0.546,
1723
+ "step": 275
1724
+ },
1725
+ {
1726
+ "epoch": 8.0,
1727
+ "learning_rate": 9.651005032974994e-06,
1728
+ "loss": 0.4474,
1729
+ "step": 276
1730
+ },
1731
+ {
1732
+ "epoch": 8.02,
1733
+ "learning_rate": 9.592869065564043e-06,
1734
+ "loss": 0.4489,
1735
+ "step": 277
1736
+ },
1737
+ {
1738
+ "epoch": 8.04,
1739
+ "learning_rate": 9.534746878022533e-06,
1740
+ "loss": 0.4527,
1741
+ "step": 278
1742
+ },
1743
+ {
1744
+ "epoch": 8.06,
1745
+ "learning_rate": 9.476640437570562e-06,
1746
+ "loss": 0.4495,
1747
+ "step": 279
1748
+ },
1749
+ {
1750
+ "epoch": 8.08,
1751
+ "learning_rate": 9.418551710895243e-06,
1752
+ "loss": 0.4567,
1753
+ "step": 280
1754
+ },
1755
+ {
1756
+ "epoch": 8.1,
1757
+ "learning_rate": 9.360482664084144e-06,
1758
+ "loss": 0.4523,
1759
+ "step": 281
1760
+ },
1761
+ {
1762
+ "epoch": 8.11,
1763
+ "learning_rate": 9.302435262558748e-06,
1764
+ "loss": 0.4485,
1765
+ "step": 282
1766
+ },
1767
+ {
1768
+ "epoch": 8.13,
1769
+ "learning_rate": 9.244411471007923e-06,
1770
+ "loss": 0.4492,
1771
+ "step": 283
1772
+ },
1773
+ {
1774
+ "epoch": 8.15,
1775
+ "learning_rate": 9.18641325332142e-06,
1776
+ "loss": 0.4508,
1777
+ "step": 284
1778
+ },
1779
+ {
1780
+ "epoch": 8.17,
1781
+ "learning_rate": 9.128442572523418e-06,
1782
+ "loss": 0.4486,
1783
+ "step": 285
1784
+ },
1785
+ {
1786
+ "epoch": 8.19,
1787
+ "learning_rate": 9.07050139070608e-06,
1788
+ "loss": 0.4495,
1789
+ "step": 286
1790
+ },
1791
+ {
1792
+ "epoch": 8.21,
1793
+ "learning_rate": 9.012591668963123e-06,
1794
+ "loss": 0.4521,
1795
+ "step": 287
1796
+ },
1797
+ {
1798
+ "epoch": 8.23,
1799
+ "learning_rate": 8.954715367323468e-06,
1800
+ "loss": 0.4472,
1801
+ "step": 288
1802
+ },
1803
+ {
1804
+ "epoch": 8.24,
1805
+ "learning_rate": 8.896874444684882e-06,
1806
+ "loss": 0.4494,
1807
+ "step": 289
1808
+ },
1809
+ {
1810
+ "epoch": 8.26,
1811
+ "learning_rate": 8.839070858747697e-06,
1812
+ "loss": 0.4482,
1813
+ "step": 290
1814
+ },
1815
+ {
1816
+ "epoch": 8.28,
1817
+ "learning_rate": 8.781306565948528e-06,
1818
+ "loss": 0.4488,
1819
+ "step": 291
1820
+ },
1821
+ {
1822
+ "epoch": 8.3,
1823
+ "learning_rate": 8.723583521394054e-06,
1824
+ "loss": 0.4527,
1825
+ "step": 292
1826
+ },
1827
+ {
1828
+ "epoch": 8.32,
1829
+ "learning_rate": 8.665903678794873e-06,
1830
+ "loss": 0.4499,
1831
+ "step": 293
1832
+ },
1833
+ {
1834
+ "epoch": 8.34,
1835
+ "learning_rate": 8.60826899039935e-06,
1836
+ "loss": 0.4511,
1837
+ "step": 294
1838
+ },
1839
+ {
1840
+ "epoch": 8.35,
1841
+ "learning_rate": 8.550681406927534e-06,
1842
+ "loss": 0.4528,
1843
+ "step": 295
1844
+ },
1845
+ {
1846
+ "epoch": 8.37,
1847
+ "learning_rate": 8.49314287750517e-06,
1848
+ "loss": 0.4465,
1849
+ "step": 296
1850
+ },
1851
+ {
1852
+ "epoch": 8.39,
1853
+ "learning_rate": 8.43565534959769e-06,
1854
+ "loss": 0.4467,
1855
+ "step": 297
1856
+ },
1857
+ {
1858
+ "epoch": 8.41,
1859
+ "learning_rate": 8.378220768944328e-06,
1860
+ "loss": 0.4519,
1861
+ "step": 298
1862
+ },
1863
+ {
1864
+ "epoch": 8.43,
1865
+ "learning_rate": 8.32084107949223e-06,
1866
+ "loss": 0.4517,
1867
+ "step": 299
1868
+ },
1869
+ {
1870
+ "epoch": 8.45,
1871
+ "learning_rate": 8.263518223330698e-06,
1872
+ "loss": 0.4487,
1873
+ "step": 300
1874
+ },
1875
+ {
1876
+ "epoch": 8.46,
1877
+ "learning_rate": 8.206254140625425e-06,
1878
+ "loss": 0.4514,
1879
+ "step": 301
1880
+ },
1881
+ {
1882
+ "epoch": 8.48,
1883
+ "learning_rate": 8.149050769552856e-06,
1884
+ "loss": 0.4524,
1885
+ "step": 302
1886
+ },
1887
+ {
1888
+ "epoch": 8.5,
1889
+ "learning_rate": 8.091910046234552e-06,
1890
+ "loss": 0.4493,
1891
+ "step": 303
1892
+ },
1893
+ {
1894
+ "epoch": 8.52,
1895
+ "learning_rate": 8.034833904671698e-06,
1896
+ "loss": 0.4495,
1897
+ "step": 304
1898
+ },
1899
+ {
1900
+ "epoch": 8.54,
1901
+ "learning_rate": 7.977824276679623e-06,
1902
+ "loss": 0.4501,
1903
+ "step": 305
1904
+ },
1905
+ {
1906
+ "epoch": 8.56,
1907
+ "learning_rate": 7.92088309182241e-06,
1908
+ "loss": 0.4482,
1909
+ "step": 306
1910
+ },
1911
+ {
1912
+ "epoch": 8.57,
1913
+ "learning_rate": 7.864012277347602e-06,
1914
+ "loss": 0.4539,
1915
+ "step": 307
1916
+ },
1917
+ {
1918
+ "epoch": 8.59,
1919
+ "learning_rate": 7.807213758120965e-06,
1920
+ "loss": 0.449,
1921
+ "step": 308
1922
+ },
1923
+ {
1924
+ "epoch": 8.61,
1925
+ "learning_rate": 7.750489456561351e-06,
1926
+ "loss": 0.4486,
1927
+ "step": 309
1928
+ },
1929
+ {
1930
+ "epoch": 8.63,
1931
+ "learning_rate": 7.6938412925756e-06,
1932
+ "loss": 0.4478,
1933
+ "step": 310
1934
+ },
1935
+ {
1936
+ "epoch": 8.63,
1937
+ "eval_loss": 0.4479629397392273,
1938
+ "eval_runtime": 23.8304,
1939
+ "eval_samples_per_second": 8.393,
1940
+ "eval_steps_per_second": 0.546,
1941
+ "step": 310
1942
+ },
1943
+ {
1944
+ "epoch": 9.01,
1945
+ "learning_rate": 7.637271183493587e-06,
1946
+ "loss": 0.4482,
1947
+ "step": 311
1948
+ },
1949
+ {
1950
+ "epoch": 9.03,
1951
+ "learning_rate": 7.580781044003324e-06,
1952
+ "loss": 0.4462,
1953
+ "step": 312
1954
+ },
1955
+ {
1956
+ "epoch": 9.05,
1957
+ "learning_rate": 7.524372786086143e-06,
1958
+ "loss": 0.4488,
1959
+ "step": 313
1960
+ },
1961
+ {
1962
+ "epoch": 9.07,
1963
+ "learning_rate": 7.468048318951983e-06,
1964
+ "loss": 0.4521,
1965
+ "step": 314
1966
+ },
1967
+ {
1968
+ "epoch": 9.09,
1969
+ "learning_rate": 7.411809548974792e-06,
1970
+ "loss": 0.4528,
1971
+ "step": 315
1972
+ },
1973
+ {
1974
+ "epoch": 9.11,
1975
+ "learning_rate": 7.355658379627981e-06,
1976
+ "loss": 0.4497,
1977
+ "step": 316
1978
+ },
1979
+ {
1980
+ "epoch": 9.12,
1981
+ "learning_rate": 7.299596711419994e-06,
1982
+ "loss": 0.447,
1983
+ "step": 317
1984
+ },
1985
+ {
1986
+ "epoch": 9.14,
1987
+ "learning_rate": 7.243626441830009e-06,
1988
+ "loss": 0.4488,
1989
+ "step": 318
1990
+ },
1991
+ {
1992
+ "epoch": 9.16,
1993
+ "learning_rate": 7.187749465243694e-06,
1994
+ "loss": 0.4473,
1995
+ "step": 319
1996
+ },
1997
+ {
1998
+ "epoch": 9.18,
1999
+ "learning_rate": 7.131967672889101e-06,
2000
+ "loss": 0.4494,
2001
+ "step": 320
2002
+ },
2003
+ {
2004
+ "epoch": 9.2,
2005
+ "learning_rate": 7.076282952772634e-06,
2006
+ "loss": 0.4461,
2007
+ "step": 321
2008
+ },
2009
+ {
2010
+ "epoch": 9.22,
2011
+ "learning_rate": 7.02069718961518e-06,
2012
+ "loss": 0.4505,
2013
+ "step": 322
2014
+ },
2015
+ {
2016
+ "epoch": 9.23,
2017
+ "learning_rate": 6.9652122647882966e-06,
2018
+ "loss": 0.4476,
2019
+ "step": 323
2020
+ },
2021
+ {
2022
+ "epoch": 9.25,
2023
+ "learning_rate": 6.909830056250527e-06,
2024
+ "loss": 0.445,
2025
+ "step": 324
2026
+ },
2027
+ {
2028
+ "epoch": 9.27,
2029
+ "learning_rate": 6.854552438483866e-06,
2030
+ "loss": 0.4494,
2031
+ "step": 325
2032
+ },
2033
+ {
2034
+ "epoch": 9.29,
2035
+ "learning_rate": 6.799381282430284e-06,
2036
+ "loss": 0.4464,
2037
+ "step": 326
2038
+ },
2039
+ {
2040
+ "epoch": 9.31,
2041
+ "learning_rate": 6.744318455428436e-06,
2042
+ "loss": 0.4503,
2043
+ "step": 327
2044
+ },
2045
+ {
2046
+ "epoch": 9.33,
2047
+ "learning_rate": 6.689365821150421e-06,
2048
+ "loss": 0.4503,
2049
+ "step": 328
2050
+ },
2051
+ {
2052
+ "epoch": 9.35,
2053
+ "learning_rate": 6.634525239538736e-06,
2054
+ "loss": 0.4485,
2055
+ "step": 329
2056
+ },
2057
+ {
2058
+ "epoch": 9.36,
2059
+ "learning_rate": 6.579798566743314e-06,
2060
+ "loss": 0.4492,
2061
+ "step": 330
2062
+ },
2063
+ {
2064
+ "epoch": 9.38,
2065
+ "learning_rate": 6.525187655058687e-06,
2066
+ "loss": 0.4462,
2067
+ "step": 331
2068
+ },
2069
+ {
2070
+ "epoch": 9.4,
2071
+ "learning_rate": 6.4706943528613135e-06,
2072
+ "loss": 0.4499,
2073
+ "step": 332
2074
+ },
2075
+ {
2076
+ "epoch": 9.42,
2077
+ "learning_rate": 6.4163205045469975e-06,
2078
+ "loss": 0.4478,
2079
+ "step": 333
2080
+ },
2081
+ {
2082
+ "epoch": 9.44,
2083
+ "learning_rate": 6.362067950468489e-06,
2084
+ "loss": 0.4491,
2085
+ "step": 334
2086
+ },
2087
+ {
2088
+ "epoch": 9.46,
2089
+ "learning_rate": 6.3079385268731575e-06,
2090
+ "loss": 0.4481,
2091
+ "step": 335
2092
+ },
2093
+ {
2094
+ "epoch": 9.47,
2095
+ "learning_rate": 6.25393406584088e-06,
2096
+ "loss": 0.4509,
2097
+ "step": 336
2098
+ },
2099
+ {
2100
+ "epoch": 9.49,
2101
+ "learning_rate": 6.200056395222012e-06,
2102
+ "loss": 0.4489,
2103
+ "step": 337
2104
+ },
2105
+ {
2106
+ "epoch": 9.51,
2107
+ "learning_rate": 6.146307338575519e-06,
2108
+ "loss": 0.4469,
2109
+ "step": 338
2110
+ },
2111
+ {
2112
+ "epoch": 9.53,
2113
+ "learning_rate": 6.092688715107265e-06,
2114
+ "loss": 0.4472,
2115
+ "step": 339
2116
+ },
2117
+ {
2118
+ "epoch": 9.55,
2119
+ "learning_rate": 6.039202339608432e-06,
2120
+ "loss": 0.4499,
2121
+ "step": 340
2122
+ },
2123
+ {
2124
+ "epoch": 9.57,
2125
+ "learning_rate": 5.9858500223941066e-06,
2126
+ "loss": 0.4483,
2127
+ "step": 341
2128
+ },
2129
+ {
2130
+ "epoch": 9.58,
2131
+ "learning_rate": 5.932633569242e-06,
2132
+ "loss": 0.4494,
2133
+ "step": 342
2134
+ },
2135
+ {
2136
+ "epoch": 9.6,
2137
+ "learning_rate": 5.879554781331317e-06,
2138
+ "loss": 0.4528,
2139
+ "step": 343
2140
+ },
2141
+ {
2142
+ "epoch": 9.62,
2143
+ "learning_rate": 5.8266154551818225e-06,
2144
+ "loss": 0.4467,
2145
+ "step": 344
2146
+ },
2147
+ {
2148
+ "epoch": 9.62,
2149
+ "eval_loss": 0.4474964439868927,
2150
+ "eval_runtime": 23.8091,
2151
+ "eval_samples_per_second": 8.4,
2152
+ "eval_steps_per_second": 0.546,
2153
+ "step": 344
2154
+ },
2155
+ {
2156
+ "epoch": 9.62,
2157
+ "step": 344,
2158
+ "total_flos": 6.414434999835034e+16,
2159
+ "train_loss": 0.4900048969443454,
2160
+ "train_runtime": 108147.3514,
2161
+ "train_samples_per_second": 2.577,
2162
+ "train_steps_per_second": 0.005
2163
+ }
2164
+ ],
2165
+ "logging_steps": 1,
2166
+ "max_steps": 540,
2167
+ "num_train_epochs": 10,
2168
+ "save_steps": 500,
2169
+ "total_flos": 6.414434999835034e+16,
2170
+ "trial_name": null,
2171
+ "trial_params": null
2172
+ }
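The `log_history` array above mixes per-step training records with the per-epoch evaluation records that populate the results table in the README. A short sketch for extracting the evaluation curve from the saved file is given below; the local filename `trainer_state.json` is an assumption.

```python
# Sketch: extract the evaluation-loss curve from trainer_state.json's log_history.
# Evaluation records are the entries that carry an "eval_loss" key.
import json

with open("trainer_state.json") as f:   # filename is an assumption
    state = json.load(f)

eval_points = [
    (entry["step"], entry["epoch"], entry["eval_loss"])
    for entry in state["log_history"]
    if "eval_loss" in entry
]

for step, epoch, loss in eval_points:
    print(f"step {step:4d}  epoch {epoch:5.2f}  eval_loss {loss:.4f}")
# The last line should read: step  344  epoch  9.62  eval_loss 0.4475
```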