liswei committed on
Commit
a8c0588
1 Parent(s): 71b1ed7

Model save

README.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ base_model: liswei/OpenELM-1_1B-zh-base
+ tags:
+ - llama-factory
+ - generated_from_trainer
+ model-index:
+ - name: OpenELM-1_1B-zh-cp
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # OpenELM-1_1B-zh-cp
+
+ This model is a fine-tuned version of [liswei/OpenELM-1_1B-zh-base](https://huggingface.co/liswei/OpenELM-1_1B-zh-base) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - total_train_batch_size: 16
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1.0
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.41.1
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
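
As a loading sketch (not part of the generated card): the custom `configuration_openelm.py`/`modeling_openelm.py` added below require `trust_remote_code=True`, and we assume the repo ships a compatible tokenizer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "liswei/OpenELM-1_1B-zh-cp"  # repo id taken from the model card above
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
# trust_remote_code=True makes transformers pick up the OpenELM classes below.
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, trust_remote_code=True
)
```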
configuration_openelm.py ADDED
@@ -0,0 +1,318 @@
+ #
+ # For licensing see accompanying LICENSE file.
+ # Copyright (C) 2024 Apple Inc. All Rights Reserved.
+ #
+
+ """Implements HF OpenELMConfig based on PretrainedConfig"""
+ from numbers import Number
+ from typing import List, Optional, Union
+
+ import numpy as np
+ from transformers import PretrainedConfig
+
+
+ def make_divisible(
+     v: Union[float, int],
+     divisor: Optional[int] = 8,
+     min_value: Optional[Union[float, int]] = None,
+ ) -> Union[float, int]:
+     """
+     This function is taken from the original tf repo.
+     It ensures that all layers have a channel number that is divisible by the divisor.
+     It can be seen at:
+     https://github.com/tensorflow/models/blob/2cfc99eff5e5eb729c6793d2f3d03aa1c9be2b15/research/slim/nets/mobilenet/mobilenet.py#L62
+
+     Args:
+         v: input value
+         divisor: defaults to 8
+         min_value: minimum divisor value
+     Returns:
+         new_v: new divisible value
+     """
+     if min_value is None:
+         min_value = divisor
+     new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
+     # Make sure that round down does not go down by more than 10%.
+     if new_v < 0.9 * v:
+         new_v += divisor
+     return new_v
+
+
+ def compute_heads(model_dim: int, head_dim: int) -> int:
+     """Compute the number of heads.
+
+     Args:
+         model_dim: Model dimension.
+         head_dim: Head dimension.
+
+     Returns:
+         An integer denoting the number of heads in multi-head attention.
+
+     Raises:
+         ValueError: if model dimension is not divisible by head dimension.
+     """
+     if model_dim % head_dim == 0:
+         return model_dim // head_dim
+     else:
+         raise ValueError(
+             f"Model dimension should be divisible by head dimension. Got: {model_dim} and {head_dim}."
+         )
+
+
+ OpenELM_CONFIGS = {
+     "OpenELM-270M": dict(
+         num_transformer_layers=16,
+         model_dim=1280,
+         head_dim=64,
+         num_gqa_groups=4,
+         normalize_qk_projections=True,
+         share_input_output_layers=True,
+         # Vary the FFN and QKV multipliers to create variable FFN and attention layers respectively.
+         ffn_multipliers=(0.5, 4.0),
+         qkv_multipliers=(0.5, 1.0),
+     ),
+     "OpenELM-450M": dict(
+         num_transformer_layers=20,
+         model_dim=1536,
+         head_dim=64,
+         num_gqa_groups=4,
+         normalize_qk_projections=True,
+         share_input_output_layers=True,
+         # Vary the FFN and QKV multipliers to create variable FFN and attention layers respectively.
+         ffn_multipliers=(0.5, 4.0),
+         qkv_multipliers=(0.5, 1.0),
+     ),
+     "OpenELM-1_1B": dict(
+         num_transformer_layers=28,
+         model_dim=2048,
+         head_dim=64,
+         num_gqa_groups=4,
+         normalize_qk_projections=True,
+         share_input_output_layers=True,
+         # Vary the FFN and QKV multipliers to create variable FFN and attention layers respectively.
+         ffn_multipliers=(0.5, 4.0),
+         qkv_multipliers=(0.5, 1.0),
+     ),
+     "OpenELM-3B": dict(
+         num_transformer_layers=36,
+         model_dim=3072,
+         head_dim=128,
+         num_gqa_groups=4,
+         normalize_qk_projections=True,
+         share_input_output_layers=True,
+         # Vary the FFN and QKV multipliers to create variable FFN and attention layers respectively.
+         ffn_multipliers=(0.5, 4.0),
+         qkv_multipliers=(0.5, 1.0),
+     ),
+ }
+
+
+ class OpenELMConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`OpenELMModel`]. It is used to instantiate an OpenELM model according to the specified arguments, defining the model architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32000):
+             Vocabulary size of the OpenELM model.
+         max_context_length (`int`, *optional*, defaults to 2048):
+             Maximum number of input tokens.
+         num_transformer_layers (`int`, *optional*, defaults to 12):
+             Number of hidden layers in the Transformer decoder.
+         model_dim (`int`, *optional*, defaults to 2048):
+             Dimension of the hidden representations.
+         head_dim (`int`, *optional*, defaults to 128):
+             The attention head dimension.
+         qkv_multipliers (`Union[Number, List[Number]]`, *optional*, defaults to 1.0):
+             If qkv_multipliers is a Number, then all attention layers have the same latent dimensions,
+             resulting in uniform allocation of parameters.
+             If qkv_multipliers is a List of Numbers, then each attention layer has different latent dimensions
+             (assuming qkv_multipliers[0] != qkv_multipliers[1]). This results in variable allocation of parameters in the attention layers.
+             This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
+         num_query_heads (`Union[int, None]`, *optional*, defaults to None):
+             The number of query heads, computed from `compute_heads(model_dim=model_dim, head_dim=head_dim)`.
+         num_gqa_groups (`int`, *optional*, defaults to 1):
+             This variable allows switching between multi-head attention, group query attention, and multi-query attention.
+             When num_gqa_groups == 1, it is multi-head attention.
+             When 1 < num_gqa_groups < num_heads and num_heads is divisible by num_gqa_groups, it is group query attention.
+             When num_gqa_groups == num_heads, it is multi-query attention.
+         ffn_multipliers (`Union[Number, List[Number]]`, *optional*, defaults to 4.0):
+             Feed-forward network (FFN) multipliers.
+             If ffn_multipliers is a Number, then all FFN layers have the same latent dimensions,
+             resulting in uniform allocation of parameters.
+             If ffn_multipliers is a List of Numbers, then each FFN layer has different latent dimensions
+             (assuming ffn_multipliers[0] != ffn_multipliers[1]). This results in variable allocation of parameters in the FFN layers.
+             This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
+         ffn_with_glu (`bool`, *optional*, defaults to True):
+             Whether to use an FFN with a Gated Linear Unit (GLU).
+         ffn_dim_divisor (`int`, *optional*, defaults to 256):
+             The FFN layer dimension divisor.
+         activation_fn_name (`str` or `function`, *optional*, defaults to `"swish"`):
+             The non-linear activation function (function or string) in the decoder.
+         normalization_layer_name (`str` or `function`, *optional*, defaults to `"rms_norm"`):
+             Type of normalization layer.
+         normalize_qk_projections (`bool`, *optional*, defaults to False):
+             Whether to normalize queries and keys after the projections.
+         share_input_output_layers (`bool`, *optional*, defaults to False):
+             Whether to share the embedding between the input and output linear layers.
+         rope_freq_constant (`int`, *optional*, defaults to 10000):
+             The base period of the RoPE embeddings.
+         rope_max_length (`int`, *optional*, defaults to 4096):
+             Note that rope_max_length is set to twice max_context_length.
+             This allows flexibility in token lengths during training or fine-tuning.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             Beginning of stream token id.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             End of stream token id.
+     """
+
+     model_type = "openelm"
+
+     def __init__(
+         self,
+         vocab_size: int = 32000,
+         max_context_length: int = 2048,
+         num_transformer_layers: int = 12,
+         model_dim: int = 2048,
+         head_dim: int = 128,
+         qkv_multipliers: Union[Number, List[Number]] = 1.0,
+         num_query_heads: Union[int, None] = None,
+         num_gqa_groups: int = 1,
+         ffn_multipliers: Union[Number, List[Number]] = 4.0,
+         ffn_with_glu: bool = True,
+         ffn_dim_divisor: int = 256,
+         activation_fn_name: str = "swish",
+         normalization_layer_name: str = "rms_norm",
+         normalize_qk_projections: bool = False,
+         share_input_output_layers: bool = False,
+         rope_freq_constant: int = 10000,
+         rope_max_length: int = 4096,
+         initializer_range: float = 0.02,
+         use_cache: bool = True,
+         bos_token_id: int = 1,
+         eos_token_id: int = 2,
+         **kwargs,
+     ) -> None:
+         self.vocab_size = vocab_size
+         self.max_context_length = max_context_length
+         self.num_transformer_layers = num_transformer_layers
+         self.model_dim = model_dim
+         self.head_dim = head_dim
+         self.qkv_multipliers = qkv_multipliers
+         self.num_query_heads = num_query_heads
+         self.num_gqa_groups = num_gqa_groups
+         self.ffn_multipliers = ffn_multipliers
+         self.ffn_with_glu = ffn_with_glu
+         self.ffn_dim_divisor = ffn_dim_divisor
+         self.activation_fn_name = activation_fn_name
+         self.normalization_layer_name = normalization_layer_name
+         self.normalize_qk_projections = normalize_qk_projections
+         self.share_input_output_layers = share_input_output_layers
+         self.rope_freq_constant = rope_freq_constant
+         self.rope_max_length = rope_max_length
+         self.num_query_heads = (
+             compute_heads(model_dim=model_dim, head_dim=head_dim)
+             if num_query_heads is None
+             else num_query_heads
+         )
+         self.initializer_range = initializer_range
+
+         self.__post_init__()
+         super().__init__(
+             use_cache=use_cache,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             **kwargs,
+         )
+
+     def __post_init__(self) -> None:
+         if self.num_gqa_groups is not None:
+             head_multiple_of = self.num_gqa_groups
+         else:
+             head_multiple_of = 2
+
+         if isinstance(self.qkv_multipliers, Number):
+             # All attention layers have the same latent dimensions, resulting in uniform allocation of parameters.
+             qkv_dim = make_divisible(
+                 self.model_dim * self.qkv_multipliers,
+                 divisor=self.head_dim * head_multiple_of,
+             )
+             query_dims = [int(qkv_dim)] * self.num_transformer_layers
+
+         elif (
+             isinstance(self.qkv_multipliers, (tuple, list))
+             and len(self.qkv_multipliers) == 2
+         ):
+             # Each attention layer has different latent dimensions, assuming qkv_multipliers[0] != qkv_multipliers[1].
+             # This results in variable allocation of parameters in the attention layers.
+             # This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
+             qkv_multipliers = [
+                 round(v, 2)
+                 for v in np.linspace(
+                     self.qkv_multipliers[0],
+                     self.qkv_multipliers[1],
+                     num=self.num_transformer_layers,
+                     dtype=float,
+                 )
+             ]
+             # Make sure that scaled model dimension is divisible by scaled head dimension.
+             query_dims = [
+                 int(
+                     make_divisible(
+                         self.model_dim * m, divisor=self.head_dim * head_multiple_of
+                     )
+                 )
+                 for m in qkv_multipliers
+             ]
+         else:
+             raise NotImplementedError(
+                 f"QKV multipliers should be a single number or a list containing exactly two numbers. Got: {self.qkv_multipliers}."
+             )
+
+         # compute the number of query, key, and value heads
+         # For multi-head and multi-query attention, the number of heads for query, key, and value are the same.
+         # For group query attention, the number of key and value heads are the same.
+         self.num_query_heads = [
+             int(compute_heads(q_dim, self.head_dim)) for q_dim in query_dims
+         ]
+         self.num_kv_heads = [
+             q_heads // self.num_gqa_groups for q_heads in self.num_query_heads
+         ]
+
+         # Feed-forward network (FFN) multipliers
+         if isinstance(self.ffn_multipliers, Number):
+             # All FFN layers have the same latent dimensions, resulting in uniform allocation of parameters.
+             self.ffn_multipliers = [self.ffn_multipliers] * self.num_transformer_layers
+         elif isinstance(self.ffn_multipliers, (tuple, list)):
+             # Each FFN layer has different latent dimensions, assuming ffn_multipliers[0] != ffn_multipliers[1].
+             # This results in variable allocation of parameters in the FFN layers.
+             # This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
+             if len(self.ffn_multipliers) == 2:
+                 self.ffn_multipliers = [
+                     round(v, 2)
+                     for v in np.linspace(
+                         self.ffn_multipliers[0],
+                         self.ffn_multipliers[1],
+                         num=self.num_transformer_layers,
+                         dtype=float,
+                     )
+                 ]
+             else:
+                 assert (
+                     len(self.ffn_multipliers) == self.num_transformer_layers
+                 ), f"{len(self.ffn_multipliers)=}!={self.num_transformer_layers=}"
+         else:
+             raise NotImplementedError(
+                 f"FFN multipliers should be a single number or a list containing exactly two numbers. Got: {self.ffn_multipliers}."
+             )
+
+         # check num_query_heads divisible by num_kv_heads for every layer
+         for layer_idx in range(len(query_dims)):
+             assert self.num_query_heads[layer_idx] % self.num_kv_heads[layer_idx] == 0
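
A minimal sketch of how the block-wise scaling in `__post_init__` expands the `(0.5, 1.0)` QKV multipliers into per-layer head counts (assuming `configuration_openelm.py` is importable from the working directory):

```python
from configuration_openelm import OpenELM_CONFIGS, OpenELMConfig

# Build the 1.1B variant; __post_init__ interpolates the QKV multipliers
# across the 28 layers with np.linspace, snaps each latent dimension to a
# multiple of head_dim * num_gqa_groups via make_divisible, and derives
# per-layer query/key-value head counts from the result.
config = OpenELMConfig(**OpenELM_CONFIGS["OpenELM-1_1B"])
print(config.num_query_heads)  # 28 per-layer values, growing with depth (16 -> 32)
print(config.num_kv_heads)     # query heads divided by num_gqa_groups (4)
```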
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.41.1"
+ }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:83215022dc4410fc88da4af61b0e753ab0dbab067e0b84fca186aa8a68a4bb1c
+ oid sha256:c5b0b679ba69dba67254c1c1e6797608e427a209cb8b3e7116925e851752f516
  size 4563369024
modeling_openelm.py ADDED
@@ -0,0 +1,1008 @@
+ #
+ # For licensing see accompanying LICENSE file.
+ # Copyright (C) 2024 Apple Inc. All Rights Reserved.
+ #
+
+ from typing import List, Optional, Tuple, Union
+
+ import torch
+ import torch.utils.checkpoint
+ from torch import Tensor, nn
+ from torch.nn import CrossEntropyLoss
+ from torch.nn import functional as F
+ from transformers import PreTrainedModel
+ from transformers.activations import ACT2FN
+ from transformers.cache_utils import Cache, DynamicCache, StaticCache
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+ )
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ # this import has to be relative, otherwise, when setting trust_remote_code=True
+ # huggingface transformers won't be able to load the module correctly
+ from .configuration_openelm import OpenELMConfig, make_divisible
+
+
+ class OpenELMRMSNorm(nn.Module):
+     def __init__(self, num_features: int, eps: float = 1e-6):
+         """
+         Initialize the OpenELMRMSNorm normalization layer.
+
+         Args:
+             num_features (int): The dimension of the input tensor.
+             eps (float, optional): A small value added to the denominator for numerical stability. Default is 1e-6.
+
+         Attributes:
+             eps (float): A small value added to the denominator for numerical stability.
+             weight (nn.Parameter): Learnable scaling parameter.
+
+         """
+         super().__init__()
+         self.eps = eps
+         self.weight = nn.Parameter(torch.ones(num_features))
+         self.num_features = num_features
+
+     def _norm(self, x: Tensor) -> Tensor:
+         """
+         Apply the OpenELMRMSNorm normalization to the input tensor.
+
+         Args:
+             x (torch.Tensor): The input tensor.
+
+         Returns:
+             torch.Tensor: The normalized tensor.
+
+         """
+         return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
+
+     def forward(self, x: Tensor) -> Tensor:
+         """
+         Forward pass through the OpenELMRMSNorm layer.
+
+         Args:
+             x (torch.Tensor): The input tensor.
+
+         Returns:
+             torch.Tensor: The output tensor after applying OpenELMRMSNorm.
+
+         """
+         output = self._norm(x.float()).type_as(x)
+         return output * self.weight
+
+     def extra_repr(self) -> str:
+         return (
+             super().extra_repr() + f"num_features={self.num_features}, eps={self.eps}"
+         )
+
+
+ class OpenELMPreTrainedModel(PreTrainedModel):
+     config_class = OpenELMConfig
+     base_model_prefix = "transformer"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["OpenELMDecoderLayer"]
+     _skip_keys_device_placement = "past_key_values"
+
+     def __init__(self, *inputs, **kwargs) -> None:
+         super().__init__(*inputs, **kwargs)
+
+     def _init_weights(self, module: nn.Module) -> None:
+         """Initialize the weights."""
+         if isinstance(module, nn.Linear):
+             # Slightly different from the TF version which uses truncated_normal for initialization
+             # cf https://github.com/pytorch/pytorch/pull/5617
+             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+         elif isinstance(module, OpenELMRMSNorm):
+             module.weight.data.fill_(1.0)
+
+
+ def _rotate_half(x: Tensor) -> Tensor:
+     x1, x2 = x.chunk(2, dim=-1)
+     return torch.cat((-x2, x1), dim=-1)
+
+
+ def _apply_rotary_pos_emb(x: Tensor, pos_sin: Tensor, pos_cos: Tensor) -> Tensor:
+     return (x * pos_cos) + (_rotate_half(x) * pos_sin)
+
+
+ class OpenELMRotaryEmbedding(torch.nn.Module):
+     """
+     The rotary position embeddings (aka RoPE) from `RoFormer <https://arxiv.org/abs/2104.09864>`_.
+
+     RoPE encodes the position information of tokens using a rotation matrix, and is able to capture
+     explicit relative positional dependencies.
+
+     Args:
+         model_dim: The dimensionality of the model's hidden state.
+         max_seq_length: Maximum sequence length.
+         freq_constant: A constant used for computing frequencies.
+     """
+
+     def __init__(
+         self, model_dim: int, max_seq_length: int, freq_constant: int = 10000
+     ) -> None:
+         inv_freq = 1.0 / (
+             freq_constant
+             ** (torch.arange(0, model_dim, 2, dtype=torch.float32) / model_dim)
+         )
+         super().__init__()
+
+         self.model_dim = model_dim
+         self.freq_constant = freq_constant
+         self.max_seq_length = max_seq_length
+
+         self.register_buffer("inv_freq", inv_freq, persistent=False)
+         self._cached_cos = None
+         self._cached_sin = None
+         self._cached_seq_length = max_seq_length
+         self._compute_sin_cos_embeddings(max_seq_length)
+
+     def extra_repr(self) -> str:
+         return f"\tmodel_dim={self.model_dim}, max_seq_length={self.max_seq_length}, freq_constant={self.freq_constant}"
+
+     def _compute_sin_cos_embeddings(
+         self,
+         key_len: int,
+         key_device: torch.device = torch.device("cpu"),
+         key_dtype: torch.dtype = torch.float32,
+     ) -> None:
+         """
+         Compute sine and cosine embeddings.
+
+         Args:
+             key_len: Number of tokens in the key embeddings in the transformer model.
+             key_device: Device where the key embeddings are stored.
+             key_dtype: Data type of the key embeddings.
+
+         Returns:
+             None
+
+         .. note::
+             We recalculate the sine and cosine embeddings if any of the following conditions are met:
+             1. The number of tokens in the key embeddings is greater than the cached sequence length.
+             2. Sine and cosine caches are empty.
+             3. The device and data type of the sine and cosine embeddings do not match those of the key embeddings.
+         """
+         if (
+             key_len > self._cached_seq_length
+             or self._cached_cos is None
+             or (self._cached_cos is not None and self._cached_cos.device != key_device)
+             or (self._cached_cos is not None and self._cached_cos.dtype != key_dtype)
+             or self._cached_sin is None
+             or (self._cached_sin is not None and self._cached_sin.device != key_device)
+             or (self._cached_sin is not None and self._cached_sin.dtype != key_dtype)
+         ):
+             self._cached_seq_length = max(key_len, self._cached_seq_length)
+
+             # The shape of 'pos_index' is [number of key tokens]
+             pos_index = torch.arange(
+                 self._cached_seq_length,
+                 dtype=torch.float32,
+                 device=self.inv_freq.device,
+             )
+             # The shape of 'pos_index_theta' is [number of key tokens, model dimension]
+             pos_index_theta = torch.einsum("i,j->ij", pos_index, self.inv_freq)
+             # The shape of 'emb' is [number of key tokens, model dimension]
+             emb = torch.cat((pos_index_theta, pos_index_theta), dim=-1)
+
+             # the shape of cos and sin embeddings is [number of key tokens, model_dim]
+             cos_emb = emb.cos().to(dtype=key_dtype, device=key_device)
+             sin_emb = emb.sin().to(dtype=key_dtype, device=key_device)
+
+             # the shape of cached cos and sin embeddings is [1, 1, number of key tokens, model_dim]
+             self._cached_cos = cos_emb[None, None, :, :]
+             self._cached_sin = sin_emb[None, None, :, :]
+
+     def forward(
+         self,
+         query: torch.Tensor,
+         key: torch.Tensor,
+     ) -> Tuple[torch.Tensor, torch.Tensor]:
+         """
+         The forward function of RoPE embeddings.
+
+         Args:
+             query: Query embeddings in the transformer model. The shape of query embeddings is
+                 [Batch, number of query heads, number of query tokens, model dimension].
+             key: Key embeddings in the transformer model. The shape of key embeddings is
+                 [Batch, number of key heads, number of key tokens, model dimension].
+
+         Returns:
+             A tuple containing the query and key embeddings with positional information. The shape of the returned query
+             and key embeddings is the same as the input query and key embeddings respectively.
+
+         .. note::
+             The RoPE embedding computation is done in full precision. After the computation, the input query and key tensors
+             are cast to the original input datatype.
+         """
+         dim = key.shape[-1]
+         key_len = key.shape[2]
+         query_len = query.shape[2]
+
+         assert dim == self.model_dim
+         assert key.device == query.device
+         assert key.dtype == query.dtype
+
+         # In the context of self-attention, the lengths of keys and queries are equal.
+         # However, in generation tasks, such as predicting the next token in a sequence, the lengths of keys and queries
+         # can differ. For instance, when employing key-value (KV) caching for sequence prediction, the keys
+         # represent embeddings of previous tokens and the current token, while the query corresponds
+         # to the embedding of the current token only.
+         assert (
+             key_len >= query_len
+         ), "Number of keys has to be greater than or equal to number of queries."
+
+         query_float = query.float()
+         key_float = key.float()
+
+         self._compute_sin_cos_embeddings(
+             key_len, key_device=key_float.device, key_dtype=key_float.dtype
+         )
+         query_float = _apply_rotary_pos_emb(
+             x=query_float,
+             pos_sin=self._cached_sin[..., key_len - query_len : key_len, :],
+             pos_cos=self._cached_cos[..., key_len - query_len : key_len, :],
+         )
+         key_float = _apply_rotary_pos_emb(
+             x=key_float,
+             pos_sin=self._cached_sin[..., :key_len, :],
+             pos_cos=self._cached_cos[..., :key_len, :],
+         )
+
+         return query_float.type_as(query), key_float.type_as(key)
+
+
+ class OpenELMMultiHeadCausalAttention(nn.Module):
+     def __init__(self, config: OpenELMConfig, layer_idx: int) -> None:
+         super().__init__()
+         self.layer_idx = layer_idx
+         head_dim = config.head_dim
+         q_heads = config.num_query_heads[layer_idx]
+         k_heads = config.num_kv_heads[layer_idx]
+         v_heads = config.num_kv_heads[layer_idx]
+
+         self.qkv_proj = nn.Linear(
+             in_features=config.model_dim,
+             out_features=(q_heads + k_heads + v_heads) * head_dim,
+             bias=False,
+         )
+
+         self.pos_embedding = OpenELMRotaryEmbedding(
+             model_dim=config.head_dim,
+             max_seq_length=config.rope_max_length,
+             freq_constant=config.rope_freq_constant,
+         )
+
+         if config.normalize_qk_projections:
+             self.q_norm = OpenELMRMSNorm(
+                 num_features=config.head_dim,
+             )
+             self.k_norm = OpenELMRMSNorm(
+                 num_features=config.head_dim,
+             )
+         else:
+             self.q_norm = None
+             self.k_norm = None
+
+         self.out_proj = nn.Linear(
+             in_features=q_heads * head_dim,
+             out_features=config.model_dim,
+             bias=False,
+         )
+
+         self.head_dim = config.head_dim
+         self.num_q_heads = q_heads
+         self.num_k_heads = k_heads
+         self.num_v_heads = v_heads
+         self.transformer_dim = config.model_dim
+         self.num_groups = self.num_q_heads // self.num_k_heads
+
+     def extra_repr(self) -> str:
+         return (
+             super().extra_repr()
+             + f"query_heads={self.num_q_heads}, key_heads={self.num_k_heads}, value_heads={self.num_v_heads}"
+         )
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: bool = False,
+         use_cache: bool = False,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+         """
+         Forward pass of multi-head self-attention.
+
+         Args:
+             hidden_states: Input tensor of the shape [batch size, sequence length, model dimension].
+             past_key_value: Tensor storing the cached keys and values.
+             output_attentions: output attention weights.
+             use_cache: Specifies whether to use kv-cache for generation.
+             cache_position: used for updating the kv-cache.
+
+         Returns:
+             The output of the same shape as the input, optionally with a tensor containing cached keys and values.
+         """
+
+         # scaled_dot_product_attention does not return attention weights, set output_attentions to False
+         output_attentions = False
+         batch_size, seq_length, d_model = hidden_states.size()
+
+         # [B, S, d] --> [B, S, (q_h + k_h + v_h) * h]
+         qkv = self.qkv_proj(hidden_states)
+         # [B, S, (q_h + k_h + v_h) * h] --> [B, S, (q_h + k_h + v_h), h]
+         qkv = qkv.reshape(
+             batch_size,
+             seq_length,
+             self.num_q_heads + self.num_k_heads + self.num_v_heads,
+             self.head_dim,
+         )
+         # [B, S, (q_h + k_h + v_h), h] --> [B, (q_h + k_h + v_h), S, h]
+         qkv = qkv.transpose(1, 2)
+         # [B, (q_h + k_h + v_h), S, h] --> [B, q_h, S, h], [B, k_h, S, h], [B, v_h, S, h]
+         queries, keys, values = qkv.split(
+             [self.num_q_heads, self.num_k_heads, self.num_v_heads], dim=1
+         )
+
+         if self.q_norm is not None:
+             queries = self.q_norm(queries)
+
+         if self.k_norm is not None:
+             keys = self.k_norm(keys)
+
+         past_key_value = getattr(self, "past_key_value", past_key_value)
+
+         if past_key_value is not None:
+             # sin and cos are specific to RoPE models; position_ids needed for the static cache
+             # cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             cache_kwargs = {"cache_position": cache_position}
+             keys, values = past_key_value.update(
+                 keys, values, self.layer_idx, cache_kwargs
+             )
+
+         # Add positional embedding
+         queries, keys = self.pos_embedding(queries, keys)
+
+         if self.num_groups != 1:
+             # GQA
+             # [B, k_h, S, h] --> [B, q_h, S, h]
+             keys = keys.repeat_interleave(self.num_groups, dim=1)
+             # [B, v_h, S, h] --> [B, q_h, S, h]
+             values = values.repeat_interleave(self.num_groups, dim=1)
+
+         causal_mask = attention_mask
+         if attention_mask is not None and cache_position is not None:
+             causal_mask = causal_mask[:, :, cache_position, : keys.shape[-2]]
+
+         attn_output = F.scaled_dot_product_attention(
+             queries,
+             keys,
+             values,
+             attn_mask=causal_mask,
+             dropout_p=0,
+         )
+
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.reshape(
+             batch_size, seq_length, self.num_q_heads * self.head_dim
+         )
+         attn_output = self.out_proj(attn_output)
+         if not output_attentions:
+             attn_weights = None
+         return attn_output, attn_weights, past_key_value
+
+
+ class OpenELMFeedForwardNetwork(nn.Module):
+     def __init__(self, config: OpenELMConfig, layer_idx: int) -> None:
+         super().__init__()
+         ffn_multiplier = config.ffn_multipliers[layer_idx]
+         intermediate_dim = int(
+             make_divisible(
+                 ffn_multiplier * config.model_dim,
+                 divisor=config.ffn_dim_divisor,
+             )
+         )
+         if config.ffn_with_glu:
+             # FFN with Gated linear unit, as described in https://arxiv.org/abs/2002.05202v1.
+             self.proj_1 = nn.Linear(
+                 in_features=config.model_dim,
+                 out_features=2 * intermediate_dim,
+                 bias=False,
+             )
+             self.proj_2 = nn.Linear(
+                 in_features=intermediate_dim,
+                 out_features=config.model_dim,
+                 bias=False,
+             )
+             self.ffn_with_glu = True
+         else:
+             # Standard FFN, as described in https://arxiv.org/abs/1706.03762
+             self.proj_1 = nn.Linear(
+                 in_features=config.model_dim,
+                 out_features=intermediate_dim,
+                 bias=False,
+             )
+             self.proj_2 = nn.Linear(
+                 in_features=intermediate_dim,
+                 out_features=config.model_dim,
+                 bias=False,
+             )
+             self.ffn_with_glu = False
+
+         self.act = ACT2FN[config.activation_fn_name]
+
+     def extra_repr(self) -> str:
+         return super().extra_repr() + f"(ffn_with_glu) : {self.ffn_with_glu}"
+
+     def forward(self, x: Tensor) -> Tensor:
+         """Forward function of FFN layer.
+
+         Args:
+             x: Input tensor of the shape [batch size, sequence length, model dimension].
+
+         Returns:
+             A tensor of the same shape as the input.
+         """
+         if self.ffn_with_glu:
+             y_12 = self.proj_1(x)
+             y_1, y_2 = y_12.chunk(2, dim=-1)
+             y = self.act(y_1) * y_2
+             return self.proj_2(y)
+         else:
+             return self.proj_2(self.act(self.proj_1(x)))
+
+
+ class OpenELMDecoderLayer(nn.Module):
+     def __init__(self, config: OpenELMConfig, layer_idx: int) -> None:
+         super().__init__()
+         self.attn = OpenELMMultiHeadCausalAttention(config=config, layer_idx=layer_idx)
+         self.ffn = OpenELMFeedForwardNetwork(config=config, layer_idx=layer_idx)
+         self.ffn_norm = OpenELMRMSNorm(
+             num_features=config.model_dim,
+         )
+         self.attn_norm = OpenELMRMSNorm(
+             num_features=config.model_dim,
+         )
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Tuple[torch.Tensor]] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs,
+     ) -> Tuple[
+         torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]
+     ]:
+         """
+         Args:
+             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
+             attention_mask (`torch.FloatTensor`, *optional*):
+                 attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
+                 query_sequence_length, key_sequence_length)` if default attention is used.
+             output_attentions (`bool`, *optional*):
+                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+                 returned tensors for more detail.
+             use_cache (`bool`, *optional*):
+                 If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+                 (see `past_key_values`).
+             past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
+         """
+         residual = hidden_states
+         hidden_states = self.attn_norm(hidden_states)
+
+         # Self Attention
+         hidden_states, self_attn_weights, present_key_value = self.attn(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             past_key_value=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+             cache_position=cache_position,
+             **kwargs,
+         )
+         hidden_states = residual + hidden_states
+
+         # Fully Connected
+         residual = hidden_states
+         hidden_states = self.ffn_norm(hidden_states)
+         hidden_states = self.ffn(hidden_states)
+         hidden_states = residual + hidden_states
+
+         outputs = (hidden_states,)
+
+         if output_attentions:
+             outputs += (self_attn_weights,)
+
+         if use_cache:
+             outputs += (present_key_value,)
+
+         return outputs
+
+
+ class OpenELMModel(OpenELMPreTrainedModel):
+     config_class = OpenELMConfig
+
+     def __init__(self, config: OpenELMConfig):
+         super().__init__(config)
+         self.config = config
+
+         self.token_embeddings = nn.Embedding(
+             embedding_dim=config.model_dim,
+             num_embeddings=config.vocab_size,
+         )
+
+         self.layers = nn.ModuleList(
+             OpenELMDecoderLayer(config=config, layer_idx=layer_idx)
+             for layer_idx in range(config.num_transformer_layers)
+         )
+         self.norm = OpenELMRMSNorm(num_features=config.model_dim)
+         if config.share_input_output_layers:
+             self.classifier = None
+         else:
+             self.classifier = nn.Linear(
+                 in_features=config.model_dim,
+                 out_features=config.vocab_size,
+                 bias=False,
+             )
+         self.num_transformer_layers = config.num_transformer_layers
+         self.gradient_checkpointing = False
+
+         # Register a causal mask to separate causal and padding mask creation. Merging happens in the attention class.
+         # NOTE: This is not friendly with TorchScript, ONNX, ExportedProgram serialization for very large `max_context_length`.
+         causal_mask = torch.full(
+             (config.max_context_length, config.max_context_length),
+             fill_value=True,
+             dtype=torch.bool,
+         )
+         self.register_buffer(
+             "causal_mask", torch.triu(causal_mask, diagonal=1), persistent=False
+         )
+
+         # Initialize weights and apply final processing
+         self.post_init()
+         self.reset_parameters(config=config)
+
+     def get_input_embeddings(self):
+         return self.token_embeddings
+
+     def set_input_embeddings(self, new_embeddings: torch.Tensor):
+         self.token_embeddings = new_embeddings
+
+     def reset_parameters(self, config: OpenELMConfig) -> None:
+         """Initialize the layers in the language model.
+
+         The initialization scheme follows `OPT <https://arxiv.org/pdf/2205.01068.pdf>`_.
+
+         Args:
+             config: Model configuration.
+
+         Returns:
+             None
+         """
+         for module in self.modules():
+             if isinstance(module, nn.Linear):
+                 std = module.in_features**-0.5
+                 torch.nn.init.normal_(module.weight, mean=0.0, std=std)
+                 if module.bias is not None:
+                     torch.nn.init.zeros_(module.bias)
+             elif isinstance(module, nn.Embedding):
+                 std = module.embedding_dim**-0.5
+                 torch.nn.init.normal_(module.weight, mean=0.0, std=std)
+             elif isinstance(module, OpenELMRMSNorm):
+                 if module.weight is not None:
+                     torch.nn.init.ones_(module.weight)
+                 if hasattr(module, "bias") and module.bias is not None:
+                     torch.nn.init.zeros_(module.bias)
+
+         model_dim = config.model_dim
+         n_layers = config.num_transformer_layers
+         std = (model_dim**-0.5) * ((2 * n_layers) ** -0.5)
+         for param_name, param in self.named_parameters():
+             if param_name.endswith("out_proj.weight") or param_name.endswith(
+                 "ffn.proj_2.weight"
+             ):
+                 torch.nn.init.normal_(param, mean=0.0, std=std)
+
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Union[Tuple, BaseModelOutputWithPast]:
+         output_attentions = (
+             output_attentions
+             if output_attentions is not None
+             else self.config.output_attentions
+         )
+         output_hidden_states = (
+             output_hidden_states
+             if output_hidden_states is not None
+             else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = (
+             return_dict if return_dict is not None else self.config.use_return_dict
+         )
+
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError(
+                 "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one"
+             )
+
+         if self.gradient_checkpointing and self.training and use_cache:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
+             )
+             use_cache = False
+
+         if inputs_embeds is None:
+             inputs_embeds = self.token_embeddings(input_ids)
+
+         past_seen_tokens = 0
+         if use_cache:  # kept for BC (cache positions)
+             if not isinstance(past_key_values, StaticCache):
+                 past_key_values = DynamicCache.from_legacy_cache(past_key_values)
+                 past_seen_tokens = past_key_values.get_seq_length()
+
+         if cache_position is None:
+             cache_position = torch.arange(
+                 past_seen_tokens,
+                 past_seen_tokens + inputs_embeds.shape[1],
+                 device=inputs_embeds.device,
+             )
+
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+
+         causal_mask = self._update_causal_mask(attention_mask, inputs_embeds)
+
+         # embed positions
+         hidden_states = inputs_embeds
+
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+         next_decoder_cache = None
+
+         for decoder_layer in self.layers:
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+
+             if self.gradient_checkpointing and self.training:
+                 layer_outputs = self._gradient_checkpointing_func(
+                     decoder_layer.__call__,
+                     hidden_states,
+                     causal_mask,
+                     position_ids,
+                     past_key_values,
+                     output_attentions,
+                     use_cache,
+                     cache_position,
+                 )
+             else:
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=causal_mask,
+                     position_ids=position_ids,
+                     past_key_value=past_key_values,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                     cache_position=cache_position,
+                 )
+
+             hidden_states = layer_outputs[0]
+
+             if use_cache:
+                 next_decoder_cache = layer_outputs[2 if output_attentions else 1]
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+         hidden_states = self.norm(hidden_states)
+
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+
+         next_cache = None
+         if use_cache:
+             next_cache = (
+                 next_decoder_cache.to_legacy_cache()
+                 if isinstance(next_decoder_cache, Cache)
+                 else next_decoder_cache
+             )
+         if not return_dict:
+             return tuple(
+                 v
+                 for v in [hidden_states, next_cache, all_hidden_states, all_self_attns]
+                 if v is not None
+             )
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=next_cache,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
+
+     def _update_causal_mask(self, attention_mask, input_tensor):
+         if self.config._attn_implementation == "flash_attention_2":
+             if attention_mask is not None and 0.0 in attention_mask:
+                 return attention_mask
+             return None
+
+         batch_size, seq_length = input_tensor.shape[:2]
+         dtype = input_tensor.dtype
+         device = input_tensor.device
+
+         # support going beyond cached `max_position_embedding`
+         if seq_length > self.causal_mask.shape[-1]:
+             causal_mask = torch.full(
+                 (2 * self.causal_mask.shape[-1], 2 * self.causal_mask.shape[-1]),
+                 fill_value=1,
+             )
+             self.register_buffer(
+                 "causal_mask", torch.triu(causal_mask, diagonal=1), persistent=False
+             )
+
+         # We use the current dtype to avoid any overflows
+         min_dtype = torch.finfo(dtype).min
+         causal_mask = (
+             self.causal_mask[None, None, :, :].repeat(batch_size, 1, 1, 1).to(dtype)
+             * min_dtype
+         )
+
+         causal_mask = causal_mask.to(dtype=dtype, device=device)
+         if attention_mask is not None and attention_mask.dim() == 2:
+             mask_length = attention_mask.shape[-1]
+             padding_mask = causal_mask[..., :mask_length].eq(0.0) * attention_mask[
+                 :, None, None, :
+             ].eq(0.0)
+             causal_mask[..., :mask_length] = causal_mask[..., :mask_length].masked_fill(
+                 padding_mask, min_dtype
+             )
+
+         if self.config._attn_implementation == "sdpa" and attention_mask is not None:
+             # For dynamo, rather use a check on fullgraph=True once this is possible (https://github.com/pytorch/pytorch/pull/120400).
+             is_tracing = (
+                 torch.jit.is_tracing()
+                 or isinstance(input_tensor, torch.fx.Proxy)
+                 or (hasattr(torch, "_dynamo") and torch._dynamo.is_compiling())
+             )
+             if not is_tracing and torch.any(attention_mask != 1):
+                 # Attend to all tokens in masked rows from the causal_mask, for example the relevant first rows when
+                 # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
+                 # Details: https://github.com/pytorch/pytorch/issues/110213
+                 causal_mask = causal_mask.mul(
+                     ~torch.all(causal_mask == min_dtype, dim=-1, keepdim=True)
+                 ).to(dtype)
+
+         return causal_mask
+
+
+ class OpenELMForCausalLM(OpenELMPreTrainedModel):
+     _tied_weights_keys = ["lm_head.weight"]
+
+     def __init__(self, config: OpenELMConfig):
+         super().__init__(config)
+         self.transformer = OpenELMModel(config)
+         self.vocab_size = config.vocab_size
+         if config.share_input_output_layers:
+             self.lm_head = None
+         else:
+             self.lm_head = nn.Linear(config.model_dim, config.vocab_size, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.transformer.token_embeddings
+
+     def set_input_embeddings(self, value):
+         self.transformer.token_embeddings = value
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+
+     def set_decoder(self, decoder):
+         self.transformer = decoder
+
+     def get_decoder(self):
+         return self.transformer
+
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Union[Tuple, CausalLMOutputWithPast]:
+         output_attentions = (
+             output_attentions
+             if output_attentions is not None
+             else self.config.output_attentions
+         )
+         output_hidden_states = (
+             output_hidden_states
+             if output_hidden_states is not None
+             else self.config.output_hidden_states
+         )
+         return_dict = (
+             return_dict if return_dict is not None else self.config.use_return_dict
+         )
+         # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.transformer(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+             cache_position=cache_position,
+         )
+
+         hidden_states = outputs[0]
+         if self.lm_head is None:
+             # shared
+             logits = F.linear(
+                 hidden_states, weight=self.transformer.token_embeddings.weight
+             )
+         else:
+             logits = self.lm_head(hidden_states)
+         logits = logits[:, : self.config.vocab_size]
+         loss = None
+         if labels is not None:
+             # Shift so that tokens < n predict n
+             shift_logits = logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss()
+             shift_logits = shift_logits.view(-1, self.config.vocab_size)
+             shift_labels = shift_labels.view(-1)
+             # Enable model parallelism
+             shift_labels = shift_labels.to(shift_logits.device)
+             loss = loss_fct(shift_logits, shift_labels)
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+
+     def prepare_inputs_for_generation(
+         self,
+         input_ids,
+         past_key_values=None,
+         attention_mask=None,
+         inputs_embeds=None,
+         **kwargs,
+     ):
+         past_length = 0
+         if past_key_values is not None:
+             if isinstance(past_key_values, Cache):
+                 cache_length = past_key_values.get_seq_length()
+                 past_length = past_key_values.seen_tokens
+                 max_cache_length = past_key_values.get_max_length()
+             else:
+                 cache_length = past_length = past_key_values[0][0].shape[2]
+                 max_cache_length = None
+
+             # Keep only the unprocessed tokens:
+             # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
+             #     some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
+             #     input)
+             if (
+                 attention_mask is not None
+                 and attention_mask.shape[1] > input_ids.shape[1]
+             ):
+                 input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
+             # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
+             #     input_ids based on the past_length.
+             elif past_length < input_ids.shape[1]:
+                 input_ids = input_ids[:, past_length:]
+             # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
+
+             # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
+             if (
+                 max_cache_length is not None
+                 and attention_mask is not None
+                 and cache_length + input_ids.shape[1] > max_cache_length
+             ):
+                 attention_mask = attention_mask[:, -max_cache_length:]
+
+         position_ids = kwargs.get("position_ids", None)
+         if attention_mask is not None and position_ids is None:
+             # create position_ids on the fly for batch generation
+             position_ids = attention_mask.long().cumsum(-1) - 1
+             position_ids.masked_fill_(attention_mask == 0, 1)
+             if past_key_values:
+                 position_ids = position_ids[:, -input_ids.shape[1] :]
+
+         if self.generation_config.cache_implementation == "static":
+             # generation with static cache
+             cache_position = kwargs.get("cache_position", None)
+             if cache_position is None:
+                 past_length = 0
+             else:
+                 past_length = cache_position[-1] + 1
+             input_ids = input_ids[:, past_length:]
+             position_ids = position_ids[:, past_length:]
+
+         # we should only keep a `cache_position` in generate, and do +=1.
+         # same goes for position ids. Could also help with continued generation.
+         cache_position = torch.arange(
+             past_length,
+             past_length + position_ids.shape[-1],
+             device=position_ids.device,
+         )
+
+         # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_key_values is None:
+             model_inputs = {"inputs_embeds": inputs_embeds}
+         else:
+             # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+             # recompiles graphs as the stride of the inputs is a guard. Ref: https://github.com/huggingface/transformers/pull/29114
+             # We could use `next_tokens` directly instead.
+             model_inputs = {"input_ids": input_ids.contiguous()}
+
+         model_inputs.update(
+             {
+                 "position_ids": position_ids.contiguous(),
+                 "cache_position": cache_position,
+                 "past_key_values": past_key_values,
+                 "use_cache": kwargs.get("use_cache"),
+                 "attention_mask": attention_mask,
+             }
+         )
+         return model_inputs
+
+     @staticmethod
+     def _reorder_cache(past_key_values, beam_idx):
+         reordered_past = ()
+         for layer_past in past_key_values:
+             reordered_past += (
+                 tuple(
+                     past_state.index_select(0, beam_idx.to(past_state.device))
+                     for past_state in layer_past
+                 ),
+             )
+         return reordered_past
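
An end-to-end generation sketch continuing from the loading example above (the prompt is illustrative only); `use_cache=True` exercises the `DynamicCache` path in `OpenELMModel.forward` and the `prepare_inputs_for_generation` logic above:

```python
inputs = tokenizer("今天天氣", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32, use_cache=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```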
runs/May27_23-45-48_n0qbsictr1716813819608-jbj64/events.out.tfevents.1716824772.n0qbsictr1716813819608-jbj64.1320.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:186112c031ab2875f4ed132a505e75f2c27dbf37de7f820d3b7881f5ac93aaf6
- size 259252
+ oid sha256:4832002fc33b179307877efe909c118aed5d2e1b750b1887d688125658e5126e
+ size 292311
trainer_log.jsonl CHANGED
@@ -1199,3 +1199,158 @@
  {"current_steps": 11990, "total_steps": 13550, "loss": 2.7833, "learning_rate": 3.983569534229864e-06, "epoch": 0.8848708487084871, "percentage": 88.49, "elapsed_time": "2 days, 23:57:28", "remaining_time": "9:21:44"}
  {"current_steps": 12000, "total_steps": 13550, "loss": 2.8684, "learning_rate": 3.933340092148202e-06, "epoch": 0.8856088560885609, "percentage": 88.56, "elapsed_time": "3 days, 0:01:05", "remaining_time": "9:18:08"}
  {"current_steps": 12010, "total_steps": 13550, "loss": 2.8399, "learning_rate": 3.883416369200399e-06, "epoch": 0.8863468634686347, "percentage": 88.63, "elapsed_time": "3 days, 0:05:05", "remaining_time": "9:14:35"}
+ {"current_steps": 12020, "total_steps": 13550, "loss": 2.837, "learning_rate": 3.8337986967028e-06, "epoch": 0.8870848708487085, "percentage": 88.71, "elapsed_time": "3 days, 0:08:42", "remaining_time": "9:10:59"}
+ {"current_steps": 12030, "total_steps": 13550, "loss": 2.8523, "learning_rate": 3.7844874039406674e-06, "epoch": 0.8878228782287823, "percentage": 88.78, "elapsed_time": "3 days, 0:12:18", "remaining_time": "9:07:23"}
+ {"current_steps": 12040, "total_steps": 13550, "loss": 2.8815, "learning_rate": 3.7354828181659695e-06, "epoch": 0.888560885608856, "percentage": 88.86, "elapsed_time": "3 days, 0:15:55", "remaining_time": "9:03:47"}
+ {"current_steps": 12050, "total_steps": 13550, "loss": 2.7918, "learning_rate": 3.6867852645952494e-06, "epoch": 0.8892988929889298, "percentage": 88.93, "elapsed_time": "3 days, 0:19:31", "remaining_time": "9:00:11"}
+ {"current_steps": 12060, "total_steps": 13550, "loss": 2.8106, "learning_rate": 3.6383950664074405e-06, "epoch": 0.8900369003690037, "percentage": 89.0, "elapsed_time": "3 days, 0:23:08", "remaining_time": "8:56:35"}
+ {"current_steps": 12070, "total_steps": 13550, "loss": 2.8244, "learning_rate": 3.5903125447417196e-06, "epoch": 0.8907749077490775, "percentage": 89.08, "elapsed_time": "3 days, 0:26:45", "remaining_time": "8:52:59"}
+ {"current_steps": 12080, "total_steps": 13550, "loss": 2.8061, "learning_rate": 3.5425380186953904e-06, "epoch": 0.8915129151291513, "percentage": 89.15, "elapsed_time": "3 days, 0:30:22", "remaining_time": "8:49:23"}
+ {"current_steps": 12090, "total_steps": 13550, "loss": 2.9384, "learning_rate": 3.495071805321759e-06, "epoch": 0.8922509225092251, "percentage": 89.23, "elapsed_time": "3 days, 0:33:58", "remaining_time": "8:45:47"}
+ {"current_steps": 12100, "total_steps": 13550, "loss": 2.7863, "learning_rate": 3.447914219628029e-06, "epoch": 0.8929889298892989, "percentage": 89.3, "elapsed_time": "3 days, 0:37:35", "remaining_time": "8:42:11"}
+ {"current_steps": 12110, "total_steps": 13550, "loss": 2.8553, "learning_rate": 3.4010655745731865e-06, "epoch": 0.8937269372693727, "percentage": 89.37, "elapsed_time": "3 days, 0:41:12", "remaining_time": "8:38:35"}
+ {"current_steps": 12120, "total_steps": 13550, "loss": 2.7823, "learning_rate": 3.354526181066003e-06, "epoch": 0.8944649446494465, "percentage": 89.45, "elapsed_time": "3 days, 0:44:48", "remaining_time": "8:34:59"}
+ {"current_steps": 12130, "total_steps": 13550, "loss": 2.7281, "learning_rate": 3.308296347962875e-06, "epoch": 0.8952029520295203, "percentage": 89.52, "elapsed_time": "3 days, 0:48:24", "remaining_time": "8:31:23"}
+ {"current_steps": 12140, "total_steps": 13550, "loss": 2.8478, "learning_rate": 3.2623763820658237e-06, "epoch": 0.8959409594095941, "percentage": 89.59, "elapsed_time": "3 days, 0:52:01", "remaining_time": "8:27:47"}
+ {"current_steps": 12150, "total_steps": 13550, "loss": 2.7823, "learning_rate": 3.2167665881204567e-06, "epoch": 0.8966789667896679, "percentage": 89.67, "elapsed_time": "3 days, 0:55:37", "remaining_time": "8:24:11"}
+ {"current_steps": 12160, "total_steps": 13550, "loss": 2.8281, "learning_rate": 3.171467268813938e-06, "epoch": 0.8974169741697416, "percentage": 89.74, "elapsed_time": "3 days, 0:59:14", "remaining_time": "8:20:35"}
+ {"current_steps": 12170, "total_steps": 13550, "loss": 2.7918, "learning_rate": 3.1264787247729908e-06, "epoch": 0.8981549815498155, "percentage": 89.82, "elapsed_time": "3 days, 1:02:51", "remaining_time": "8:16:59"}
+ {"current_steps": 12180, "total_steps": 13550, "loss": 2.793, "learning_rate": 3.0818012545618835e-06, "epoch": 0.8988929889298893, "percentage": 89.89, "elapsed_time": "3 days, 1:06:27", "remaining_time": "8:13:23"}
+ {"current_steps": 12190, "total_steps": 13550, "loss": 2.7829, "learning_rate": 3.0374351546804514e-06, "epoch": 0.8996309963099631, "percentage": 89.96, "elapsed_time": "3 days, 1:10:03", "remaining_time": "8:09:47"}
+ {"current_steps": 12200, "total_steps": 13550, "loss": 2.8107, "learning_rate": 2.9933807195621445e-06, "epoch": 0.9003690036900369, "percentage": 90.04, "elapsed_time": "3 days, 1:13:40", "remaining_time": "8:06:11"}
+ {"current_steps": 12210, "total_steps": 13550, "loss": 2.8524, "learning_rate": 2.9496382415720723e-06, "epoch": 0.9011070110701107, "percentage": 90.11, "elapsed_time": "3 days, 1:17:16", "remaining_time": "8:02:35"}
+ {"current_steps": 12220, "total_steps": 13550, "loss": 2.8215, "learning_rate": 2.9062080110050515e-06, "epoch": 0.9018450184501845, "percentage": 90.18, "elapsed_time": "3 days, 1:20:53", "remaining_time": "7:58:59"}
+ {"current_steps": 12230, "total_steps": 13550, "loss": 2.835, "learning_rate": 2.8630903160836773e-06, "epoch": 0.9025830258302583, "percentage": 90.26, "elapsed_time": "3 days, 1:24:29", "remaining_time": "7:55:22"}
+ {"current_steps": 12240, "total_steps": 13550, "loss": 2.829, "learning_rate": 2.820285442956422e-06, "epoch": 0.9033210332103321, "percentage": 90.33, "elapsed_time": "3 days, 1:28:05", "remaining_time": "7:51:46"}
+ {"current_steps": 12250, "total_steps": 13550, "loss": 2.7945, "learning_rate": 2.7777936756957333e-06, "epoch": 0.9040590405904059, "percentage": 90.41, "elapsed_time": "3 days, 1:31:42", "remaining_time": "7:48:10"}
+ {"current_steps": 12260, "total_steps": 13550, "loss": 2.8904, "learning_rate": 2.7356152962961567e-06, "epoch": 0.9047970479704797, "percentage": 90.48, "elapsed_time": "3 days, 1:35:19", "remaining_time": "7:44:34"}
+ {"current_steps": 12270, "total_steps": 13550, "loss": 2.8889, "learning_rate": 2.6937505846724165e-06, "epoch": 0.9055350553505535, "percentage": 90.55, "elapsed_time": "3 days, 1:38:55", "remaining_time": "7:40:58"}
+ {"current_steps": 12280, "total_steps": 13550, "loss": 2.836, "learning_rate": 2.6521998186576357e-06, "epoch": 0.9062730627306274, "percentage": 90.63, "elapsed_time": "3 days, 1:42:32", "remaining_time": "7:37:22"}
+ {"current_steps": 12290, "total_steps": 13550, "loss": 2.7639, "learning_rate": 2.610963274001438e-06, "epoch": 0.9070110701107011, "percentage": 90.7, "elapsed_time": "3 days, 1:46:08", "remaining_time": "7:33:46"}
+ {"current_steps": 12300, "total_steps": 13550, "loss": 2.7735, "learning_rate": 2.5700412243681417e-06, "epoch": 0.9077490774907749, "percentage": 90.77, "elapsed_time": "3 days, 1:49:44", "remaining_time": "7:30:10"}
+ {"current_steps": 12310, "total_steps": 13550, "loss": 2.8901, "learning_rate": 2.5294339413349076e-06, "epoch": 0.9084870848708487, "percentage": 90.85, "elapsed_time": "3 days, 1:53:21", "remaining_time": "7:26:34"}
+ {"current_steps": 12320, "total_steps": 13550, "loss": 2.8662, "learning_rate": 2.4891416943900014e-06, "epoch": 0.9092250922509225, "percentage": 90.92, "elapsed_time": "3 days, 1:56:57", "remaining_time": "7:22:58"}
+ {"current_steps": 12330, "total_steps": 13550, "loss": 2.8268, "learning_rate": 2.449164750930938e-06, "epoch": 0.9099630996309963, "percentage": 91.0, "elapsed_time": "3 days, 2:00:33", "remaining_time": "7:19:22"}
+ {"current_steps": 12340, "total_steps": 13550, "loss": 2.8246, "learning_rate": 2.409503376262762e-06, "epoch": 0.9107011070110701, "percentage": 91.07, "elapsed_time": "3 days, 2:04:09", "remaining_time": "7:15:46"}
+ {"current_steps": 12350, "total_steps": 13550, "loss": 2.7924, "learning_rate": 2.3701578335962206e-06, "epoch": 0.9114391143911439, "percentage": 91.14, "elapsed_time": "3 days, 2:07:46", "remaining_time": "7:12:10"}
+ {"current_steps": 12360, "total_steps": 13550, "loss": 2.8639, "learning_rate": 2.3311283840460994e-06, "epoch": 0.9121771217712177, "percentage": 91.22, "elapsed_time": "3 days, 2:11:23", "remaining_time": "7:08:34"}
+ {"current_steps": 12370, "total_steps": 13550, "loss": 2.8531, "learning_rate": 2.292415286629418e-06, "epoch": 0.9129151291512915, "percentage": 91.29, "elapsed_time": "3 days, 2:14:59", "remaining_time": "7:04:58"}
+ {"current_steps": 12380, "total_steps": 13550, "loss": 2.8349, "learning_rate": 2.254018798263763e-06, "epoch": 0.9136531365313653, "percentage": 91.37, "elapsed_time": "3 days, 2:18:36", "remaining_time": "7:01:22"}
+ {"current_steps": 12390, "total_steps": 13550, "loss": 2.8225, "learning_rate": 2.2159391737655466e-06, "epoch": 0.9143911439114392, "percentage": 91.44, "elapsed_time": "3 days, 2:22:12", "remaining_time": "6:57:46"}
+ {"current_steps": 12400, "total_steps": 13550, "loss": 2.7716, "learning_rate": 2.1781766658483303e-06, "epoch": 0.915129151291513, "percentage": 91.51, "elapsed_time": "3 days, 2:25:49", "remaining_time": "6:54:10"}
+ {"current_steps": 12410, "total_steps": 13550, "loss": 2.7796, "learning_rate": 2.1407315251211422e-06, "epoch": 0.9158671586715867, "percentage": 91.59, "elapsed_time": "3 days, 2:29:25", "remaining_time": "6:50:34"}
+ {"current_steps": 12420, "total_steps": 13550, "loss": 2.8009, "learning_rate": 2.103604000086856e-06, "epoch": 0.9166051660516605, "percentage": 91.66, "elapsed_time": "3 days, 2:33:02", "remaining_time": "6:46:58"}
+ {"current_steps": 12430, "total_steps": 13550, "loss": 2.8486, "learning_rate": 2.066794337140443e-06, "epoch": 0.9173431734317343, "percentage": 91.73, "elapsed_time": "3 days, 2:36:38", "remaining_time": "6:43:21"}
+ {"current_steps": 12440, "total_steps": 13550, "loss": 2.7234, "learning_rate": 2.0303027805674445e-06, "epoch": 0.9180811808118081, "percentage": 91.81, "elapsed_time": "3 days, 2:40:15", "remaining_time": "6:39:45"}
+ {"current_steps": 12450, "total_steps": 13550, "loss": 2.7963, "learning_rate": 1.994129572542286e-06, "epoch": 0.9188191881918819, "percentage": 91.88, "elapsed_time": "3 days, 2:43:51", "remaining_time": "6:36:09"}
+ {"current_steps": 12460, "total_steps": 13550, "loss": 2.8314, "learning_rate": 1.958274953126693e-06, "epoch": 0.9195571955719557, "percentage": 91.96, "elapsed_time": "3 days, 2:47:28", "remaining_time": "6:32:33"}
+ {"current_steps": 12470, "total_steps": 13550, "loss": 2.8796, "learning_rate": 1.922739160268089e-06, "epoch": 0.9202952029520295, "percentage": 92.03, "elapsed_time": "3 days, 2:51:04", "remaining_time": "6:28:57"}
+ {"current_steps": 12480, "total_steps": 13550, "loss": 2.7904, "learning_rate": 1.8875224297980332e-06, "epoch": 0.9210332103321033, "percentage": 92.1, "elapsed_time": "3 days, 2:54:41", "remaining_time": "6:25:21"}
+ {"current_steps": 12490, "total_steps": 13550, "loss": 2.7583, "learning_rate": 1.8526249954306241e-06, "epoch": 0.9217712177121771, "percentage": 92.18, "elapsed_time": "3 days, 2:58:18", "remaining_time": "6:21:45"}
+ {"current_steps": 12500, "total_steps": 13550, "loss": 2.8608, "learning_rate": 1.8180470887609769e-06, "epoch": 0.922509225092251, "percentage": 92.25, "elapsed_time": "3 days, 3:01:54", "remaining_time": "6:18:09"}
+ {"current_steps": 12510, "total_steps": 13550, "loss": 2.8282, "learning_rate": 1.7837889392636864e-06, "epoch": 0.9232472324723248, "percentage": 92.32, "elapsed_time": "3 days, 3:05:30", "remaining_time": "6:14:33"}
+ {"current_steps": 12520, "total_steps": 13550, "loss": 2.8048, "learning_rate": 1.7498507742912784e-06, "epoch": 0.9239852398523986, "percentage": 92.4, "elapsed_time": "3 days, 3:09:07", "remaining_time": "6:10:57"}
+ {"current_steps": 12530, "total_steps": 13550, "loss": 2.8095, "learning_rate": 1.7162328190727217e-06, "epoch": 0.9247232472324723, "percentage": 92.47, "elapsed_time": "3 days, 3:12:44", "remaining_time": "6:07:21"}
+ {"current_steps": 12540, "total_steps": 13550, "loss": 2.7822, "learning_rate": 1.682935296711935e-06, "epoch": 0.9254612546125461, "percentage": 92.55, "elapsed_time": "3 days, 3:16:20", "remaining_time": "6:03:45"}
+ {"current_steps": 12550, "total_steps": 13550, "loss": 2.8494, "learning_rate": 1.6499584281862935e-06, "epoch": 0.9261992619926199, "percentage": 92.62, "elapsed_time": "3 days, 3:19:56", "remaining_time": "6:00:09"}
+ {"current_steps": 12560, "total_steps": 13550, "loss": 2.8629, "learning_rate": 1.6173024323451747e-06, "epoch": 0.9269372693726937, "percentage": 92.69, "elapsed_time": "3 days, 3:23:33", "remaining_time": "5:56:33"}
+ {"current_steps": 12570, "total_steps": 13550, "loss": 2.8258, "learning_rate": 1.5849675259084872e-06, "epoch": 0.9276752767527675, "percentage": 92.77, "elapsed_time": "3 days, 3:27:09", "remaining_time": "5:52:57"}
+ {"current_steps": 12580, "total_steps": 13550, "loss": 2.8093, "learning_rate": 1.5529539234652668e-06, "epoch": 0.9284132841328413, "percentage": 92.84, "elapsed_time": "3 days, 3:30:46", "remaining_time": "5:49:21"}
+ {"current_steps": 12590, "total_steps": 13550, "loss": 2.828, "learning_rate": 1.5212618374722155e-06, "epoch": 0.9291512915129151, "percentage": 92.92, "elapsed_time": "3 days, 3:34:22", "remaining_time": "5:45:45"}
+ {"current_steps": 12600, "total_steps": 13550, "loss": 2.8305, "learning_rate": 1.4898914782523143e-06, "epoch": 0.9298892988929889, "percentage": 92.99, "elapsed_time": "3 days, 3:37:59", "remaining_time": "5:42:09"}
+ {"current_steps": 12610, "total_steps": 13550, "loss": 2.7875, "learning_rate": 1.458843053993403e-06, "epoch": 0.9306273062730628, "percentage": 93.06, "elapsed_time": "3 days, 3:41:35", "remaining_time": "5:38:32"}
+ {"current_steps": 12620, "total_steps": 13550, "loss": 2.8113, "learning_rate": 1.4281167707468457e-06, "epoch": 0.9313653136531366, "percentage": 93.14, "elapsed_time": "3 days, 3:45:12", "remaining_time": "5:34:56"}
+ {"current_steps": 12630, "total_steps": 13550, "loss": 2.8511, "learning_rate": 1.3977128324261068e-06, "epoch": 0.9321033210332104, "percentage": 93.21, "elapsed_time": "3 days, 3:48:48", "remaining_time": "5:31:20"}
+ {"current_steps": 12640, "total_steps": 13550, "loss": 2.7979, "learning_rate": 1.3676314408054391e-06, "epoch": 0.9328413284132842, "percentage": 93.28, "elapsed_time": "3 days, 3:52:25", "remaining_time": "5:27:44"}
+ {"current_steps": 12650, "total_steps": 13550, "loss": 2.8319, "learning_rate": 1.3378727955185244e-06, "epoch": 0.933579335793358, "percentage": 93.36, "elapsed_time": "3 days, 3:56:01", "remaining_time": "5:24:08"}
+ {"current_steps": 12660, "total_steps": 13550, "loss": 2.8245, "learning_rate": 1.3084370940571577e-06, "epoch": 0.9343173431734317, "percentage": 93.43, "elapsed_time": "3 days, 3:59:37", "remaining_time": "5:20:32"}
+ {"current_steps": 12670, "total_steps": 13550, "loss": 2.7542, "learning_rate": 1.2793245317699321e-06, "epoch": 0.9350553505535055, "percentage": 93.51, "elapsed_time": "3 days, 4:03:14", "remaining_time": "5:16:56"}
+ {"current_steps": 12680, "total_steps": 13550, "loss": 2.7729, "learning_rate": 1.2505353018609444e-06, "epoch": 0.9357933579335793, "percentage": 93.58, "elapsed_time": "3 days, 4:06:50", "remaining_time": "5:13:20"}
+ {"current_steps": 12690, "total_steps": 13550, "loss": 2.8164, "learning_rate": 1.2220695953885031e-06, "epoch": 0.9365313653136531, "percentage": 93.65, "elapsed_time": "3 days, 4:10:27", "remaining_time": "5:09:44"}
+ {"current_steps": 12700, "total_steps": 13550, "loss": 2.8644, "learning_rate": 1.1939276012638723e-06, "epoch": 0.9372693726937269, "percentage": 93.73, "elapsed_time": "3 days, 4:14:03", "remaining_time": "5:06:08"}
+ {"current_steps": 12710, "total_steps": 13550, "loss": 2.8716, "learning_rate": 1.1661095062500237e-06, "epoch": 0.9380073800738007, "percentage": 93.8, "elapsed_time": "3 days, 4:17:40", "remaining_time": "5:02:32"}
+ {"current_steps": 12720, "total_steps": 13550, "loss": 2.8307, "learning_rate": 1.1386154949603934e-06, "epoch": 0.9387453874538746, "percentage": 93.87, "elapsed_time": "3 days, 4:21:16", "remaining_time": "4:58:56"}
+ {"current_steps": 12730, "total_steps": 13550, "loss": 2.7868, "learning_rate": 1.1114457498576258e-06, "epoch": 0.9394833948339484, "percentage": 93.95, "elapsed_time": "3 days, 4:24:52", "remaining_time": "4:55:20"}
+ {"current_steps": 12740, "total_steps": 13550, "loss": 2.8357, "learning_rate": 1.0846004512524211e-06, "epoch": 0.9402214022140222, "percentage": 94.02, "elapsed_time": "3 days, 4:28:28", "remaining_time": "4:51:43"}
+ {"current_steps": 12750, "total_steps": 13550, "loss": 2.8843, "learning_rate": 1.0580797773022733e-06, "epoch": 0.940959409594096, "percentage": 94.1, "elapsed_time": "3 days, 4:32:05", "remaining_time": "4:48:07"}
+ {"current_steps": 12760, "total_steps": 13550, "loss": 2.8038, "learning_rate": 1.03188390401035e-06, "epoch": 0.9416974169741698, "percentage": 94.17, "elapsed_time": "3 days, 4:35:41", "remaining_time": "4:44:31"}
+ {"current_steps": 12770, "total_steps": 13550, "loss": 2.813, "learning_rate": 1.006013005224271e-06, "epoch": 0.9424354243542435, "percentage": 94.24, "elapsed_time": "3 days, 4:39:18", "remaining_time": "4:40:55"}
+ {"current_steps": 12780, "total_steps": 13550, "loss": 2.8414, "learning_rate": 9.80467252634998e-07, "epoch": 0.9431734317343173, "percentage": 94.32, "elapsed_time": "3 days, 4:42:54", "remaining_time": "4:37:19"}
+ {"current_steps": 12790, "total_steps": 13550, "loss": 2.7851, "learning_rate": 9.552468157756622e-07, "epoch": 0.9439114391143911, "percentage": 94.39, "elapsed_time": "3 days, 4:46:30", "remaining_time": "4:33:43"}
+ {"current_steps": 12800, "total_steps": 13550, "loss": 2.8378, "learning_rate": 9.303518620204677e-07, "epoch": 0.9446494464944649, "percentage": 94.46, "elapsed_time": "3 days, 4:50:07", "remaining_time": "4:30:07"}
+ {"current_steps": 12810, "total_steps": 13550, "loss": 2.7366, "learning_rate": 9.057825565835399e-07, "epoch": 0.9453874538745387, "percentage": 94.54, "elapsed_time": "3 days, 4:53:43", "remaining_time": "4:26:31"}
+ {"current_steps": 12820, "total_steps": 13550, "loss": 2.7483, "learning_rate": 8.815390625178887e-07, "epoch": 0.9461254612546125, "percentage": 94.61, "elapsed_time": "3 days, 4:57:20", "remaining_time": "4:22:55"}
+ {"current_steps": 12830, "total_steps": 13550, "loss": 2.7874, "learning_rate": 8.576215407142651e-07, "epoch": 0.9468634686346864, "percentage": 94.69, "elapsed_time": "3 days, 5:00:56", "remaining_time": "4:19:19"}
+ {"current_steps": 12840, "total_steps": 13550, "loss": 2.8252, "learning_rate": 8.340301499001446e-07, "epoch": 0.9476014760147602, "percentage": 94.76, "elapsed_time": "3 days, 5:04:33", "remaining_time": "4:15:43"}
+ {"current_steps": 12850, "total_steps": 13550, "loss": 2.8445, "learning_rate": 8.107650466386285e-07, "epoch": 0.948339483394834, "percentage": 94.83, "elapsed_time": "3 days, 5:08:09", "remaining_time": "4:12:07"}
+ {"current_steps": 12860, "total_steps": 13550, "loss": 2.8411, "learning_rate": 7.878263853274281e-07, "epoch": 0.9490774907749078, "percentage": 94.91, "elapsed_time": "3 days, 5:11:46", "remaining_time": "4:08:30"}
+ {"current_steps": 12870, "total_steps": 13550, "loss": 2.8118, "learning_rate": 7.652143181978655e-07, "epoch": 0.9498154981549816, "percentage": 94.98, "elapsed_time": "3 days, 5:15:22", "remaining_time": "4:04:54"}
+ {"current_steps": 12880, "total_steps": 13550, "loss": 2.8086, "learning_rate": 7.429289953138019e-07, "epoch": 0.9505535055350554, "percentage": 95.06, "elapsed_time": "3 days, 5:18:58", "remaining_time": "4:01:18"}
+ {"current_steps": 12890, "total_steps": 13550, "loss": 2.8468, "learning_rate": 7.209705645706944e-07, "epoch": 0.9512915129151291, "percentage": 95.13, "elapsed_time": "3 days, 5:22:35", "remaining_time": "3:57:42"}
+ {"current_steps": 12900, "total_steps": 13550, "loss": 2.8114, "learning_rate": 6.993391716946019e-07, "epoch": 0.9520295202952029, "percentage": 95.2, "elapsed_time": "3 days, 5:26:11", "remaining_time": "3:54:06"}
+ {"current_steps": 12910, "total_steps": 13550, "loss": 2.8352, "learning_rate": 6.780349602411918e-07, "epoch": 0.9527675276752767, "percentage": 95.28, "elapsed_time": "3 days, 5:29:48", "remaining_time": "3:50:30"}
+ {"current_steps": 12920, "total_steps": 13550, "loss": 2.8013, "learning_rate": 6.570580715948404e-07, "epoch": 0.9535055350553505, "percentage": 95.35, "elapsed_time": "3 days, 5:33:24", "remaining_time": "3:46:54"}
+ {"current_steps": 12930, "total_steps": 13550, "loss": 2.8368, "learning_rate": 6.364086449676232e-07, "epoch": 0.9542435424354243, "percentage": 95.42, "elapsed_time": "3 days, 5:37:01", "remaining_time": "3:43:18"}
+ {"current_steps": 12940, "total_steps": 13550, "loss": 2.8559, "learning_rate": 6.160868173984591e-07, "epoch": 0.9549815498154982, "percentage": 95.5, "elapsed_time": "3 days, 5:40:37", "remaining_time": "3:39:42"}
+ {"current_steps": 12950, "total_steps": 13550, "loss": 2.85, "learning_rate": 5.960927237521563e-07, "epoch": 0.955719557195572, "percentage": 95.57, "elapsed_time": "3 days, 5:44:14", "remaining_time": "3:36:06"}
+ {"current_steps": 12960, "total_steps": 13550, "loss": 2.9074, "learning_rate": 5.764264967185462e-07, "epoch": 0.9564575645756458, "percentage": 95.65, "elapsed_time": "3 days, 5:47:50", "remaining_time": "3:32:30"}
+ {"current_steps": 12970, "total_steps": 13550, "loss": 2.7595, "learning_rate": 5.570882668115784e-07, "epoch": 0.9571955719557196, "percentage": 95.72, "elapsed_time": "3 days, 5:51:27", "remaining_time": "3:28:54"}
+ {"current_steps": 12980, "total_steps": 13550, "loss": 2.8024, "learning_rate": 5.380781623684661e-07, "epoch": 0.9579335793357934, "percentage": 95.79, "elapsed_time": "3 days, 5:55:04", "remaining_time": "3:25:17"}
+ {"current_steps": 12990, "total_steps": 13550, "loss": 2.8231, "learning_rate": 5.193963095488419e-07, "epoch": 0.9586715867158672, "percentage": 95.87, "elapsed_time": "3 days, 5:58:40", "remaining_time": "3:21:41"}
+ {"current_steps": 13000, "total_steps": 13550, "loss": 2.8898, "learning_rate": 5.010428323339033e-07, "epoch": 0.959409594095941, "percentage": 95.94, "elapsed_time": "3 days, 6:02:17", "remaining_time": "3:18:05"}
+ {"current_steps": 13010, "total_steps": 13550, "loss": 2.8558, "learning_rate": 4.830178525256079e-07, "epoch": 0.9601476014760147, "percentage": 96.01, "elapsed_time": "3 days, 6:05:53", "remaining_time": "3:14:29"}
+ {"current_steps": 13020, "total_steps": 13550, "loss": 2.8007, "learning_rate": 4.653214897458513e-07, "epoch": 0.9608856088560885, "percentage": 96.09, "elapsed_time": "3 days, 6:09:29", "remaining_time": "3:10:53"}
+ {"current_steps": 13030, "total_steps": 13550, "loss": 2.8271, "learning_rate": 4.4795386143567374e-07, "epoch": 0.9616236162361623, "percentage": 96.16, "elapsed_time": "3 days, 6:13:06", "remaining_time": "3:07:17"}
+ {"current_steps": 13040, "total_steps": 13550, "loss": 2.8371, "learning_rate": 4.309150828544939e-07, "epoch": 0.9623616236162361, "percentage": 96.24, "elapsed_time": "3 days, 6:16:42", "remaining_time": "3:03:41"}
+ {"current_steps": 13050, "total_steps": 13550, "loss": 2.8808, "learning_rate": 4.1420526707933727e-07, "epoch": 0.9630996309963099, "percentage": 96.31, "elapsed_time": "3 days, 6:20:19", "remaining_time": "3:00:05"}
+ {"current_steps": 13060, "total_steps": 13550, "loss": 2.8506, "learning_rate": 3.978245250040702e-07, "epoch": 0.9638376383763838, "percentage": 96.38, "elapsed_time": "3 days, 6:23:56", "remaining_time": "2:56:29"}
+ {"current_steps": 13070, "total_steps": 13550, "loss": 2.8261, "learning_rate": 3.817729653386892e-07, "epoch": 0.9645756457564576, "percentage": 96.46, "elapsed_time": "3 days, 6:27:33", "remaining_time": "2:52:53"}
+ {"current_steps": 13080, "total_steps": 13550, "loss": 2.8319, "learning_rate": 3.660506946085829e-07, "epoch": 0.9653136531365314, "percentage": 96.53, "elapsed_time": "3 days, 6:31:09", "remaining_time": "2:49:17"}
+ {"current_steps": 13090, "total_steps": 13550, "loss": 2.8326, "learning_rate": 3.506578171538377e-07, "epoch": 0.9660516605166052, "percentage": 96.61, "elapsed_time": "3 days, 6:34:45", "remaining_time": "2:45:40"}
+ {"current_steps": 13100, "total_steps": 13550, "loss": 2.7896, "learning_rate": 3.355944351285278e-07, "epoch": 0.966789667896679, "percentage": 96.68, "elapsed_time": "3 days, 6:38:21", "remaining_time": "2:42:04"}
+ {"current_steps": 13110, "total_steps": 13550, "loss": 2.8499, "learning_rate": 3.2086064850004314e-07, "epoch": 0.9675276752767528, "percentage": 96.75, "elapsed_time": "3 days, 6:41:58", "remaining_time": "2:38:28"}
+ {"current_steps": 13120, "total_steps": 13550, "loss": 2.8005, "learning_rate": 3.064565550484455e-07, "epoch": 0.9682656826568266, "percentage": 96.83, "elapsed_time": "3 days, 6:45:34", "remaining_time": "2:34:52"}
+ {"current_steps": 13130, "total_steps": 13550, "loss": 2.8419, "learning_rate": 2.9238225036579693e-07, "epoch": 0.9690036900369003, "percentage": 96.9, "elapsed_time": "3 days, 6:49:11", "remaining_time": "2:31:16"}
+ {"current_steps": 13140, "total_steps": 13550, "loss": 2.8581, "learning_rate": 2.7863782785552685e-07, "epoch": 0.9697416974169741, "percentage": 96.97, "elapsed_time": "3 days, 6:52:48", "remaining_time": "2:27:40"}
+ {"current_steps": 13150, "total_steps": 13550, "loss": 2.8275, "learning_rate": 2.65223378731827e-07, "epoch": 0.9704797047970479, "percentage": 97.05, "elapsed_time": "3 days, 6:56:24", "remaining_time": "2:24:04"}
+ {"current_steps": 13160, "total_steps": 13550, "loss": 2.8673, "learning_rate": 2.521389920190298e-07, "epoch": 0.9712177121771217, "percentage": 97.12, "elapsed_time": "3 days, 7:00:01", "remaining_time": "2:20:28"}
+ {"current_steps": 13170, "total_steps": 13550, "loss": 2.9407, "learning_rate": 2.3938475455103083e-07, "epoch": 0.9719557195571956, "percentage": 97.2, "elapsed_time": "3 days, 7:03:37", "remaining_time": "2:16:52"}
+ {"current_steps": 13180, "total_steps": 13550, "loss": 2.8481, "learning_rate": 2.269607509707006e-07, "epoch": 0.9726937269372694, "percentage": 97.27, "elapsed_time": "3 days, 7:07:13", "remaining_time": "2:13:16"}
+ {"current_steps": 13190, "total_steps": 13550, "loss": 2.7954, "learning_rate": 2.1486706372932375e-07, "epoch": 0.9734317343173432, "percentage": 97.34, "elapsed_time": "3 days, 7:10:50", "remaining_time": "2:09:39"}
+ {"current_steps": 13200, "total_steps": 13550, "loss": 2.8533, "learning_rate": 2.031037730860774e-07, "epoch": 0.974169741697417, "percentage": 97.42, "elapsed_time": "3 days, 7:14:26", "remaining_time": "2:06:03"}
+ {"current_steps": 13210, "total_steps": 13550, "loss": 2.8151, "learning_rate": 1.916709571074482e-07, "epoch": 0.9749077490774908, "percentage": 97.49, "elapsed_time": "3 days, 7:18:03", "remaining_time": "2:02:27"}
+ {"current_steps": 13220, "total_steps": 13550, "loss": 2.8355, "learning_rate": 1.8056869166677703e-07, "epoch": 0.9756457564575646, "percentage": 97.56, "elapsed_time": "3 days, 7:21:39", "remaining_time": "1:58:51"}
+ {"current_steps": 13230, "total_steps": 13550, "loss": 2.8121, "learning_rate": 1.6979705044369297e-07, "epoch": 0.9763837638376384, "percentage": 97.64, "elapsed_time": "3 days, 7:25:15", "remaining_time": "1:55:15"}
+ {"current_steps": 13240, "total_steps": 13550, "loss": 2.9067, "learning_rate": 1.5935610492366915e-07, "epoch": 0.9771217712177122, "percentage": 97.71, "elapsed_time": "3 days, 7:28:51", "remaining_time": "1:51:39"}
+ {"current_steps": 13250, "total_steps": 13550, "loss": 2.7666, "learning_rate": 1.4924592439753416e-07, "epoch": 0.977859778597786, "percentage": 97.79, "elapsed_time": "3 days, 7:32:28", "remaining_time": "1:48:03"}
+ {"current_steps": 13260, "total_steps": 13550, "loss": 2.7254, "learning_rate": 1.394665759610003e-07, "epoch": 0.9785977859778597, "percentage": 97.86, "elapsed_time": "3 days, 7:36:05", "remaining_time": "1:44:27"}
+ {"current_steps": 13270, "total_steps": 13550, "loss": 2.778, "learning_rate": 1.3001812451423068e-07, "epoch": 0.9793357933579335, "percentage": 97.93, "elapsed_time": "3 days, 7:39:41", "remaining_time": "1:40:51"}
+ {"current_steps": 13280, "total_steps": 13550, "loss": 2.809, "learning_rate": 1.209006327614226e-07, "epoch": 0.9800738007380074, "percentage": 98.01, "elapsed_time": "3 days, 7:43:17", "remaining_time": "1:37:15"}
+ {"current_steps": 13290, "total_steps": 13550, "loss": 2.8325, "learning_rate": 1.1211416121035823e-07, "epoch": 0.9808118081180812, "percentage": 98.08, "elapsed_time": "3 days, 7:46:54", "remaining_time": "1:33:38"}
+ {"current_steps": 13300, "total_steps": 13550, "loss": 2.7841, "learning_rate": 1.036587681720269e-07, "epoch": 0.981549815498155, "percentage": 98.15, "elapsed_time": "3 days, 7:50:30", "remaining_time": "1:30:02"}
+ {"current_steps": 13310, "total_steps": 13550, "loss": 2.8358, "learning_rate": 9.55345097602256e-08, "epoch": 0.9822878228782288, "percentage": 98.23, "elapsed_time": "3 days, 7:54:06", "remaining_time": "1:26:26"}
+ {"current_steps": 13320, "total_steps": 13550, "loss": 2.8313, "learning_rate": 8.774143989119798e-08, "epoch": 0.9830258302583026, "percentage": 98.3, "elapsed_time": "3 days, 7:57:43", "remaining_time": "1:22:50"}
+ {"current_steps": 13330, "total_steps": 13550, "loss": 2.8781, "learning_rate": 8.027961028328479e-08, "epoch": 0.9837638376383764, "percentage": 98.38, "elapsed_time": "3 days, 8:01:19", "remaining_time": "1:19:14"}
+ {"current_steps": 13340, "total_steps": 13550, "loss": 2.7926, "learning_rate": 7.314907045653519e-08, "epoch": 0.9845018450184502, "percentage": 98.45, "elapsed_time": "3 days, 8:04:56", "remaining_time": "1:15:38"}
+ {"current_steps": 13350, "total_steps": 13550, "loss": 2.7885, "learning_rate": 6.634986773244034e-08, "epoch": 0.985239852398524, "percentage": 98.52, "elapsed_time": "3 days, 8:08:32", "remaining_time": "1:12:02"}
+ {"current_steps": 13360, "total_steps": 13550, "loss": 2.7721, "learning_rate": 5.988204723356705e-08, "epoch": 0.9859778597785978, "percentage": 98.6, "elapsed_time": "3 days, 8:12:09", "remaining_time": "1:08:26"}
+ {"current_steps": 13370, "total_steps": 13550, "loss": 2.8138, "learning_rate": 5.374565188329683e-08, "epoch": 0.9867158671586715, "percentage": 98.67, "elapsed_time": "3 days, 8:15:46", "remaining_time": "1:04:50"}
+ {"current_steps": 13380, "total_steps": 13550, "loss": 2.7988, "learning_rate": 4.794072240550951e-08, "epoch": 0.9874538745387453, "percentage": 98.75, "elapsed_time": "3 days, 8:19:22", "remaining_time": "1:01:13"}
+ {"current_steps": 13390, "total_steps": 13550, "loss": 2.7823, "learning_rate": 4.246729732434451e-08, "epoch": 0.9881918819188192, "percentage": 98.82, "elapsed_time": "3 days, 8:22:58", "remaining_time": "0:57:37"}
+ {"current_steps": 13400, "total_steps": 13550, "loss": 2.872, "learning_rate": 3.7325412963912235e-08, "epoch": 0.988929889298893, "percentage": 98.89, "elapsed_time": "3 days, 8:26:35", "remaining_time": "0:54:01"}
+ {"current_steps": 13410, "total_steps": 13550, "loss": 2.9374, "learning_rate": 3.251510344807751e-08, "epoch": 0.9896678966789668, "percentage": 98.97, "elapsed_time": "3 days, 8:30:11", "remaining_time": "0:50:25"}
+ {"current_steps": 13420, "total_steps": 13550, "loss": 2.7839, "learning_rate": 2.8036400700232058e-08, "epoch": 0.9904059040590406, "percentage": 99.04, "elapsed_time": "3 days, 8:33:48", "remaining_time": "0:46:49"}
+ {"current_steps": 13430, "total_steps": 13550, "loss": 2.8689, "learning_rate": 2.3889334443055744e-08, "epoch": 0.9911439114391144, "percentage": 99.11, "elapsed_time": "3 days, 8:37:24", "remaining_time": "0:43:13"}
+ {"current_steps": 13440, "total_steps": 13550, "loss": 2.9239, "learning_rate": 2.007393219836118e-08, "epoch": 0.9918819188191882, "percentage": 99.19, "elapsed_time": "3 days, 8:41:01", "remaining_time": "0:39:37"}
+ {"current_steps": 13450, "total_steps": 13550, "loss": 2.8412, "learning_rate": 1.6590219286871655e-08, "epoch": 0.992619926199262, "percentage": 99.26, "elapsed_time": "3 days, 8:44:38", "remaining_time": "0:36:01"}
+ {"current_steps": 13460, "total_steps": 13550, "loss": 2.7462, "learning_rate": 1.3438218828076832e-08, "epoch": 0.9933579335793358, "percentage": 99.34, "elapsed_time": "3 days, 8:48:14", "remaining_time": "0:32:25"}
+ {"current_steps": 13470, "total_steps": 13550, "loss": 2.8598, "learning_rate": 1.0617951740077292e-08, "epoch": 0.9940959409594096, "percentage": 99.41, "elapsed_time": "3 days, 8:51:50", "remaining_time": "0:28:48"}
+ {"current_steps": 13480, "total_steps": 13550, "loss": 2.8083, "learning_rate": 8.12943673943467e-09, "epoch": 0.9948339483394834, "percentage": 99.48, "elapsed_time": "3 days, 8:55:27", "remaining_time": "0:25:12"}
+ {"current_steps": 13490, "total_steps": 13550, "loss": 2.929, "learning_rate": 5.9726903410661786e-09, "epoch": 0.9955719557195571, "percentage": 99.56, "elapsed_time": "3 days, 8:59:03", "remaining_time": "0:21:36"}
+ {"current_steps": 13500, "total_steps": 13550, "loss": 2.844, "learning_rate": 4.147726858100276e-09, "epoch": 0.996309963099631, "percentage": 99.63, "elapsed_time": "3 days, 9:02:40", "remaining_time": "0:18:00"}
+ {"current_steps": 13510, "total_steps": 13550, "loss": 2.8096, "learning_rate": 2.6545584018211613e-09, "epoch": 0.9970479704797048, "percentage": 99.7, "elapsed_time": "3 days, 9:06:17", "remaining_time": "0:14:24"}
+ {"current_steps": 13520, "total_steps": 13550, "loss": 2.8317, "learning_rate": 1.4931948815744e-09, "epoch": 0.9977859778597786, "percentage": 99.78, "elapsed_time": "3 days, 9:09:53", "remaining_time": "0:10:48"}
+ {"current_steps": 13530, "total_steps": 13550, "loss": 2.8792, "learning_rate": 6.636440046892123e-10, "epoch": 0.9985239852398524, "percentage": 99.85, "elapsed_time": "3 days, 9:13:29", "remaining_time": "0:07:12"}
+ {"current_steps": 13540, "total_steps": 13550, "loss": 2.8205, "learning_rate": 1.6591127643961202e-10, "epoch": 0.9992619926199262, "percentage": 99.93, "elapsed_time": "3 days, 9:17:06", "remaining_time": "0:03:36"}
+ {"current_steps": 13550, "total_steps": 13550, "loss": 2.8161, "learning_rate": 0.0, "epoch": 1.0, "percentage": 100.0, "elapsed_time": "3 days, 9:20:43", "remaining_time": "0:00:00"}
+ {"current_steps": 13550, "total_steps": 13550, "epoch": 1.0, "percentage": 100.0, "elapsed_time": "3 days, 9:20:43", "remaining_time": "0:00:00"}
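
The appended entries carry `trainer_log.jsonl` from step 12020 to the final step 13550 of 13550 (epoch 1.0), with the cosine schedule decaying the learning rate to 0 while the loss hovers around 2.8. A minimal sketch for summarizing the log (field names exactly as in the records above; note that the very last record omits `loss`):

```python
import json

steps, losses = [], []
with open("trainer_log.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        if "loss" in rec:  # the final summary record has no loss field
            steps.append(rec["current_steps"])
            losses.append(rec["loss"])

print(f"final logged step: {steps[-1]}")      # 13550
print(f"last logged loss: {losses[-1]:.4f}")  # 2.8161
print(f"mean loss over all logged steps: {sum(losses) / len(losses):.4f}")
```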