hushell commited on
Commit
6e0313f
1 Parent(s): 19769a9

Model save

Browse files
README.md ADDED
@@ -0,0 +1,98 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ tags:
3
+ - trl
4
+ - sft
5
+ - generated_from_trainer
6
+ datasets:
7
+ - generator
8
+ model-index:
9
+ - name: tinyllama_mole_sft_ultrachat_ep3
10
+ results: []
11
+ ---
12
+
13
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
14
+ should probably proofread and complete it, then remove this comment. -->
15
+
16
+ # tinyllama_mole_sft_ultrachat_ep3
17
+
18
+ This model was trained from scratch on the generator dataset.
19
+ It achieves the following results on the evaluation set:
20
+ - Loss: 1.1127
21
+
22
+ ## Model description
23
+
24
+ More information needed
25
+
26
+ ## Intended uses & limitations
27
+
28
+ More information needed
29
+
30
+ ## Training and evaluation data
31
+
32
+ More information needed
33
+
34
+ ## Training procedure
35
+
36
+ ### Training hyperparameters
37
+
38
+ The following hyperparameters were used during training:
39
+ - learning_rate: 2e-05
40
+ - train_batch_size: 16
41
+ - eval_batch_size: 8
42
+ - seed: 42
43
+ - distributed_type: multi-GPU
44
+ - num_devices: 4
45
+ - gradient_accumulation_steps: 2
46
+ - total_train_batch_size: 128
47
+ - total_eval_batch_size: 32
48
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
49
+ - lr_scheduler_type: cosine
50
+ - lr_scheduler_warmup_steps: 120
51
+ - num_epochs: 3
52
+
53
+ ### Training results
54
+
55
+ | Training Loss | Epoch | Step | Validation Loss |
56
+ |:-------------:|:-----:|:----:|:---------------:|
57
+ | 1.3007 | 0.09 | 100 | 1.2780 |
58
+ | 1.2255 | 0.18 | 200 | 1.2158 |
59
+ | 1.192 | 0.26 | 300 | 1.1921 |
60
+ | 1.1696 | 0.35 | 400 | 1.1770 |
61
+ | 1.1426 | 0.44 | 500 | 1.1666 |
62
+ | 1.1628 | 0.53 | 600 | 1.1583 |
63
+ | 1.1501 | 0.61 | 700 | 1.1513 |
64
+ | 1.137 | 0.7 | 800 | 1.1457 |
65
+ | 1.1321 | 0.79 | 900 | 1.1407 |
66
+ | 1.1156 | 0.88 | 1000 | 1.1359 |
67
+ | 1.1395 | 0.96 | 1100 | 1.1318 |
68
+ | 1.0564 | 1.05 | 1200 | 1.1315 |
69
+ | 1.0594 | 1.14 | 1300 | 1.1295 |
70
+ | 1.0711 | 1.23 | 1400 | 1.1274 |
71
+ | 1.0624 | 1.31 | 1500 | 1.1256 |
72
+ | 1.0652 | 1.4 | 1600 | 1.1233 |
73
+ | 1.0626 | 1.49 | 1700 | 1.1213 |
74
+ | 1.0457 | 1.58 | 1800 | 1.1195 |
75
+ | 1.0665 | 1.66 | 1900 | 1.1178 |
76
+ | 1.07 | 1.75 | 2000 | 1.1158 |
77
+ | 1.0567 | 1.84 | 2100 | 1.1141 |
78
+ | 1.0304 | 1.93 | 2200 | 1.1127 |
79
+ | 1.0132 | 2.01 | 2300 | 1.1170 |
80
+ | 1.0203 | 2.1 | 2400 | 1.1170 |
81
+ | 1.0088 | 2.19 | 2500 | 1.1168 |
82
+ | 1.002 | 2.28 | 2600 | 1.1162 |
83
+ | 1.0004 | 2.37 | 2700 | 1.1157 |
84
+ | 1.0058 | 2.45 | 2800 | 1.1156 |
85
+ | 1.0118 | 2.54 | 2900 | 1.1150 |
86
+ | 0.9941 | 2.63 | 3000 | 1.1148 |
87
+ | 1.0127 | 2.72 | 3100 | 1.1147 |
88
+ | 1.0039 | 2.8 | 3200 | 1.1144 |
89
+ | 1.0 | 2.89 | 3300 | 1.1143 |
90
+ | 1.0188 | 2.98 | 3400 | 1.1143 |
91
+
92
+
93
+ ### Framework versions
94
+
95
+ - Transformers 4.37.0
96
+ - Pytorch 2.1.2+cu118
97
+ - Datasets 2.16.1
98
+ - Tokenizers 0.15.0
all_results.json ADDED
@@ -0,0 +1,13 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "epoch": 3.0,
3
+ "eval_loss": 1.1127439737319946,
4
+ "eval_runtime": 425.9465,
5
+ "eval_samples": 23110,
6
+ "eval_samples_per_second": 37.953,
7
+ "eval_steps_per_second": 1.188,
8
+ "train_loss": 1.0902616986746403,
9
+ "train_runtime": 47329.7113,
10
+ "train_samples": 207865,
11
+ "train_samples_per_second": 9.258,
12
+ "train_steps_per_second": 0.072
13
+ }
configuration_mixtral_mole.py ADDED
@@ -0,0 +1,176 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2023 Mixtral AI and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """ Mixtral model configuration"""
16
+
17
+ from transformers.configuration_utils import PretrainedConfig
18
+ from transformers.utils import logging
19
+
20
+
21
+ logger = logging.get_logger(__name__)
22
+
23
+ MIXTRAL_PRETRAINED_CONFIG_ARCHIVE_MAP = {
24
+ "mistral-ai/Mixtral-8x7B": "https://huggingface.co/mistral-ai/Mixtral-8x7B/resolve/main/config.json",
25
+ }
26
+
27
+
28
+ class MixtralMoleConfig(PretrainedConfig):
29
+ r"""
30
+ This is the configuration class to store the configuration of a [`MixtralMoleModel`]. It is used to instantiate an
31
+ Mixtral model according to the specified arguments, defining the model architecture. Instantiating a configuration
32
+ with the defaults will yield a similar configuration to that of the Mixtral-7B-v0.1 or Mixtral-7B-Instruct-v0.1.
33
+
34
+ [mixtralai/Mixtral-8x7B](https://huggingface.co/mixtralai/Mixtral-8x7B)
35
+ [mixtralai/Mixtral-7B-Instruct-v0.1](https://huggingface.co/mixtralai/Mixtral-7B-Instruct-v0.1)
36
+
37
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
38
+ documentation from [`PretrainedConfig`] for more information.
39
+
40
+
41
+ Args:
42
+ vocab_size (`int`, *optional*, defaults to 32000):
43
+ Vocabulary size of the Mixtral model. Defines the number of different tokens that can be represented by the
44
+ `inputs_ids` passed when calling [`MixtralMoleModel`]
45
+ hidden_size (`int`, *optional*, defaults to 4096):
46
+ Dimension of the hidden representations.
47
+ intermediate_size (`int`, *optional*, defaults to 14336):
48
+ Dimension of the MLP representations.
49
+ num_hidden_layers (`int`, *optional*, defaults to 32):
50
+ Number of hidden layers in the Transformer encoder.
51
+ num_attention_heads (`int`, *optional*, defaults to 32):
52
+ Number of attention heads for each attention layer in the Transformer encoder.
53
+ num_key_value_heads (`int`, *optional*, defaults to 8):
54
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
55
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
56
+ `num_key_value_heads=1 the model will use Multi Query Attention (MQA) otherwise GQA is used. When
57
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
58
+ by meanpooling all the original heads within that group. For more details checkout [this
59
+ paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `8`.
60
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
61
+ The non-linear activation function (function or string) in the decoder.
62
+ max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
63
+ The maximum sequence length that this model might ever be used with. Mixtral's sliding window attention
64
+ allows sequence of up to 4096*32 tokens.
65
+ initializer_range (`float`, *optional*, defaults to 0.02):
66
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
67
+ rms_norm_eps (`float`, *optional*, defaults to 1e-05):
68
+ The epsilon used by the rms normalization layers.
69
+ use_cache (`bool`, *optional*, defaults to `True`):
70
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
71
+ relevant if `config.is_decoder=True`.
72
+ pad_token_id (`int`, *optional*):
73
+ The id of the padding token.
74
+ bos_token_id (`int`, *optional*, defaults to 1):
75
+ The id of the "beginning-of-sequence" token.
76
+ eos_token_id (`int`, *optional*, defaults to 2):
77
+ The id of the "end-of-sequence" token.
78
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
79
+ Whether the model's input and output word embeddings should be tied.
80
+ rope_theta (`float`, *optional*, defaults to 1000000.0):
81
+ The base period of the RoPE embeddings.
82
+ sliding_window (`int`, *optional*):
83
+ Sliding window attention window size. If not specified, will default to `4096`.
84
+ attention_dropout (`float`, *optional*, defaults to 0.0):
85
+ The dropout ratio for the attention probabilities.
86
+ num_experts_per_tok (`int`, *optional*, defaults to 2):
87
+ The number of experts to root per-token, can be also interpreted as the `top-p` routing
88
+ parameter
89
+ num_local_experts (`int`, *optional*, defaults to 8):
90
+ Number of experts per Sparse MLP layer.
91
+ output_router_logits (`bool`, *optional*, defaults to `False`):
92
+ Whether or not the router logits should be returned by the model. Enabeling this will also
93
+ allow the model to output the auxiliary loss. See [here]() for more details
94
+ router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
95
+ The aux loss factor for the total loss.
96
+
97
+ ```python
98
+ >>> from models import MixtralMoleModel, MixtralMoleConfig
99
+
100
+ >>> # Initializing a Mixtral 7B style configuration
101
+ >>> configuration = MixtralMoleConfig()
102
+
103
+ >>> # Initializing a model from the Mixtral 7B style configuration
104
+ >>> model = MixtralMoleModel(configuration)
105
+
106
+ >>> # Accessing the model configuration
107
+ >>> configuration = model.config
108
+ ```"""
109
+
110
+ model_type = "mixtralmole"
111
+ keys_to_ignore_at_inference = ["past_key_values"]
112
+
113
+ def __init__(
114
+ self,
115
+ vocab_size=32000,
116
+ hidden_size=4096,
117
+ intermediate_size=14336,
118
+ num_hidden_layers=32,
119
+ num_attention_heads=32,
120
+ num_key_value_heads=8,
121
+ hidden_act="silu",
122
+ max_position_embeddings=4096 * 32,
123
+ initializer_range=0.02,
124
+ rms_norm_eps=1e-5,
125
+ use_cache=True,
126
+ pad_token_id=None,
127
+ bos_token_id=1,
128
+ eos_token_id=2,
129
+ tie_word_embeddings=False,
130
+ rope_theta=1e6,
131
+ sliding_window=None,
132
+ attention_dropout=0.0,
133
+ num_experts_per_tok=2,
134
+ num_local_experts=8,
135
+ output_router_logits=False,
136
+ router_aux_loss_coef=0.001,
137
+ adapter_dim=16,
138
+ adapter_alpha=1.0,
139
+ **kwargs,
140
+ ):
141
+ self.vocab_size = vocab_size
142
+ self.max_position_embeddings = max_position_embeddings
143
+ self.hidden_size = hidden_size
144
+ self.intermediate_size = intermediate_size
145
+ self.num_hidden_layers = num_hidden_layers
146
+ self.num_attention_heads = num_attention_heads
147
+ self.sliding_window = sliding_window
148
+
149
+ # for backward compatibility
150
+ if num_key_value_heads is None:
151
+ num_key_value_heads = num_attention_heads
152
+
153
+ self.num_key_value_heads = num_key_value_heads
154
+ self.hidden_act = hidden_act
155
+ self.initializer_range = initializer_range
156
+ self.rms_norm_eps = rms_norm_eps
157
+ self.use_cache = use_cache
158
+ self.rope_theta = rope_theta
159
+ self.attention_dropout = attention_dropout
160
+
161
+ self.num_experts_per_tok = num_experts_per_tok
162
+ self.num_local_experts = num_local_experts
163
+ self.output_router_logits = output_router_logits
164
+ self.router_aux_loss_coef = router_aux_loss_coef
165
+
166
+ # lora
167
+ self.adapter_dim = adapter_dim
168
+ self.adapter_alpha = adapter_alpha
169
+
170
+ super().__init__(
171
+ pad_token_id=pad_token_id,
172
+ bos_token_id=bos_token_id,
173
+ eos_token_id=eos_token_id,
174
+ tie_word_embeddings=tie_word_embeddings,
175
+ **kwargs,
176
+ )
eval_results.json ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "epoch": 3.0,
3
+ "eval_loss": 1.1127439737319946,
4
+ "eval_runtime": 425.9465,
5
+ "eval_samples": 23110,
6
+ "eval_samples_per_second": 37.953,
7
+ "eval_steps_per_second": 1.188
8
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 1,
4
+ "eos_token_id": 2,
5
+ "transformers_version": "4.37.0",
6
+ "use_cache": false
7
+ }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:ebd53dc49f5adb8ac4399646b86bc0545c85876b50a4530926f6dbe4c6c18952
3
  size 2223960880
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9437ee9f304616df308df88427951ab973156df2ce1181c5c57d23dd955911b6
3
  size 2223960880
modeling_mixtral_mole.py ADDED
@@ -0,0 +1,960 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2023 Mistral AI and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
5
+ # and OPT implementations in this library. It has been modified from its
6
+ # original forms to accommodate minor architectural differences compared
7
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
8
+ #
9
+ # Licensed under the Apache License, Version 2.0 (the "License");
10
+ # you may not use this file except in compliance with the License.
11
+ # You may obtain a copy of the License at
12
+ #
13
+ # http://www.apache.org/licenses/LICENSE-2.0
14
+ #
15
+ # Unless required by applicable law or agreed to in writing, software
16
+ # distributed under the License is distributed on an "AS IS" BASIS,
17
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ # See the License for the specific language governing permissions and
19
+ # limitations under the License.
20
+ """ PyTorch Mixtral Mole model."""
21
+ import inspect
22
+ import math
23
+ import warnings
24
+ from typing import List, Optional, Tuple, Union
25
+
26
+ import torch
27
+ import torch.nn.functional as F
28
+ import torch.utils.checkpoint
29
+ from torch import nn
30
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
31
+
32
+ from transformers.activations import ACT2FN
33
+ from transformers.cache_utils import Cache, DynamicCache
34
+ from transformers.modeling_attn_mask_utils import (
35
+ _prepare_4d_causal_attention_mask,
36
+ _prepare_4d_causal_attention_mask_for_sdpa,
37
+ )
38
+ from transformers.modeling_outputs import (
39
+ MoeCausalLMOutputWithPast,
40
+ MoeModelOutputWithPast,
41
+ SequenceClassifierOutputWithPast,
42
+ )
43
+ from transformers.modeling_utils import PreTrainedModel
44
+ from transformers.pytorch_utils import is_torch_greater_or_equal_than_1_13
45
+ from transformers.utils import (
46
+ add_start_docstrings,
47
+ add_start_docstrings_to_model_forward,
48
+ is_flash_attn_2_available,
49
+ is_flash_attn_greater_or_equal_2_10,
50
+ logging,
51
+ replace_return_docstrings,
52
+ )
53
+ from transformers.utils.import_utils import is_torch_fx_available
54
+ from .configuration_mixtral_mole import MixtralMoleConfig
55
+
56
+ from transformers.models.mixtral.modeling_mixtral import (
57
+ MixtralRMSNorm,
58
+ MixtralRotaryEmbedding,
59
+ MixtralAttention,
60
+ MixtralFlashAttention2,
61
+ MixtralSdpaAttention,
62
+ )
63
+
64
+
65
+ # This makes `_prepare_4d_causal_attention_mask` a leaf function in the FX graph.
66
+ # It means that the function will not be traced through and simply appear as a node in the graph.
67
+ if is_torch_fx_available():
68
+ if not is_torch_greater_or_equal_than_1_13:
69
+ import torch.fx
70
+
71
+ _prepare_4d_causal_attention_mask = torch.fx.wrap(_prepare_4d_causal_attention_mask)
72
+
73
+
74
+ logger = logging.get_logger(__name__)
75
+
76
+ _CONFIG_FOR_DOC = "MixtralMoleConfig"
77
+
78
+
79
+ def load_balancing_loss_func(gate_logits: torch.Tensor, num_experts: torch.Tensor = None, top_k=2) -> float:
80
+ r"""
81
+ Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch.
82
+
83
+ See Switch Transformer (https://arxiv.org/abs/2101.03961) for more details. This function implements the loss
84
+ function presented in equations (4) - (6) of the paper. It aims at penalizing cases where the routing between
85
+ experts is too unbalanced.
86
+
87
+ Args:
88
+ gate_logits (Union[`torch.Tensor`, Tuple[torch.Tensor]):
89
+ Logits from the `gate`, should be a tuple of model.config.num_hidden_layers tensors of
90
+ shape [batch_size X sequence_length, num_experts].
91
+ num_experts (`int`, *optional*):
92
+ Number of experts
93
+
94
+ Returns:
95
+ The auxiliary loss.
96
+ """
97
+ if gate_logits is None or not isinstance(gate_logits, tuple):
98
+ return 0
99
+
100
+ if isinstance(gate_logits, tuple):
101
+ compute_device = gate_logits[0].device
102
+ concatenated_gate_logits = torch.cat([layer_gate.to(compute_device) for layer_gate in gate_logits], dim=0)
103
+
104
+ routing_weights = torch.nn.functional.softmax(concatenated_gate_logits, dim=-1)
105
+
106
+ _, selected_experts = torch.topk(routing_weights, top_k, dim=-1)
107
+
108
+ expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts)
109
+
110
+ # Compute the percentage of tokens routed to each experts
111
+ tokens_per_expert = torch.mean(expert_mask.float(), dim=0)
112
+
113
+ # Compute the average probability of routing to these experts
114
+ router_prob_per_expert = torch.mean(routing_weights, dim=0)
115
+
116
+ overall_loss = torch.sum(tokens_per_expert * router_prob_per_expert.unsqueeze(0))
117
+ return overall_loss * num_experts
118
+
119
+
120
+ MIXTRAL_ATTENTION_CLASSES = {
121
+ "eager": MixtralAttention,
122
+ "flash_attention_2": MixtralFlashAttention2,
123
+ "sdpa": MixtralSdpaAttention,
124
+ }
125
+
126
+ class LoRALayer(nn.Module):
127
+ def __init__(self, config):
128
+ super().__init__()
129
+ self.config = config
130
+ self.intermediate_size = config.intermediate_size
131
+ self.hidden_size = config.hidden_size
132
+ self.adapter_down = nn.Linear(self.hidden_size, config.adapter_dim, bias=False)
133
+ self.adapter_up = nn.Linear(config.adapter_dim, self.hidden_size, bias=False)
134
+ # self.adapter_act = nn.GELU()
135
+ self.adapter_act = nn.Identity() # Using LoRA not Parallel Adapter
136
+
137
+ self.adapter_dropout = nn.Dropout(p=0.01)
138
+ self.adapter_scaling = config.adapter_alpha / config.adapter_dim
139
+
140
+ def forward(self, x):
141
+ x = self.adapter_dropout(x)
142
+ x = self.adapter_scaling * self.adapter_up(self.adapter_act(self.adapter_down(x)))
143
+ return x
144
+
145
+
146
+ class MixtralMoleBLockSparseTop2MLP(nn.Module):
147
+ def __init__(self, config: MixtralMoleConfig):
148
+ super().__init__()
149
+ self.ffn_dim = config.intermediate_size
150
+ self.hidden_dim = config.hidden_size
151
+
152
+ self.w1 = nn.Linear(self.hidden_dim, self.ffn_dim, bias=False)
153
+ self.w2 = nn.Linear(self.ffn_dim, self.hidden_dim, bias=False)
154
+ self.w3 = nn.Linear(self.hidden_dim, self.ffn_dim, bias=False)
155
+
156
+ self.act_fn = ACT2FN[config.hidden_act]
157
+
158
+ def forward(self, hidden_states):
159
+ current_hidden_states = self.act_fn(self.w1(hidden_states)) * self.w3(hidden_states)
160
+ current_hidden_states = self.w2(current_hidden_states)
161
+ return current_hidden_states
162
+
163
+
164
+ class MixtralMoleSparseMoeBlock(nn.Module):
165
+ """
166
+ This implementation is
167
+ strictly equivalent to standard MoE with full capacity (no
168
+ dropped tokens). It's faster since it formulates MoE operations
169
+ in terms of block-sparse operations to accomodate imbalanced
170
+ assignments of tokens to experts, whereas standard MoE either
171
+ (1) drop tokens at the cost of reduced performance or (2) set
172
+ capacity factor to number of experts and thus waste computation
173
+ and memory on padding.
174
+ """
175
+
176
+ def __init__(self, config):
177
+ super().__init__()
178
+ self.hidden_dim = config.hidden_size
179
+ self.ffn_dim = config.intermediate_size
180
+ self.num_experts = config.num_local_experts
181
+ self.top_k = config.num_experts_per_tok
182
+
183
+ # gating
184
+ self.gate = nn.Linear(self.hidden_dim, self.num_experts, bias=False)
185
+
186
+ self.ffn = MixtralMoleBLockSparseTop2MLP(config)
187
+
188
+ self.experts = nn.ModuleList([LoRALayer(config) for _ in range(self.num_experts)])
189
+
190
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
191
+ """ """
192
+ batch_size, sequence_length, hidden_dim = hidden_states.shape
193
+ hidden_states = hidden_states.view(-1, hidden_dim)
194
+ # router_logits: (batch * sequence_length, n_experts)
195
+ router_logits = self.gate(hidden_states)
196
+
197
+ routing_weights = F.softmax(router_logits, dim=1, dtype=torch.float)
198
+ routing_weights, selected_experts = torch.topk(routing_weights, self.top_k, dim=-1) # (B*N, 2)
199
+ routing_weights /= routing_weights.sum(dim=-1, keepdim=True)
200
+ # we cast back to the input dtype
201
+ routing_weights = routing_weights.to(hidden_states.dtype)
202
+
203
+ # hidden_states fed into FFN
204
+ hidden_states_ffn = self.ffn(hidden_states) # (B*N, dim)
205
+
206
+ final_hidden_states = torch.zeros(
207
+ (batch_size * sequence_length, hidden_dim), dtype=hidden_states.dtype, device=hidden_states.device
208
+ )
209
+
210
+ # One hot encode the selected experts to create an expert mask
211
+ # this will be used to easily index which expert is going to be sollicitated
212
+ expert_mask = torch.nn.functional.one_hot(selected_experts, num_classes=self.num_experts).permute(2, 1, 0) # (8, 2, B*N)
213
+
214
+ # Loop over all available experts in the model and perform the computation on each expert
215
+ for expert_idx in range(self.num_experts):
216
+ expert_layer = self.experts[expert_idx]
217
+ idx, top_x = torch.where(expert_mask[expert_idx]) # idx: token choose this expert as top-1 or top-2; top_x: whether token choose this expert
218
+
219
+ if top_x.shape[0] == 0:
220
+ continue
221
+
222
+ # in torch it is faster to index using lists than torch tensors
223
+ top_x_list = top_x.tolist()
224
+ idx_list = idx.tolist()
225
+
226
+ # Index the correct hidden states and compute the expert hidden state for
227
+ # the current expert. We need to make sure to multiply the output hidden
228
+ # states by `routing_weights` on the corresponding tokens (top-1 and top-2)
229
+ current_state = hidden_states[None, top_x_list].reshape(-1, hidden_dim)
230
+ current_ffn = hidden_states_ffn[None, top_x_list].reshape(-1, hidden_dim)
231
+
232
+ # fuse ffn and lora hidden states
233
+ # shall we fuse lora experts before this addition or after?
234
+ # current implementation is aligned with mixsture of FFN experts
235
+ current_hidden_states = (expert_layer(current_state) + current_ffn) * routing_weights[top_x_list, idx_list, None]
236
+
237
+ # However `index_add_` only support torch tensors for indexing so we'll use
238
+ # the `top_x` tensor here.
239
+ final_hidden_states.index_add_(0, top_x, current_hidden_states.to(hidden_states.dtype))
240
+ final_hidden_states = final_hidden_states.reshape(batch_size, sequence_length, hidden_dim)
241
+ return final_hidden_states, router_logits
242
+
243
+
244
+ class MixtralMoleDecoderLayer(nn.Module):
245
+ def __init__(self, config: MixtralMoleConfig, layer_idx: int):
246
+ super().__init__()
247
+ self.hidden_size = config.hidden_size
248
+
249
+ self.self_attn = MIXTRAL_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx)
250
+
251
+ self.block_sparse_moe = MixtralMoleSparseMoeBlock(config)
252
+ self.input_layernorm = MixtralRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
253
+ self.post_attention_layernorm = MixtralRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
254
+
255
+ def forward(
256
+ self,
257
+ hidden_states: torch.Tensor,
258
+ attention_mask: Optional[torch.Tensor] = None,
259
+ position_ids: Optional[torch.LongTensor] = None,
260
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
261
+ output_attentions: Optional[bool] = False,
262
+ output_router_logits: Optional[bool] = False,
263
+ use_cache: Optional[bool] = False,
264
+ **kwargs,
265
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
266
+ if "padding_mask" in kwargs:
267
+ warnings.warn(
268
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure use `attention_mask` instead.`"
269
+ )
270
+ """
271
+ Args:
272
+ hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
273
+ attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
274
+ `(batch, sequence_length)` where padding elements are indicated by 0.
275
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
276
+ output_attentions (`bool`, *optional*):
277
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
278
+ returned tensors for more detail.
279
+ output_router_logits (`bool`, *optional*):
280
+ Whether or not to return the logits of all the routers. They are useful for computing the router loss, and
281
+ should not be returned during inference.
282
+ use_cache (`bool`, *optional*):
283
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
284
+ (see `past_key_values`).
285
+ """
286
+
287
+ residual = hidden_states
288
+
289
+ hidden_states = self.input_layernorm(hidden_states)
290
+
291
+ # Self Attention
292
+ hidden_states, self_attn_weights, present_key_value = self.self_attn(
293
+ hidden_states=hidden_states,
294
+ attention_mask=attention_mask,
295
+ position_ids=position_ids,
296
+ past_key_value=past_key_value,
297
+ output_attentions=output_attentions,
298
+ use_cache=use_cache,
299
+ )
300
+ hidden_states = residual + hidden_states
301
+
302
+ # Fully Connected
303
+ residual = hidden_states
304
+ hidden_states = self.post_attention_layernorm(hidden_states)
305
+ hidden_states, router_logits = self.block_sparse_moe(hidden_states)
306
+ hidden_states = residual + hidden_states
307
+
308
+ outputs = (hidden_states,)
309
+
310
+ if output_attentions:
311
+ outputs += (self_attn_weights,)
312
+
313
+ if use_cache:
314
+ outputs += (present_key_value,)
315
+
316
+ if output_router_logits:
317
+ outputs += (router_logits,)
318
+
319
+ return outputs
320
+
321
+
322
+ MIXTRAL_START_DOCSTRING = r"""
323
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
324
+ library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
325
+ etc.)
326
+
327
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
328
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
329
+ and behavior.
330
+
331
+ Parameters:
332
+ config ([`MixtralMoleConfig`]):
333
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
334
+ load the weights associated with the model, only the configuration. Check out the
335
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
336
+ """
337
+
338
+
339
+ @add_start_docstrings(
340
+ "The bare Mixtral Model outputting raw hidden-states without any specific head on top.",
341
+ MIXTRAL_START_DOCSTRING,
342
+ )
343
+ # Copied from transformers.models.mistral.modeling_mistral.MistralPreTrainedModel with Mistral->Mixtral
344
+ class MixtralMolePreTrainedModel(PreTrainedModel):
345
+ config_class = MixtralMoleConfig
346
+ base_model_prefix = "model"
347
+ supports_gradient_checkpointing = True
348
+ _no_split_modules = ["MixtralMoleDecoderLayer"]
349
+ _skip_keys_device_placement = "past_key_values"
350
+ _supports_flash_attn_2 = True
351
+ _supports_sdpa = True
352
+ _supports_cache_class = True
353
+
354
+ def _init_weights(self, module):
355
+ std = self.config.initializer_range
356
+ if isinstance(module, nn.Linear):
357
+ module.weight.data.normal_(mean=0.0, std=std)
358
+ if module.bias is not None:
359
+ module.bias.data.zero_()
360
+ elif isinstance(module, nn.Embedding):
361
+ module.weight.data.normal_(mean=0.0, std=std)
362
+ if module.padding_idx is not None:
363
+ module.weight.data[module.padding_idx].zero_()
364
+
365
+
366
+ MIXTRAL_INPUTS_DOCSTRING = r"""
367
+ Args:
368
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
369
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
370
+ it.
371
+
372
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
373
+ [`PreTrainedTokenizer.__call__`] for details.
374
+
375
+ [What are input IDs?](../glossary#input-ids)
376
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
377
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
378
+
379
+ - 1 for tokens that are **not masked**,
380
+ - 0 for tokens that are **masked**.
381
+
382
+ [What are attention masks?](../glossary#attention-mask)
383
+
384
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
385
+ [`PreTrainedTokenizer.__call__`] for details.
386
+
387
+ If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
388
+ `past_key_values`).
389
+
390
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
391
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
392
+ information on the default strategy.
393
+
394
+ - 1 indicates the head is **not masked**,
395
+ - 0 indicates the head is **masked**.
396
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
397
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
398
+ config.n_positions - 1]`.
399
+
400
+ [What are position IDs?](../glossary#position-ids)
401
+ past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
402
+ Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
403
+ `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
404
+ `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
405
+
406
+ Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
407
+ blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
408
+
409
+ If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
410
+ don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
411
+ `decoder_input_ids` of shape `(batch_size, sequence_length)`.
412
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
413
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
414
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
415
+ model's internal embedding lookup matrix.
416
+ use_cache (`bool`, *optional*):
417
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
418
+ `past_key_values`).
419
+ output_attentions (`bool`, *optional*):
420
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
421
+ tensors for more detail.
422
+ output_hidden_states (`bool`, *optional*):
423
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
424
+ more detail.
425
+ output_router_logits (`bool`, *optional*):
426
+ Whether or not to return the logits of all the routers. They are useful for computing the router loss, and
427
+ should not be returned during inference.
428
+ return_dict (`bool`, *optional*):
429
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
430
+ """
431
+
432
+
433
+ @add_start_docstrings(
434
+ "The bare Mixtral Model outputting raw hidden-states without any specific head on top.",
435
+ MIXTRAL_START_DOCSTRING,
436
+ )
437
+ # Copied from transformers.models.mistral.modeling_mistral.MistralModel with MISTRAL->MIXTRAL,Mistral->Mixtral
438
+ class MixtralMoleModel(MixtralMolePreTrainedModel):
439
+ """
440
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`MixtralMoleDecoderLayer`]
441
+
442
+ Args:
443
+ config: MixtralMoleConfig
444
+ """
445
+
446
+ def __init__(self, config: MixtralMoleConfig):
447
+ super().__init__(config)
448
+ self.padding_idx = config.pad_token_id
449
+ self.vocab_size = config.vocab_size
450
+
451
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
452
+ self.layers = nn.ModuleList(
453
+ [MixtralMoleDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
454
+ )
455
+ self._attn_implementation = config._attn_implementation
456
+ self.norm = MixtralRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
457
+
458
+ self.gradient_checkpointing = False
459
+ # Initialize weights and apply final processing
460
+ self.post_init()
461
+
462
+ def get_input_embeddings(self):
463
+ return self.embed_tokens
464
+
465
+ def set_input_embeddings(self, value):
466
+ self.embed_tokens = value
467
+
468
+ # Ignore copy
469
+ @add_start_docstrings_to_model_forward(MIXTRAL_INPUTS_DOCSTRING)
470
+ def forward(
471
+ self,
472
+ input_ids: torch.LongTensor = None,
473
+ attention_mask: Optional[torch.Tensor] = None,
474
+ position_ids: Optional[torch.LongTensor] = None,
475
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
476
+ inputs_embeds: Optional[torch.FloatTensor] = None,
477
+ use_cache: Optional[bool] = None,
478
+ output_attentions: Optional[bool] = None,
479
+ output_hidden_states: Optional[bool] = None,
480
+ output_router_logits: Optional[bool] = None,
481
+ return_dict: Optional[bool] = None,
482
+ ) -> Union[Tuple, MoeModelOutputWithPast]:
483
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
484
+ output_router_logits = (
485
+ output_router_logits if output_router_logits is not None else self.config.output_router_logits
486
+ )
487
+ output_hidden_states = (
488
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
489
+ )
490
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
491
+
492
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
493
+
494
+ # retrieve input_ids and inputs_embeds
495
+ if input_ids is not None and inputs_embeds is not None:
496
+ raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
497
+ elif input_ids is not None:
498
+ batch_size, seq_length = input_ids.shape
499
+ elif inputs_embeds is not None:
500
+ batch_size, seq_length, _ = inputs_embeds.shape
501
+ else:
502
+ raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
503
+
504
+ past_key_values_length = 0
505
+
506
+ if self.gradient_checkpointing and self.training:
507
+ if use_cache:
508
+ logger.warning_once(
509
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
510
+ )
511
+ use_cache = False
512
+
513
+ if use_cache:
514
+ use_legacy_cache = not isinstance(past_key_values, Cache)
515
+ if use_legacy_cache:
516
+ past_key_values = DynamicCache.from_legacy_cache(past_key_values)
517
+ past_key_values_length = past_key_values.get_usable_length(seq_length)
518
+
519
+ if position_ids is None:
520
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
521
+ position_ids = torch.arange(
522
+ past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
523
+ )
524
+ position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
525
+ else:
526
+ position_ids = position_ids.view(-1, seq_length).long()
527
+
528
+ if inputs_embeds is None:
529
+ inputs_embeds = self.embed_tokens(input_ids)
530
+
531
+ if attention_mask is not None and self._attn_implementation == "flash_attention_2" and use_cache:
532
+ is_padding_right = attention_mask[:, -1].sum().item() != batch_size
533
+ if is_padding_right:
534
+ raise ValueError(
535
+ "You are attempting to perform batched generation with padding_side='right'"
536
+ " this may lead to unexpected behaviour for Flash Attention version of Mixtral. Make sure to "
537
+ " call `tokenizer.padding_side = 'left'` before tokenizing the input. "
538
+ )
539
+
540
+ if self._attn_implementation == "flash_attention_2":
541
+ # 2d mask is passed through the layers
542
+ attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
543
+ elif self._attn_implementation == "sdpa" and not output_attentions:
544
+ # output_attentions=True can not be supported when using SDPA, and we fall back on
545
+ # the manual implementation that requires a 4D causal mask in all cases.
546
+ attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
547
+ attention_mask,
548
+ (batch_size, seq_length),
549
+ inputs_embeds,
550
+ past_key_values_length,
551
+ )
552
+ else:
553
+ # 4d mask is passed through the layers
554
+ attention_mask = _prepare_4d_causal_attention_mask(
555
+ attention_mask,
556
+ (batch_size, seq_length),
557
+ inputs_embeds,
558
+ past_key_values_length,
559
+ sliding_window=self.config.sliding_window,
560
+ )
561
+
562
+ hidden_states = inputs_embeds
563
+
564
+ # decoder layers
565
+ all_hidden_states = () if output_hidden_states else None
566
+ all_self_attns = () if output_attentions else None
567
+ all_router_logits = () if output_router_logits else None
568
+ next_decoder_cache = None
569
+
570
+ for decoder_layer in self.layers:
571
+ if output_hidden_states:
572
+ all_hidden_states += (hidden_states,)
573
+
574
+ if self.gradient_checkpointing and self.training:
575
+ layer_outputs = self._gradient_checkpointing_func(
576
+ decoder_layer.__call__,
577
+ hidden_states,
578
+ attention_mask,
579
+ position_ids,
580
+ past_key_values,
581
+ output_attentions,
582
+ output_router_logits,
583
+ use_cache,
584
+ )
585
+ else:
586
+ layer_outputs = decoder_layer(
587
+ hidden_states,
588
+ attention_mask=attention_mask,
589
+ position_ids=position_ids,
590
+ past_key_value=past_key_values,
591
+ output_attentions=output_attentions,
592
+ output_router_logits=output_router_logits,
593
+ use_cache=use_cache,
594
+ )
595
+
596
+ hidden_states = layer_outputs[0]
597
+
598
+ if use_cache:
599
+ next_decoder_cache = layer_outputs[2 if output_attentions else 1]
600
+
601
+ if output_attentions:
602
+ all_self_attns += (layer_outputs[1],)
603
+
604
+ if output_router_logits:
605
+ all_router_logits += (layer_outputs[-1],)
606
+
607
+ hidden_states = self.norm(hidden_states)
608
+
609
+ # add hidden states from the last decoder layer
610
+ if output_hidden_states:
611
+ all_hidden_states += (hidden_states,)
612
+
613
+ next_cache = None
614
+ if use_cache:
615
+ next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
616
+
617
+ if not return_dict:
618
+ return tuple(
619
+ v
620
+ for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_router_logits]
621
+ if v is not None
622
+ )
623
+ return MoeModelOutputWithPast(
624
+ last_hidden_state=hidden_states,
625
+ past_key_values=next_cache,
626
+ hidden_states=all_hidden_states,
627
+ attentions=all_self_attns,
628
+ router_logits=all_router_logits,
629
+ )
630
+
631
+
632
+ class MixtralMoleForCausalLM(MixtralMolePreTrainedModel):
633
+ _tied_weights_keys = ["lm_head.weight"]
634
+
635
+ def __init__(self, config):
636
+ super().__init__(config)
637
+ self.model = MixtralMoleModel(config)
638
+ self.vocab_size = config.vocab_size
639
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
640
+ self.router_aux_loss_coef = config.router_aux_loss_coef
641
+ self.num_experts = config.num_local_experts
642
+ self.num_experts_per_tok = config.num_experts_per_tok
643
+ # Initialize weights and apply final processing
644
+ self.post_init()
645
+
646
+ def get_input_embeddings(self):
647
+ return self.model.embed_tokens
648
+
649
+ def set_input_embeddings(self, value):
650
+ self.model.embed_tokens = value
651
+
652
+ def get_output_embeddings(self):
653
+ return self.lm_head
654
+
655
+ def set_output_embeddings(self, new_embeddings):
656
+ self.lm_head = new_embeddings
657
+
658
+ def set_decoder(self, decoder):
659
+ self.model = decoder
660
+
661
+ def get_decoder(self):
662
+ return self.model
663
+
664
+ @add_start_docstrings_to_model_forward(MIXTRAL_INPUTS_DOCSTRING)
665
+ @replace_return_docstrings(output_type=MoeCausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
666
+ # Ignore copy
667
+ def forward(
668
+ self,
669
+ input_ids: torch.LongTensor = None,
670
+ attention_mask: Optional[torch.Tensor] = None,
671
+ position_ids: Optional[torch.LongTensor] = None,
672
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
673
+ inputs_embeds: Optional[torch.FloatTensor] = None,
674
+ labels: Optional[torch.LongTensor] = None,
675
+ use_cache: Optional[bool] = None,
676
+ output_attentions: Optional[bool] = None,
677
+ output_hidden_states: Optional[bool] = None,
678
+ output_router_logits: Optional[bool] = None,
679
+ return_dict: Optional[bool] = None,
680
+ ) -> Union[Tuple, MoeCausalLMOutputWithPast]:
681
+ r"""
682
+ Args:
683
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
684
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
685
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
686
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
687
+
688
+ Returns:
689
+
690
+ Example:
691
+
692
+ ```python
693
+ >>> from transformers import AutoTokenizer, MixtralMoleForCausalLM
694
+
695
+ >>> model = MixtralMoleForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
696
+ >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
697
+
698
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
699
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
700
+
701
+ >>> # Generate
702
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
703
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
704
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
705
+ ```"""
706
+
707
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
708
+ output_router_logits = (
709
+ output_router_logits if output_router_logits is not None else self.config.output_router_logits
710
+ )
711
+
712
+ output_hidden_states = (
713
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
714
+ )
715
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
716
+
717
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
718
+ outputs = self.model(
719
+ input_ids=input_ids,
720
+ attention_mask=attention_mask,
721
+ position_ids=position_ids,
722
+ past_key_values=past_key_values,
723
+ inputs_embeds=inputs_embeds,
724
+ use_cache=use_cache,
725
+ output_attentions=output_attentions,
726
+ output_hidden_states=output_hidden_states,
727
+ output_router_logits=output_router_logits,
728
+ return_dict=return_dict,
729
+ )
730
+
731
+ hidden_states = outputs[0]
732
+ logits = self.lm_head(hidden_states)
733
+ logits = logits.float()
734
+
735
+ loss = None
736
+ if labels is not None:
737
+ # Shift so that tokens < n predict n
738
+ shift_logits = logits[..., :-1, :].contiguous()
739
+ shift_labels = labels[..., 1:].contiguous()
740
+ # Flatten the tokens
741
+ loss_fct = CrossEntropyLoss()
742
+ shift_logits = shift_logits.view(-1, self.config.vocab_size)
743
+ shift_labels = shift_labels.view(-1)
744
+ # Enable model parallelism
745
+ shift_labels = shift_labels.to(shift_logits.device)
746
+ loss = loss_fct(shift_logits, shift_labels)
747
+
748
+ aux_loss = None
749
+ if output_router_logits:
750
+ aux_loss = load_balancing_loss_func(
751
+ outputs.router_logits if return_dict else outputs[-1], self.num_experts, self.num_experts_per_tok
752
+ )
753
+ if labels is not None:
754
+ loss += self.router_aux_loss_coef * aux_loss
755
+
756
+ if not return_dict:
757
+ output = (logits,) + outputs[1:]
758
+ if output_router_logits:
759
+ output = (aux_loss,) + output
760
+ return (loss,) + output if loss is not None else output
761
+
762
+ return MoeCausalLMOutputWithPast(
763
+ loss=loss,
764
+ aux_loss=aux_loss,
765
+ logits=logits,
766
+ past_key_values=outputs.past_key_values,
767
+ hidden_states=outputs.hidden_states,
768
+ attentions=outputs.attentions,
769
+ router_logits=outputs.router_logits,
770
+ )
771
+
772
+ def prepare_inputs_for_generation(
773
+ self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
774
+ ):
775
+ # Omit tokens covered by past_key_values
776
+ if past_key_values is not None:
777
+ if isinstance(past_key_values, Cache):
778
+ cache_length = past_key_values.get_seq_length()
779
+ past_length = past_key_values.seen_tokens
780
+ max_cache_length = past_key_values.get_max_length()
781
+ else:
782
+ cache_length = past_length = past_key_values[0][0].shape[2]
783
+ max_cache_length = None
784
+
785
+ # Keep only the unprocessed tokens:
786
+ # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
787
+ # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
788
+ # input)
789
+ if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
790
+ input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
791
+ # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
792
+ # input_ids based on the past_length.
793
+ elif past_length < input_ids.shape[1]:
794
+ input_ids = input_ids[:, past_length:]
795
+ # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
796
+
797
+ # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
798
+ if (
799
+ max_cache_length is not None
800
+ and attention_mask is not None
801
+ and cache_length + input_ids.shape[1] > max_cache_length
802
+ ):
803
+ attention_mask = attention_mask[:, -max_cache_length:]
804
+
805
+ position_ids = kwargs.get("position_ids", None)
806
+ if attention_mask is not None and position_ids is None:
807
+ # create position_ids on the fly for batch generation
808
+ position_ids = attention_mask.long().cumsum(-1) - 1
809
+ position_ids.masked_fill_(attention_mask == 0, 1)
810
+ if past_key_values:
811
+ position_ids = position_ids[:, -input_ids.shape[1] :]
812
+
813
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
814
+ if inputs_embeds is not None and past_key_values is None:
815
+ model_inputs = {"inputs_embeds": inputs_embeds}
816
+ else:
817
+ model_inputs = {"input_ids": input_ids}
818
+
819
+ model_inputs.update(
820
+ {
821
+ "position_ids": position_ids,
822
+ "past_key_values": past_key_values,
823
+ "use_cache": kwargs.get("use_cache"),
824
+ "attention_mask": attention_mask,
825
+ }
826
+ )
827
+ return model_inputs
828
+
829
+ @staticmethod
830
+ def _reorder_cache(past_key_values, beam_idx):
831
+ reordered_past = ()
832
+ for layer_past in past_key_values:
833
+ reordered_past += (
834
+ tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
835
+ )
836
+ return reordered_past
837
+
838
+
839
+ @add_start_docstrings(
840
+ """
841
+ The Mixtral Model transformer with a sequence classification head on top (linear layer).
842
+
843
+ [`MixtralMoleForSequenceClassification`] uses the last token in order to do the classification, as other causal models
844
+ (e.g. GPT-2) do.
845
+
846
+ Since it does classification on the last token, it requires to know the position of the last token. If a
847
+ `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
848
+ no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
849
+ padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
850
+ each row of the batch).
851
+ """,
852
+ MIXTRAL_START_DOCSTRING,
853
+ )
854
+ # Copied from transformers.models.llama.modeling_llama.LlamaForSequenceClassification with Llama->Mixtral, LLAMA->MIXTRAL
855
+ class MixtralMoleForSequenceClassification(MixtralMolePreTrainedModel):
856
+ def __init__(self, config):
857
+ super().__init__(config)
858
+ self.num_labels = config.num_labels
859
+ self.model = MixtralMoleModel(config)
860
+ self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
861
+
862
+ # Initialize weights and apply final processing
863
+ self.post_init()
864
+
865
+ def get_input_embeddings(self):
866
+ return self.model.embed_tokens
867
+
868
+ def set_input_embeddings(self, value):
869
+ self.model.embed_tokens = value
870
+
871
+ @add_start_docstrings_to_model_forward(MIXTRAL_INPUTS_DOCSTRING)
872
+ def forward(
873
+ self,
874
+ input_ids: torch.LongTensor = None,
875
+ attention_mask: Optional[torch.Tensor] = None,
876
+ position_ids: Optional[torch.LongTensor] = None,
877
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
878
+ inputs_embeds: Optional[torch.FloatTensor] = None,
879
+ labels: Optional[torch.LongTensor] = None,
880
+ use_cache: Optional[bool] = None,
881
+ output_attentions: Optional[bool] = None,
882
+ output_hidden_states: Optional[bool] = None,
883
+ return_dict: Optional[bool] = None,
884
+ ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
885
+ r"""
886
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
887
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
888
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
889
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
890
+ """
891
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
892
+
893
+ transformer_outputs = self.model(
894
+ input_ids,
895
+ attention_mask=attention_mask,
896
+ position_ids=position_ids,
897
+ past_key_values=past_key_values,
898
+ inputs_embeds=inputs_embeds,
899
+ use_cache=use_cache,
900
+ output_attentions=output_attentions,
901
+ output_hidden_states=output_hidden_states,
902
+ return_dict=return_dict,
903
+ )
904
+ hidden_states = transformer_outputs[0]
905
+ logits = self.score(hidden_states)
906
+
907
+ if input_ids is not None:
908
+ batch_size = input_ids.shape[0]
909
+ else:
910
+ batch_size = inputs_embeds.shape[0]
911
+
912
+ if self.config.pad_token_id is None and batch_size != 1:
913
+ raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
914
+ if self.config.pad_token_id is None:
915
+ sequence_lengths = -1
916
+ else:
917
+ if input_ids is not None:
918
+ # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
919
+ sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
920
+ sequence_lengths = sequence_lengths % input_ids.shape[-1]
921
+ sequence_lengths = sequence_lengths.to(logits.device)
922
+ else:
923
+ sequence_lengths = -1
924
+
925
+ pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
926
+
927
+ loss = None
928
+ if labels is not None:
929
+ labels = labels.to(logits.device)
930
+ if self.config.problem_type is None:
931
+ if self.num_labels == 1:
932
+ self.config.problem_type = "regression"
933
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
934
+ self.config.problem_type = "single_label_classification"
935
+ else:
936
+ self.config.problem_type = "multi_label_classification"
937
+
938
+ if self.config.problem_type == "regression":
939
+ loss_fct = MSELoss()
940
+ if self.num_labels == 1:
941
+ loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
942
+ else:
943
+ loss = loss_fct(pooled_logits, labels)
944
+ elif self.config.problem_type == "single_label_classification":
945
+ loss_fct = CrossEntropyLoss()
946
+ loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
947
+ elif self.config.problem_type == "multi_label_classification":
948
+ loss_fct = BCEWithLogitsLoss()
949
+ loss = loss_fct(pooled_logits, labels)
950
+ if not return_dict:
951
+ output = (pooled_logits,) + transformer_outputs[1:]
952
+ return ((loss,) + output) if loss is not None else output
953
+
954
+ return SequenceClassifierOutputWithPast(
955
+ loss=loss,
956
+ logits=pooled_logits,
957
+ past_key_values=transformer_outputs.past_key_values,
958
+ hidden_states=transformer_outputs.hidden_states,
959
+ attentions=transformer_outputs.attentions,
960
+ )
runs/Jan23_12-59-12_main1/events.out.tfevents.1706014802.main1.65865.0 CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:fa443b537f60f0f975c5ef0d78c91eaaee9f9310012dda9cad8af344251d46f0
3
- size 121084
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ca8ba4c05907405476633105cb3e4e5fb8372f701e78ea49be4a9cfc02d89ca
3
+ size 122066
runs/Jan23_12-59-12_main1/events.out.tfevents.1706062627.main1.65865.1 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1996dbc3bce71ab41af4ab108d30754d4b3fde333db8b99d734589d64aaf48de
3
+ size 359
train_results.json ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "epoch": 3.0,
3
+ "train_loss": 1.0902616986746403,
4
+ "train_runtime": 47329.7113,
5
+ "train_samples": 207865,
6
+ "train_samples_per_second": 9.258,
7
+ "train_steps_per_second": 0.072
8
+ }
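
As a quick sanity check, the throughput figures in train_results.json are mutually consistent. A sketch (not Trainer code; the effective batch size of 128 is an assumption about this run, and global_step = 3423 is taken from trainer_state.json below):

```python
train_runtime = 47329.7113  # seconds, from train_results.json above
global_steps = 3423         # final global_step in trainer_state.json below
effective_batch_size = 128  # assumed effective (total) train batch size

steps_per_second = global_steps / train_runtime
samples_per_second = steps_per_second * effective_batch_size
print(f"{steps_per_second:.3f}")    # 0.072 -> matches train_steps_per_second
print(f"{samples_per_second:.2f}")  # ~9.26 -> in line with train_samples_per_second
```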
trainer_state.json ADDED
@@ -0,0 +1,4412 @@
1
+ {
2
+ "best_metric": 1.1127439737319946,
3
+ "best_model_checkpoint": "data/tinyllama_mole_sft_ultrachat_ep3/checkpoint-2200",
4
+ "epoch": 2.9986859395532193,
5
+ "eval_steps": 100,
6
+ "global_step": 3423,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "learning_rate": 1.6666666666666668e-07,
14
+ "loss": 2.7824,
15
+ "step": 1
16
+ },
17
+ {
18
+ "epoch": 0.0,
19
+ "learning_rate": 8.333333333333333e-07,
20
+ "loss": 2.7372,
21
+ "step": 5
22
+ },
23
+ {
24
+ "epoch": 0.01,
25
+ "learning_rate": 1.6666666666666667e-06,
26
+ "loss": 2.7223,
27
+ "step": 10
28
+ },
29
+ {
30
+ "epoch": 0.01,
31
+ "learning_rate": 2.5e-06,
32
+ "loss": 2.5564,
33
+ "step": 15
34
+ },
35
+ {
36
+ "epoch": 0.02,
37
+ "learning_rate": 3.3333333333333333e-06,
38
+ "loss": 2.2279,
39
+ "step": 20
40
+ },
41
+ {
42
+ "epoch": 0.02,
43
+ "learning_rate": 4.166666666666667e-06,
44
+ "loss": 1.8858,
45
+ "step": 25
46
+ },
47
+ {
48
+ "epoch": 0.03,
49
+ "learning_rate": 5e-06,
50
+ "loss": 1.7341,
51
+ "step": 30
52
+ },
53
+ {
54
+ "epoch": 0.03,
55
+ "learning_rate": 5.833333333333334e-06,
56
+ "loss": 1.629,
57
+ "step": 35
58
+ },
59
+ {
60
+ "epoch": 0.04,
61
+ "learning_rate": 6.666666666666667e-06,
62
+ "loss": 1.5663,
63
+ "step": 40
64
+ },
65
+ {
66
+ "epoch": 0.04,
67
+ "learning_rate": 7.500000000000001e-06,
68
+ "loss": 1.5049,
69
+ "step": 45
70
+ },
71
+ {
72
+ "epoch": 0.04,
73
+ "learning_rate": 8.333333333333334e-06,
74
+ "loss": 1.4663,
75
+ "step": 50
76
+ },
77
+ {
78
+ "epoch": 0.05,
79
+ "learning_rate": 9.166666666666666e-06,
80
+ "loss": 1.4061,
81
+ "step": 55
82
+ },
83
+ {
84
+ "epoch": 0.05,
85
+ "learning_rate": 1e-05,
86
+ "loss": 1.3948,
87
+ "step": 60
88
+ },
89
+ {
90
+ "epoch": 0.06,
91
+ "learning_rate": 1.0833333333333334e-05,
92
+ "loss": 1.3597,
93
+ "step": 65
94
+ },
95
+ {
96
+ "epoch": 0.06,
97
+ "learning_rate": 1.1666666666666668e-05,
98
+ "loss": 1.3542,
99
+ "step": 70
100
+ },
101
+ {
102
+ "epoch": 0.07,
103
+ "learning_rate": 1.25e-05,
104
+ "loss": 1.3161,
105
+ "step": 75
106
+ },
107
+ {
108
+ "epoch": 0.07,
109
+ "learning_rate": 1.3333333333333333e-05,
110
+ "loss": 1.3134,
111
+ "step": 80
112
+ },
113
+ {
114
+ "epoch": 0.07,
115
+ "learning_rate": 1.416666666666667e-05,
116
+ "loss": 1.3123,
117
+ "step": 85
118
+ },
119
+ {
120
+ "epoch": 0.08,
121
+ "learning_rate": 1.5000000000000002e-05,
122
+ "loss": 1.3025,
123
+ "step": 90
124
+ },
125
+ {
126
+ "epoch": 0.08,
127
+ "learning_rate": 1.5833333333333333e-05,
128
+ "loss": 1.2728,
129
+ "step": 95
130
+ },
131
+ {
132
+ "epoch": 0.09,
133
+ "learning_rate": 1.6666666666666667e-05,
134
+ "loss": 1.3007,
135
+ "step": 100
136
+ },
137
+ {
138
+ "epoch": 0.09,
139
+ "eval_loss": 1.2780451774597168,
140
+ "eval_runtime": 443.4938,
141
+ "eval_samples_per_second": 36.451,
142
+ "eval_steps_per_second": 1.141,
143
+ "step": 100
144
+ },
145
+ {
146
+ "epoch": 0.09,
147
+ "learning_rate": 1.7500000000000002e-05,
148
+ "loss": 1.2749,
149
+ "step": 105
150
+ },
151
+ {
152
+ "epoch": 0.1,
153
+ "learning_rate": 1.8333333333333333e-05,
154
+ "loss": 1.2661,
155
+ "step": 110
156
+ },
157
+ {
158
+ "epoch": 0.1,
159
+ "learning_rate": 1.916666666666667e-05,
160
+ "loss": 1.2678,
161
+ "step": 115
162
+ },
163
+ {
164
+ "epoch": 0.11,
165
+ "learning_rate": 2e-05,
166
+ "loss": 1.2441,
167
+ "step": 120
168
+ },
169
+ {
170
+ "epoch": 0.11,
171
+ "learning_rate": 1.9999886918439637e-05,
172
+ "loss": 1.249,
173
+ "step": 125
174
+ },
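
From this point on, the logged learning rate stops climbing: it reaches the 2e-05 peak at step 120 and then decays. The values are reproduced by the standard linear-warmup + cosine-decay formula (an assumption about the scheduler, not extracted from this repo's training code; the warmup endpoint of 120 is inferred from where the log first hits 2e-05, and the 3423 total steps come from global_step above):

```python
import math

def lr_at(step, peak=2e-5, warmup=120, total=3423):
    # Sketch of a linear-warmup + cosine-decay schedule (assumed).
    if step < warmup:
        return peak * step / warmup  # linear warmup
    progress = (step - warmup) / (total - warmup)
    return peak * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

print(lr_at(100))  # ~1.6667e-05, matches the step-100 entry above
print(lr_at(125))  # ~1.9999886918e-05, matches the step-125 entry above
```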
175
+ {
176
+ "epoch": 0.11,
177
+ "learning_rate": 1.9999547676316034e-05,
178
+ "loss": 1.2267,
179
+ "step": 130
180
+ },
181
+ {
182
+ "epoch": 0.12,
183
+ "learning_rate": 1.99989822813016e-05,
184
+ "loss": 1.2418,
185
+ "step": 135
186
+ },
187
+ {
188
+ "epoch": 0.12,
189
+ "learning_rate": 1.999819074618348e-05,
190
+ "loss": 1.2541,
191
+ "step": 140
192
+ },
193
+ {
194
+ "epoch": 0.13,
195
+ "learning_rate": 1.9997173088863285e-05,
196
+ "loss": 1.2339,
197
+ "step": 145
198
+ },
199
+ {
200
+ "epoch": 0.13,
201
+ "learning_rate": 1.9995929332356666e-05,
202
+ "loss": 1.2366,
203
+ "step": 150
204
+ },
205
+ {
206
+ "epoch": 0.14,
207
+ "learning_rate": 1.999445950479281e-05,
208
+ "loss": 1.2296,
209
+ "step": 155
210
+ },
211
+ {
212
+ "epoch": 0.14,
213
+ "learning_rate": 1.9992763639413796e-05,
214
+ "loss": 1.2246,
215
+ "step": 160
216
+ },
217
+ {
218
+ "epoch": 0.14,
219
+ "learning_rate": 1.9990841774573843e-05,
220
+ "loss": 1.22,
221
+ "step": 165
222
+ },
223
+ {
224
+ "epoch": 0.15,
225
+ "learning_rate": 1.9988693953738446e-05,
226
+ "loss": 1.2232,
227
+ "step": 170
228
+ },
229
+ {
230
+ "epoch": 0.15,
231
+ "learning_rate": 1.9986320225483396e-05,
232
+ "loss": 1.2337,
233
+ "step": 175
234
+ },
235
+ {
236
+ "epoch": 0.16,
237
+ "learning_rate": 1.9983720643493665e-05,
238
+ "loss": 1.2096,
239
+ "step": 180
240
+ },
241
+ {
242
+ "epoch": 0.16,
243
+ "learning_rate": 1.9980895266562217e-05,
244
+ "loss": 1.2099,
245
+ "step": 185
246
+ },
247
+ {
248
+ "epoch": 0.17,
249
+ "learning_rate": 1.9977844158588655e-05,
250
+ "loss": 1.2165,
251
+ "step": 190
252
+ },
253
+ {
254
+ "epoch": 0.17,
255
+ "learning_rate": 1.997456738857779e-05,
256
+ "loss": 1.2075,
257
+ "step": 195
258
+ },
259
+ {
260
+ "epoch": 0.18,
261
+ "learning_rate": 1.9971065030638076e-05,
262
+ "loss": 1.2255,
263
+ "step": 200
264
+ },
265
+ {
266
+ "epoch": 0.18,
267
+ "eval_loss": 1.215826153755188,
268
+ "eval_runtime": 441.9342,
269
+ "eval_samples_per_second": 36.58,
270
+ "eval_steps_per_second": 1.145,
271
+ "step": 200
272
+ },
273
+ {
274
+ "epoch": 0.18,
275
+ "learning_rate": 1.996733716397993e-05,
276
+ "loss": 1.2178,
277
+ "step": 205
278
+ },
279
+ {
280
+ "epoch": 0.18,
281
+ "learning_rate": 1.996338387291395e-05,
282
+ "loss": 1.1935,
283
+ "step": 210
284
+ },
285
+ {
286
+ "epoch": 0.19,
287
+ "learning_rate": 1.9959205246849e-05,
288
+ "loss": 1.2246,
289
+ "step": 215
290
+ },
291
+ {
292
+ "epoch": 0.19,
293
+ "learning_rate": 1.9954801380290194e-05,
294
+ "loss": 1.2003,
295
+ "step": 220
296
+ },
297
+ {
298
+ "epoch": 0.2,
299
+ "learning_rate": 1.995017237283675e-05,
300
+ "loss": 1.1842,
301
+ "step": 225
302
+ },
303
+ {
304
+ "epoch": 0.2,
305
+ "learning_rate": 1.994531832917974e-05,
306
+ "loss": 1.2052,
307
+ "step": 230
308
+ },
309
+ {
310
+ "epoch": 0.21,
311
+ "learning_rate": 1.994023935909974e-05,
312
+ "loss": 1.214,
313
+ "step": 235
314
+ },
315
+ {
316
+ "epoch": 0.21,
317
+ "learning_rate": 1.9934935577464312e-05,
318
+ "loss": 1.1859,
319
+ "step": 240
320
+ },
321
+ {
322
+ "epoch": 0.21,
323
+ "learning_rate": 1.9929407104225444e-05,
324
+ "loss": 1.2139,
325
+ "step": 245
326
+ },
327
+ {
328
+ "epoch": 0.22,
329
+ "learning_rate": 1.9923654064416813e-05,
330
+ "loss": 1.1902,
331
+ "step": 250
332
+ },
333
+ {
334
+ "epoch": 0.22,
335
+ "learning_rate": 1.991767658815096e-05,
336
+ "loss": 1.1991,
337
+ "step": 255
338
+ },
339
+ {
340
+ "epoch": 0.23,
341
+ "learning_rate": 1.9911474810616348e-05,
342
+ "loss": 1.2027,
343
+ "step": 260
344
+ },
345
+ {
346
+ "epoch": 0.23,
347
+ "learning_rate": 1.9905048872074322e-05,
348
+ "loss": 1.184,
349
+ "step": 265
350
+ },
351
+ {
352
+ "epoch": 0.24,
353
+ "learning_rate": 1.989839891785591e-05,
354
+ "loss": 1.1857,
355
+ "step": 270
356
+ },
357
+ {
358
+ "epoch": 0.24,
359
+ "learning_rate": 1.9891525098358553e-05,
360
+ "loss": 1.199,
361
+ "step": 275
362
+ },
363
+ {
364
+ "epoch": 0.25,
365
+ "learning_rate": 1.9884427569042693e-05,
366
+ "loss": 1.1751,
367
+ "step": 280
368
+ },
369
+ {
370
+ "epoch": 0.25,
371
+ "learning_rate": 1.9877106490428275e-05,
372
+ "loss": 1.2092,
373
+ "step": 285
374
+ },
375
+ {
376
+ "epoch": 0.25,
377
+ "learning_rate": 1.9869562028091092e-05,
378
+ "loss": 1.1751,
379
+ "step": 290
380
+ },
381
+ {
382
+ "epoch": 0.26,
383
+ "learning_rate": 1.986179435265906e-05,
384
+ "loss": 1.196,
385
+ "step": 295
386
+ },
387
+ {
388
+ "epoch": 0.26,
389
+ "learning_rate": 1.9853803639808357e-05,
390
+ "loss": 1.192,
391
+ "step": 300
392
+ },
393
+ {
394
+ "epoch": 0.26,
395
+ "eval_loss": 1.1921414136886597,
396
+ "eval_runtime": 444.0535,
397
+ "eval_samples_per_second": 36.406,
398
+ "eval_steps_per_second": 1.14,
399
+ "step": 300
400
+ },
401
+ {
402
+ "epoch": 0.27,
403
+ "learning_rate": 1.984559007025943e-05,
404
+ "loss": 1.1938,
405
+ "step": 305
406
+ },
407
+ {
408
+ "epoch": 0.27,
409
+ "learning_rate": 1.983715382977293e-05,
410
+ "loss": 1.1638,
411
+ "step": 310
412
+ },
413
+ {
414
+ "epoch": 0.28,
415
+ "learning_rate": 1.9828495109145516e-05,
416
+ "loss": 1.1807,
417
+ "step": 315
418
+ },
419
+ {
420
+ "epoch": 0.28,
421
+ "learning_rate": 1.9819614104205504e-05,
422
+ "loss": 1.1772,
423
+ "step": 320
424
+ },
425
+ {
426
+ "epoch": 0.28,
427
+ "learning_rate": 1.9810511015808477e-05,
428
+ "loss": 1.1755,
429
+ "step": 325
430
+ },
431
+ {
432
+ "epoch": 0.29,
433
+ "learning_rate": 1.980118604983273e-05,
434
+ "loss": 1.17,
435
+ "step": 330
436
+ },
437
+ {
438
+ "epoch": 0.29,
439
+ "learning_rate": 1.979163941717459e-05,
440
+ "loss": 1.2216,
441
+ "step": 335
442
+ },
443
+ {
444
+ "epoch": 0.3,
445
+ "learning_rate": 1.9781871333743695e-05,
446
+ "loss": 1.1751,
447
+ "step": 340
448
+ },
449
+ {
450
+ "epoch": 0.3,
451
+ "learning_rate": 1.9771882020458055e-05,
452
+ "loss": 1.1799,
453
+ "step": 345
454
+ },
455
+ {
456
+ "epoch": 0.31,
457
+ "learning_rate": 1.9761671703239108e-05,
458
+ "loss": 1.1752,
459
+ "step": 350
460
+ },
461
+ {
462
+ "epoch": 0.31,
463
+ "learning_rate": 1.9751240613006568e-05,
464
+ "loss": 1.1753,
465
+ "step": 355
466
+ },
467
+ {
468
+ "epoch": 0.32,
469
+ "learning_rate": 1.9740588985673226e-05,
470
+ "loss": 1.181,
471
+ "step": 360
472
+ },
473
+ {
474
+ "epoch": 0.32,
475
+ "learning_rate": 1.9729717062139616e-05,
476
+ "loss": 1.176,
477
+ "step": 365
478
+ },
479
+ {
480
+ "epoch": 0.32,
481
+ "learning_rate": 1.9718625088288544e-05,
482
+ "loss": 1.1694,
483
+ "step": 370
484
+ },
485
+ {
486
+ "epoch": 0.33,
487
+ "learning_rate": 1.970731331497956e-05,
488
+ "loss": 1.164,
489
+ "step": 375
490
+ },
491
+ {
492
+ "epoch": 0.33,
493
+ "learning_rate": 1.969578199804326e-05,
494
+ "loss": 1.1784,
495
+ "step": 380
496
+ },
497
+ {
498
+ "epoch": 0.34,
499
+ "learning_rate": 1.96840313982755e-05,
500
+ "loss": 1.1669,
501
+ "step": 385
502
+ },
503
+ {
504
+ "epoch": 0.34,
505
+ "learning_rate": 1.967206178143152e-05,
506
+ "loss": 1.1611,
507
+ "step": 390
508
+ },
509
+ {
510
+ "epoch": 0.35,
511
+ "learning_rate": 1.96598734182199e-05,
512
+ "loss": 1.1622,
513
+ "step": 395
514
+ },
515
+ {
516
+ "epoch": 0.35,
517
+ "learning_rate": 1.9647466584296474e-05,
518
+ "loss": 1.1696,
519
+ "step": 400
520
+ },
521
+ {
522
+ "epoch": 0.35,
523
+ "eval_loss": 1.1770251989364624,
524
+ "eval_runtime": 440.3334,
525
+ "eval_samples_per_second": 36.713,
526
+ "eval_steps_per_second": 1.149,
527
+ "step": 400
528
+ },
529
+ {
530
+ "epoch": 0.35,
531
+ "learning_rate": 1.9634841560258063e-05,
532
+ "loss": 1.1899,
533
+ "step": 405
534
+ },
535
+ {
536
+ "epoch": 0.36,
537
+ "learning_rate": 1.9621998631636156e-05,
538
+ "loss": 1.1843,
539
+ "step": 410
540
+ },
541
+ {
542
+ "epoch": 0.36,
543
+ "learning_rate": 1.960893808889043e-05,
544
+ "loss": 1.1701,
545
+ "step": 415
546
+ },
547
+ {
548
+ "epoch": 0.37,
549
+ "learning_rate": 1.9595660227402204e-05,
550
+ "loss": 1.1703,
551
+ "step": 420
552
+ },
553
+ {
554
+ "epoch": 0.37,
555
+ "learning_rate": 1.958216534746773e-05,
556
+ "loss": 1.1633,
557
+ "step": 425
558
+ },
559
+ {
560
+ "epoch": 0.38,
561
+ "learning_rate": 1.9568453754291424e-05,
562
+ "loss": 1.171,
563
+ "step": 430
564
+ },
565
+ {
566
+ "epoch": 0.38,
567
+ "learning_rate": 1.9554525757978958e-05,
568
+ "loss": 1.157,
569
+ "step": 435
570
+ },
571
+ {
572
+ "epoch": 0.39,
573
+ "learning_rate": 1.9540381673530247e-05,
574
+ "loss": 1.1708,
575
+ "step": 440
576
+ },
577
+ {
578
+ "epoch": 0.39,
579
+ "learning_rate": 1.952602182083231e-05,
580
+ "loss": 1.1612,
581
+ "step": 445
582
+ },
583
+ {
584
+ "epoch": 0.39,
585
+ "learning_rate": 1.9511446524652062e-05,
586
+ "loss": 1.1861,
587
+ "step": 450
588
+ },
589
+ {
590
+ "epoch": 0.4,
591
+ "learning_rate": 1.949665611462895e-05,
592
+ "loss": 1.1564,
593
+ "step": 455
594
+ },
595
+ {
596
+ "epoch": 0.4,
597
+ "learning_rate": 1.9481650925267506e-05,
598
+ "loss": 1.1559,
599
+ "step": 460
600
+ },
601
+ {
602
+ "epoch": 0.41,
603
+ "learning_rate": 1.946643129592977e-05,
604
+ "loss": 1.1676,
605
+ "step": 465
606
+ },
607
+ {
608
+ "epoch": 0.41,
609
+ "learning_rate": 1.945099757082763e-05,
610
+ "loss": 1.1698,
611
+ "step": 470
612
+ },
613
+ {
614
+ "epoch": 0.42,
615
+ "learning_rate": 1.9435350099015028e-05,
616
+ "loss": 1.1639,
617
+ "step": 475
618
+ },
619
+ {
620
+ "epoch": 0.42,
621
+ "learning_rate": 1.9419489234380077e-05,
622
+ "loss": 1.1722,
623
+ "step": 480
624
+ },
625
+ {
626
+ "epoch": 0.42,
627
+ "learning_rate": 1.940341533563703e-05,
628
+ "loss": 1.1382,
629
+ "step": 485
630
+ },
631
+ {
632
+ "epoch": 0.43,
633
+ "learning_rate": 1.9387128766318205e-05,
634
+ "loss": 1.1711,
635
+ "step": 490
636
+ },
637
+ {
638
+ "epoch": 0.43,
639
+ "learning_rate": 1.9370629894765737e-05,
640
+ "loss": 1.1552,
641
+ "step": 495
642
+ },
643
+ {
644
+ "epoch": 0.44,
645
+ "learning_rate": 1.935391909412325e-05,
646
+ "loss": 1.1426,
647
+ "step": 500
648
+ },
649
+ {
650
+ "epoch": 0.44,
651
+ "eval_loss": 1.1665772199630737,
652
+ "eval_runtime": 442.1456,
653
+ "eval_samples_per_second": 36.563,
654
+ "eval_steps_per_second": 1.144,
655
+ "step": 500
656
+ },
657
+ {
658
+ "epoch": 0.44,
659
+ "learning_rate": 1.9336996742327424e-05,
660
+ "loss": 1.157,
661
+ "step": 505
662
+ },
663
+ {
664
+ "epoch": 0.45,
665
+ "learning_rate": 1.931986322209946e-05,
666
+ "loss": 1.1594,
667
+ "step": 510
668
+ },
669
+ {
670
+ "epoch": 0.45,
671
+ "learning_rate": 1.930251892093638e-05,
672
+ "loss": 1.1641,
673
+ "step": 515
674
+ },
675
+ {
676
+ "epoch": 0.46,
677
+ "learning_rate": 1.928496423110233e-05,
678
+ "loss": 1.1382,
679
+ "step": 520
680
+ },
681
+ {
682
+ "epoch": 0.46,
683
+ "learning_rate": 1.9267199549619643e-05,
684
+ "loss": 1.1556,
685
+ "step": 525
686
+ },
687
+ {
688
+ "epoch": 0.46,
689
+ "learning_rate": 1.92492252782599e-05,
690
+ "loss": 1.1473,
691
+ "step": 530
692
+ },
693
+ {
694
+ "epoch": 0.47,
695
+ "learning_rate": 1.9231041823534835e-05,
696
+ "loss": 1.1431,
697
+ "step": 535
698
+ },
699
+ {
700
+ "epoch": 0.47,
701
+ "learning_rate": 1.9212649596687136e-05,
702
+ "loss": 1.1588,
703
+ "step": 540
704
+ },
705
+ {
706
+ "epoch": 0.48,
707
+ "learning_rate": 1.9194049013681134e-05,
708
+ "loss": 1.1478,
709
+ "step": 545
710
+ },
711
+ {
712
+ "epoch": 0.48,
713
+ "learning_rate": 1.9175240495193433e-05,
714
+ "loss": 1.1595,
715
+ "step": 550
716
+ },
717
+ {
718
+ "epoch": 0.49,
719
+ "learning_rate": 1.915622446660335e-05,
720
+ "loss": 1.1614,
721
+ "step": 555
722
+ },
723
+ {
724
+ "epoch": 0.49,
725
+ "learning_rate": 1.9137001357983323e-05,
726
+ "loss": 1.1662,
727
+ "step": 560
728
+ },
729
+ {
730
+ "epoch": 0.49,
731
+ "learning_rate": 1.9117571604089172e-05,
732
+ "loss": 1.1647,
733
+ "step": 565
734
+ },
735
+ {
736
+ "epoch": 0.5,
737
+ "learning_rate": 1.9097935644350284e-05,
738
+ "loss": 1.1551,
739
+ "step": 570
740
+ },
741
+ {
742
+ "epoch": 0.5,
743
+ "learning_rate": 1.9078093922859642e-05,
744
+ "loss": 1.157,
745
+ "step": 575
746
+ },
747
+ {
748
+ "epoch": 0.51,
749
+ "learning_rate": 1.9058046888363814e-05,
750
+ "loss": 1.1419,
751
+ "step": 580
752
+ },
753
+ {
754
+ "epoch": 0.51,
755
+ "learning_rate": 1.9037794994252792e-05,
756
+ "loss": 1.1605,
757
+ "step": 585
758
+ },
759
+ {
760
+ "epoch": 0.52,
761
+ "learning_rate": 1.901733869854973e-05,
762
+ "loss": 1.149,
763
+ "step": 590
764
+ },
765
+ {
766
+ "epoch": 0.52,
767
+ "learning_rate": 1.8996678463900596e-05,
768
+ "loss": 1.173,
769
+ "step": 595
770
+ },
771
+ {
772
+ "epoch": 0.53,
773
+ "learning_rate": 1.8975814757563707e-05,
774
+ "loss": 1.1628,
775
+ "step": 600
776
+ },
777
+ {
778
+ "epoch": 0.53,
779
+ "eval_loss": 1.158300757408142,
780
+ "eval_runtime": 426.2016,
781
+ "eval_samples_per_second": 37.93,
782
+ "eval_steps_per_second": 1.187,
783
+ "step": 600
784
+ },
785
+ {
786
+ "epoch": 0.53,
787
+ "learning_rate": 1.8954748051399153e-05,
788
+ "loss": 1.1337,
789
+ "step": 605
790
+ },
791
+ {
792
+ "epoch": 0.53,
793
+ "learning_rate": 1.893347882185814e-05,
794
+ "loss": 1.1455,
795
+ "step": 610
796
+ },
797
+ {
798
+ "epoch": 0.54,
799
+ "learning_rate": 1.891200754997219e-05,
800
+ "loss": 1.1667,
801
+ "step": 615
802
+ },
803
+ {
804
+ "epoch": 0.54,
805
+ "learning_rate": 1.88903347213423e-05,
806
+ "loss": 1.1419,
807
+ "step": 620
808
+ },
809
+ {
810
+ "epoch": 0.55,
811
+ "learning_rate": 1.886846082612792e-05,
812
+ "loss": 1.1458,
813
+ "step": 625
814
+ },
815
+ {
816
+ "epoch": 0.55,
817
+ "learning_rate": 1.8846386359035892e-05,
818
+ "loss": 1.1459,
819
+ "step": 630
820
+ },
821
+ {
822
+ "epoch": 0.56,
823
+ "learning_rate": 1.8824111819309256e-05,
824
+ "loss": 1.1676,
825
+ "step": 635
826
+ },
827
+ {
828
+ "epoch": 0.56,
829
+ "learning_rate": 1.8801637710715945e-05,
830
+ "loss": 1.1556,
831
+ "step": 640
832
+ },
833
+ {
834
+ "epoch": 0.57,
835
+ "learning_rate": 1.8778964541537422e-05,
836
+ "loss": 1.1544,
837
+ "step": 645
838
+ },
839
+ {
840
+ "epoch": 0.57,
841
+ "learning_rate": 1.8756092824557148e-05,
842
+ "loss": 1.1434,
843
+ "step": 650
844
+ },
845
+ {
846
+ "epoch": 0.57,
847
+ "learning_rate": 1.873302307704902e-05,
848
+ "loss": 1.1492,
849
+ "step": 655
850
+ },
851
+ {
852
+ "epoch": 0.58,
853
+ "learning_rate": 1.870975582076564e-05,
854
+ "loss": 1.1547,
855
+ "step": 660
856
+ },
857
+ {
858
+ "epoch": 0.58,
859
+ "learning_rate": 1.8686291581926546e-05,
860
+ "loss": 1.1438,
861
+ "step": 665
862
+ },
863
+ {
864
+ "epoch": 0.59,
865
+ "learning_rate": 1.8662630891206276e-05,
866
+ "loss": 1.1334,
867
+ "step": 670
868
+ },
869
+ {
870
+ "epoch": 0.59,
871
+ "learning_rate": 1.86387742837224e-05,
872
+ "loss": 1.1553,
873
+ "step": 675
874
+ },
875
+ {
876
+ "epoch": 0.6,
877
+ "learning_rate": 1.86147222990234e-05,
878
+ "loss": 1.1647,
879
+ "step": 680
880
+ },
881
+ {
882
+ "epoch": 0.6,
883
+ "learning_rate": 1.8590475481076468e-05,
884
+ "loss": 1.1443,
885
+ "step": 685
886
+ },
887
+ {
888
+ "epoch": 0.6,
889
+ "learning_rate": 1.8566034378255198e-05,
890
+ "loss": 1.1461,
891
+ "step": 690
892
+ },
893
+ {
894
+ "epoch": 0.61,
895
+ "learning_rate": 1.8541399543327206e-05,
896
+ "loss": 1.1391,
897
+ "step": 695
898
+ },
899
+ {
900
+ "epoch": 0.61,
901
+ "learning_rate": 1.8516571533441606e-05,
902
+ "loss": 1.1501,
903
+ "step": 700
904
+ },
905
+ {
906
+ "epoch": 0.61,
907
+ "eval_loss": 1.151345133781433,
908
+ "eval_runtime": 426.3067,
909
+ "eval_samples_per_second": 37.921,
910
+ "eval_steps_per_second": 1.187,
911
+ "step": 700
912
+ },
913
+ {
914
+ "epoch": 0.62,
915
+ "learning_rate": 1.8491550910116415e-05,
916
+ "loss": 1.1488,
917
+ "step": 705
918
+ },
919
+ {
920
+ "epoch": 0.62,
921
+ "learning_rate": 1.8466338239225862e-05,
922
+ "loss": 1.1213,
923
+ "step": 710
924
+ },
925
+ {
926
+ "epoch": 0.63,
927
+ "learning_rate": 1.8440934090987576e-05,
928
+ "loss": 1.1292,
929
+ "step": 715
930
+ },
931
+ {
932
+ "epoch": 0.63,
933
+ "learning_rate": 1.8415339039949702e-05,
934
+ "loss": 1.1585,
935
+ "step": 720
936
+ },
937
+ {
938
+ "epoch": 0.64,
939
+ "learning_rate": 1.8389553664977905e-05,
940
+ "loss": 1.1279,
941
+ "step": 725
942
+ },
943
+ {
944
+ "epoch": 0.64,
945
+ "learning_rate": 1.8363578549242266e-05,
946
+ "loss": 1.167,
947
+ "step": 730
948
+ },
949
+ {
950
+ "epoch": 0.64,
951
+ "learning_rate": 1.8337414280204116e-05,
952
+ "loss": 1.1522,
953
+ "step": 735
954
+ },
955
+ {
956
+ "epoch": 0.65,
957
+ "learning_rate": 1.8311061449602725e-05,
958
+ "loss": 1.143,
959
+ "step": 740
960
+ },
961
+ {
962
+ "epoch": 0.65,
963
+ "learning_rate": 1.8284520653441936e-05,
964
+ "loss": 1.1549,
965
+ "step": 745
966
+ },
967
+ {
968
+ "epoch": 0.66,
969
+ "learning_rate": 1.8257792491976676e-05,
970
+ "loss": 1.1412,
971
+ "step": 750
972
+ },
973
+ {
974
+ "epoch": 0.66,
975
+ "learning_rate": 1.8230877569699387e-05,
976
+ "loss": 1.1554,
977
+ "step": 755
978
+ },
979
+ {
980
+ "epoch": 0.67,
981
+ "learning_rate": 1.8203776495326346e-05,
982
+ "loss": 1.1423,
983
+ "step": 760
984
+ },
985
+ {
986
+ "epoch": 0.67,
987
+ "learning_rate": 1.8176489881783915e-05,
988
+ "loss": 1.1276,
989
+ "step": 765
990
+ },
991
+ {
992
+ "epoch": 0.67,
993
+ "learning_rate": 1.8149018346194655e-05,
994
+ "loss": 1.1276,
995
+ "step": 770
996
+ },
997
+ {
998
+ "epoch": 0.68,
999
+ "learning_rate": 1.8121362509863397e-05,
1000
+ "loss": 1.1334,
1001
+ "step": 775
1002
+ },
1003
+ {
1004
+ "epoch": 0.68,
1005
+ "learning_rate": 1.8093522998263154e-05,
1006
+ "loss": 1.1267,
1007
+ "step": 780
1008
+ },
1009
+ {
1010
+ "epoch": 0.69,
1011
+ "learning_rate": 1.8065500441021018e-05,
1012
+ "loss": 1.1465,
1013
+ "step": 785
1014
+ },
1015
+ {
1016
+ "epoch": 0.69,
1017
+ "learning_rate": 1.803729547190389e-05,
1018
+ "loss": 1.1207,
1019
+ "step": 790
1020
+ },
1021
+ {
1022
+ "epoch": 0.7,
1023
+ "learning_rate": 1.800890872880414e-05,
1024
+ "loss": 1.1483,
1025
+ "step": 795
1026
+ },
1027
+ {
1028
+ "epoch": 0.7,
1029
+ "learning_rate": 1.7980340853725223e-05,
1030
+ "loss": 1.137,
1031
+ "step": 800
1032
+ },
1033
+ {
1034
+ "epoch": 0.7,
1035
+ "eval_loss": 1.1456836462020874,
1036
+ "eval_runtime": 426.0503,
1037
+ "eval_samples_per_second": 37.944,
1038
+ "eval_steps_per_second": 1.188,
1039
+ "step": 800
1040
+ },
1041
+ {
1042
+ "epoch": 0.71,
1043
+ "learning_rate": 1.795159249276711e-05,
1044
+ "loss": 1.1212,
1045
+ "step": 805
1046
+ },
1047
+ {
1048
+ "epoch": 0.71,
1049
+ "learning_rate": 1.792266429611171e-05,
1050
+ "loss": 1.1259,
1051
+ "step": 810
1052
+ },
1053
+ {
1054
+ "epoch": 0.71,
1055
+ "learning_rate": 1.7893556918008136e-05,
1056
+ "loss": 1.1232,
1057
+ "step": 815
1058
+ },
1059
+ {
1060
+ "epoch": 0.72,
1061
+ "learning_rate": 1.7864271016757942e-05,
1062
+ "loss": 1.1163,
1063
+ "step": 820
1064
+ },
1065
+ {
1066
+ "epoch": 0.72,
1067
+ "learning_rate": 1.7834807254700212e-05,
1068
+ "loss": 1.138,
1069
+ "step": 825
1070
+ },
1071
+ {
1072
+ "epoch": 0.73,
1073
+ "learning_rate": 1.7805166298196577e-05,
1074
+ "loss": 1.134,
1075
+ "step": 830
1076
+ },
1077
+ {
1078
+ "epoch": 0.73,
1079
+ "learning_rate": 1.7775348817616164e-05,
1080
+ "loss": 1.1575,
1081
+ "step": 835
1082
+ },
1083
+ {
1084
+ "epoch": 0.74,
1085
+ "learning_rate": 1.7745355487320418e-05,
1086
+ "loss": 1.138,
1087
+ "step": 840
1088
+ },
1089
+ {
1090
+ "epoch": 0.74,
1091
+ "learning_rate": 1.7715186985647857e-05,
1092
+ "loss": 1.1504,
1093
+ "step": 845
1094
+ },
1095
+ {
1096
+ "epoch": 0.74,
1097
+ "learning_rate": 1.768484399489873e-05,
1098
+ "loss": 1.1296,
1099
+ "step": 850
1100
+ },
1101
+ {
1102
+ "epoch": 0.75,
1103
+ "learning_rate": 1.7654327201319584e-05,
1104
+ "loss": 1.1413,
1105
+ "step": 855
1106
+ },
1107
+ {
1108
+ "epoch": 0.75,
1109
+ "learning_rate": 1.762363729508775e-05,
1110
+ "loss": 1.1365,
1111
+ "step": 860
1112
+ },
1113
+ {
1114
+ "epoch": 0.76,
1115
+ "learning_rate": 1.7592774970295714e-05,
1116
+ "loss": 1.1515,
1117
+ "step": 865
1118
+ },
1119
+ {
1120
+ "epoch": 0.76,
1121
+ "learning_rate": 1.7561740924935456e-05,
1122
+ "loss": 1.1423,
1123
+ "step": 870
1124
+ },
1125
+ {
1126
+ "epoch": 0.77,
1127
+ "learning_rate": 1.753053586088263e-05,
1128
+ "loss": 1.1244,
1129
+ "step": 875
1130
+ },
1131
+ {
1132
+ "epoch": 0.77,
1133
+ "learning_rate": 1.7499160483880694e-05,
1134
+ "loss": 1.1376,
1135
+ "step": 880
1136
+ },
1137
+ {
1138
+ "epoch": 0.78,
1139
+ "learning_rate": 1.7467615503524973e-05,
1140
+ "loss": 1.1232,
1141
+ "step": 885
1142
+ },
1143
+ {
1144
+ "epoch": 0.78,
1145
+ "learning_rate": 1.7435901633246585e-05,
1146
+ "loss": 1.1233,
1147
+ "step": 890
1148
+ },
1149
+ {
1150
+ "epoch": 0.78,
1151
+ "learning_rate": 1.740401959029632e-05,
1152
+ "loss": 1.1284,
1153
+ "step": 895
1154
+ },
1155
+ {
1156
+ "epoch": 0.79,
1157
+ "learning_rate": 1.7371970095728408e-05,
1158
+ "loss": 1.1321,
1159
+ "step": 900
1160
+ },
1161
+ {
1162
+ "epoch": 0.79,
1163
+ "eval_loss": 1.140657663345337,
1164
+ "eval_runtime": 424.8606,
1165
+ "eval_samples_per_second": 38.05,
1166
+ "eval_steps_per_second": 1.191,
1167
+ "step": 900
1168
+ },
1169
+ {
1170
+ "epoch": 0.79,
1171
+ "learning_rate": 1.7339753874384218e-05,
1172
+ "loss": 1.1389,
1173
+ "step": 905
1174
+ },
1175
+ {
1176
+ "epoch": 0.8,
1177
+ "learning_rate": 1.730737165487587e-05,
1178
+ "loss": 1.1325,
1179
+ "step": 910
1180
+ },
1181
+ {
1182
+ "epoch": 0.8,
1183
+ "learning_rate": 1.7274824169569747e-05,
1184
+ "loss": 1.1477,
1185
+ "step": 915
1186
+ },
1187
+ {
1188
+ "epoch": 0.81,
1189
+ "learning_rate": 1.7242112154569928e-05,
1190
+ "loss": 1.1233,
1191
+ "step": 920
1192
+ },
1193
+ {
1194
+ "epoch": 0.81,
1195
+ "learning_rate": 1.7209236349701553e-05,
1196
+ "loss": 1.1324,
1197
+ "step": 925
1198
+ },
1199
+ {
1200
+ "epoch": 0.81,
1201
+ "learning_rate": 1.717619749849409e-05,
1202
+ "loss": 1.1241,
1203
+ "step": 930
1204
+ },
1205
+ {
1206
+ "epoch": 0.82,
1207
+ "learning_rate": 1.7142996348164508e-05,
1208
+ "loss": 1.1471,
1209
+ "step": 935
1210
+ },
1211
+ {
1212
+ "epoch": 0.82,
1213
+ "learning_rate": 1.710963364960038e-05,
1214
+ "loss": 1.1388,
1215
+ "step": 940
1216
+ },
1217
+ {
1218
+ "epoch": 0.83,
1219
+ "learning_rate": 1.707611015734291e-05,
1220
+ "loss": 1.1375,
1221
+ "step": 945
1222
+ },
1223
+ {
1224
+ "epoch": 0.83,
1225
+ "learning_rate": 1.704242662956986e-05,
1226
+ "loss": 1.1476,
1227
+ "step": 950
1228
+ },
1229
+ {
1230
+ "epoch": 0.84,
1231
+ "learning_rate": 1.700858382807841e-05,
1232
+ "loss": 1.1454,
1233
+ "step": 955
1234
+ },
1235
+ {
1236
+ "epoch": 0.84,
1237
+ "learning_rate": 1.6974582518267913e-05,
1238
+ "loss": 1.1263,
1239
+ "step": 960
1240
+ },
1241
+ {
1242
+ "epoch": 0.85,
1243
+ "learning_rate": 1.694042346912261e-05,
1244
+ "loss": 1.1206,
1245
+ "step": 965
1246
+ },
1247
+ {
1248
+ "epoch": 0.85,
1249
+ "learning_rate": 1.6906107453194207e-05,
1250
+ "loss": 1.1315,
1251
+ "step": 970
1252
+ },
1253
+ {
1254
+ "epoch": 0.85,
1255
+ "learning_rate": 1.687163524658444e-05,
1256
+ "loss": 1.1444,
1257
+ "step": 975
1258
+ },
1259
+ {
1260
+ "epoch": 0.86,
1261
+ "learning_rate": 1.6837007628927483e-05,
1262
+ "loss": 1.1397,
1263
+ "step": 980
1264
+ },
1265
+ {
1266
+ "epoch": 0.86,
1267
+ "learning_rate": 1.680222538337235e-05,
1268
+ "loss": 1.1232,
1269
+ "step": 985
1270
+ },
1271
+ {
1272
+ "epoch": 0.87,
1273
+ "learning_rate": 1.6767289296565155e-05,
1274
+ "loss": 1.1276,
1275
+ "step": 990
1276
+ },
1277
+ {
1278
+ "epoch": 0.87,
1279
+ "learning_rate": 1.6732200158631343e-05,
1280
+ "loss": 1.1415,
1281
+ "step": 995
1282
+ },
1283
+ {
1284
+ "epoch": 0.88,
1285
+ "learning_rate": 1.6696958763157808e-05,
1286
+ "loss": 1.1156,
1287
+ "step": 1000
1288
+ },
1289
+ {
1290
+ "epoch": 0.88,
1291
+ "eval_loss": 1.1359467506408691,
1292
+ "eval_runtime": 426.3548,
1293
+ "eval_samples_per_second": 37.917,
1294
+ "eval_steps_per_second": 1.187,
1295
+ "step": 1000
1296
+ },
1297
+ {
1298
+ "epoch": 0.88,
1299
+ "learning_rate": 1.666156590717495e-05,
1300
+ "loss": 1.1168,
1301
+ "step": 1005
1302
+ },
1303
+ {
1304
+ "epoch": 0.88,
1305
+ "learning_rate": 1.6626022391138643e-05,
1306
+ "loss": 1.1237,
1307
+ "step": 1010
1308
+ },
1309
+ {
1310
+ "epoch": 0.89,
1311
+ "learning_rate": 1.6590329018912134e-05,
1312
+ "loss": 1.1189,
1313
+ "step": 1015
1314
+ },
1315
+ {
1316
+ "epoch": 0.89,
1317
+ "learning_rate": 1.655448659774787e-05,
1318
+ "loss": 1.1146,
1319
+ "step": 1020
1320
+ },
1321
+ {
1322
+ "epoch": 0.9,
1323
+ "learning_rate": 1.6518495938269242e-05,
1324
+ "loss": 1.135,
1325
+ "step": 1025
1326
+ },
1327
+ {
1328
+ "epoch": 0.9,
1329
+ "learning_rate": 1.6482357854452223e-05,
1330
+ "loss": 1.113,
1331
+ "step": 1030
1332
+ },
1333
+ {
1334
+ "epoch": 0.91,
1335
+ "learning_rate": 1.6446073163607e-05,
1336
+ "loss": 1.1233,
1337
+ "step": 1035
1338
+ },
1339
+ {
1340
+ "epoch": 0.91,
1341
+ "learning_rate": 1.6409642686359472e-05,
1342
+ "loss": 1.1344,
1343
+ "step": 1040
1344
+ },
1345
+ {
1346
+ "epoch": 0.92,
1347
+ "learning_rate": 1.637306724663267e-05,
1348
+ "loss": 1.1327,
1349
+ "step": 1045
1350
+ },
1351
+ {
1352
+ "epoch": 0.92,
1353
+ "learning_rate": 1.6336347671628162e-05,
1354
+ "loss": 1.1236,
1355
+ "step": 1050
1356
+ },
1357
+ {
1358
+ "epoch": 0.92,
1359
+ "learning_rate": 1.629948479180731e-05,
1360
+ "loss": 1.1192,
1361
+ "step": 1055
1362
+ },
1363
+ {
1364
+ "epoch": 0.93,
1365
+ "learning_rate": 1.6262479440872505e-05,
1366
+ "loss": 1.1621,
1367
+ "step": 1060
1368
+ },
1369
+ {
1370
+ "epoch": 0.93,
1371
+ "learning_rate": 1.622533245574832e-05,
1372
+ "loss": 1.1318,
1373
+ "step": 1065
1374
+ },
1375
+ {
1376
+ "epoch": 0.94,
1377
+ "learning_rate": 1.618804467656256e-05,
1378
+ "loss": 1.1231,
1379
+ "step": 1070
1380
+ },
1381
+ {
1382
+ "epoch": 0.94,
1383
+ "learning_rate": 1.6150616946627272e-05,
1384
+ "loss": 1.1166,
1385
+ "step": 1075
1386
+ },
1387
+ {
1388
+ "epoch": 0.95,
1389
+ "learning_rate": 1.6113050112419683e-05,
1390
+ "loss": 1.1252,
1391
+ "step": 1080
1392
+ },
1393
+ {
1394
+ "epoch": 0.95,
1395
+ "learning_rate": 1.6075345023563035e-05,
1396
+ "loss": 1.113,
1397
+ "step": 1085
1398
+ },
1399
+ {
1400
+ "epoch": 0.95,
1401
+ "learning_rate": 1.6037502532807382e-05,
1402
+ "loss": 1.1102,
1403
+ "step": 1090
1404
+ },
1405
+ {
1406
+ "epoch": 0.96,
1407
+ "learning_rate": 1.599952349601031e-05,
1408
+ "loss": 1.1273,
1409
+ "step": 1095
1410
+ },
1411
+ {
1412
+ "epoch": 0.96,
1413
+ "learning_rate": 1.5961408772117567e-05,
1414
+ "loss": 1.1395,
1415
+ "step": 1100
1416
+ },
1417
+ {
1418
+ "epoch": 0.96,
1419
+ "eval_loss": 1.1318247318267822,
1420
+ "eval_runtime": 426.4822,
1421
+ "eval_samples_per_second": 37.905,
1422
+ "eval_steps_per_second": 1.186,
1423
+ "step": 1100
1424
+ },
1425
+ {
1426
+ "epoch": 0.97,
1427
+ "learning_rate": 1.592315922314364e-05,
1428
+ "loss": 1.134,
1429
+ "step": 1105
1430
+ },
1431
+ {
1432
+ "epoch": 0.97,
1433
+ "learning_rate": 1.588477571415226e-05,
1434
+ "loss": 1.108,
1435
+ "step": 1110
1436
+ },
1437
+ {
1438
+ "epoch": 0.98,
1439
+ "learning_rate": 1.5846259113236855e-05,
1440
+ "loss": 1.103,
1441
+ "step": 1115
1442
+ },
1443
+ {
1444
+ "epoch": 0.98,
1445
+ "learning_rate": 1.580761029150089e-05,
1446
+ "loss": 1.1193,
1447
+ "step": 1120
1448
+ },
1449
+ {
1450
+ "epoch": 0.99,
1451
+ "learning_rate": 1.5768830123038172e-05,
1452
+ "loss": 1.1121,
1453
+ "step": 1125
1454
+ },
1455
+ {
1456
+ "epoch": 0.99,
1457
+ "learning_rate": 1.57299194849131e-05,
1458
+ "loss": 1.1311,
1459
+ "step": 1130
1460
+ },
1461
+ {
1462
+ "epoch": 0.99,
1463
+ "learning_rate": 1.5690879257140804e-05,
1464
+ "loss": 1.1179,
1465
+ "step": 1135
1466
+ },
1467
+ {
1468
+ "epoch": 1.0,
1469
+ "learning_rate": 1.5651710322667262e-05,
1470
+ "loss": 1.1305,
1471
+ "step": 1140
1472
+ },
1473
+ {
1474
+ "epoch": 1.0,
1475
+ "learning_rate": 1.5612413567349314e-05,
1476
+ "loss": 1.0956,
1477
+ "step": 1145
1478
+ },
1479
+ {
1480
+ "epoch": 1.01,
1481
+ "learning_rate": 1.557298987993465e-05,
1482
+ "loss": 1.0383,
1483
+ "step": 1150
1484
+ },
1485
+ {
1486
+ "epoch": 1.01,
1487
+ "learning_rate": 1.553344015204168e-05,
1488
+ "loss": 1.0692,
1489
+ "step": 1155
1490
+ },
1491
+ {
1492
+ "epoch": 1.02,
1493
+ "learning_rate": 1.5493765278139397e-05,
1494
+ "loss": 1.0767,
1495
+ "step": 1160
1496
+ },
1497
+ {
1498
+ "epoch": 1.02,
1499
+ "learning_rate": 1.5453966155527133e-05,
1500
+ "loss": 1.0617,
1501
+ "step": 1165
1502
+ },
1503
+ {
1504
+ "epoch": 1.02,
1505
+ "learning_rate": 1.541404368431426e-05,
1506
+ "loss": 1.075,
1507
+ "step": 1170
1508
+ },
1509
+ {
1510
+ "epoch": 1.03,
1511
+ "learning_rate": 1.537399876739985e-05,
1512
+ "loss": 1.0709,
1513
+ "step": 1175
1514
+ },
1515
+ {
1516
+ "epoch": 1.03,
1517
+ "learning_rate": 1.5333832310452232e-05,
1518
+ "loss": 1.0725,
1519
+ "step": 1180
1520
+ },
1521
+ {
1522
+ "epoch": 1.04,
1523
+ "learning_rate": 1.5293545221888542e-05,
1524
+ "loss": 1.071,
1525
+ "step": 1185
1526
+ },
1527
+ {
1528
+ "epoch": 1.04,
1529
+ "learning_rate": 1.525313841285414e-05,
1530
+ "loss": 1.0702,
1531
+ "step": 1190
1532
+ },
1533
+ {
1534
+ "epoch": 1.05,
1535
+ "learning_rate": 1.5212612797202033e-05,
1536
+ "loss": 1.0801,
1537
+ "step": 1195
1538
+ },
1539
+ {
1540
+ "epoch": 1.05,
1541
+ "learning_rate": 1.517196929147219e-05,
1542
+ "loss": 1.0564,
1543
+ "step": 1200
1544
+ },
1545
+ {
1546
+ "epoch": 1.05,
1547
+ "eval_loss": 1.1314754486083984,
1548
+ "eval_runtime": 425.7152,
1549
+ "eval_samples_per_second": 37.974,
1550
+ "eval_steps_per_second": 1.189,
1551
+ "step": 1200
1552
+ },
1553
+ {
1554
+ "epoch": 1.06,
1555
+ "learning_rate": 1.5131208814870822e-05,
1556
+ "loss": 1.0822,
1557
+ "step": 1205
1558
+ },
1559
+ {
1560
+ "epoch": 1.06,
1561
+ "learning_rate": 1.5090332289249586e-05,
1562
+ "loss": 1.0505,
1563
+ "step": 1210
1564
+ },
1565
+ {
1566
+ "epoch": 1.06,
1567
+ "learning_rate": 1.5049340639084742e-05,
1568
+ "loss": 1.0704,
1569
+ "step": 1215
1570
+ },
1571
+ {
1572
+ "epoch": 1.07,
1573
+ "learning_rate": 1.5008234791456242e-05,
1574
+ "loss": 1.0645,
1575
+ "step": 1220
1576
+ },
1577
+ {
1578
+ "epoch": 1.07,
1579
+ "learning_rate": 1.4967015676026768e-05,
1580
+ "loss": 1.0748,
1581
+ "step": 1225
1582
+ },
1583
+ {
1584
+ "epoch": 1.08,
1585
+ "learning_rate": 1.4925684225020694e-05,
1586
+ "loss": 1.0557,
1587
+ "step": 1230
1588
+ },
1589
+ {
1590
+ "epoch": 1.08,
1591
+ "learning_rate": 1.4884241373203014e-05,
1592
+ "loss": 1.0557,
1593
+ "step": 1235
1594
+ },
1595
+ {
1596
+ "epoch": 1.09,
1597
+ "learning_rate": 1.4842688057858203e-05,
1598
+ "loss": 1.0669,
1599
+ "step": 1240
1600
+ },
1601
+ {
1602
+ "epoch": 1.09,
1603
+ "learning_rate": 1.4801025218769001e-05,
1604
+ "loss": 1.0655,
1605
+ "step": 1245
1606
+ },
1607
+ {
1608
+ "epoch": 1.1,
1609
+ "learning_rate": 1.4759253798195183e-05,
1610
+ "loss": 1.0675,
1611
+ "step": 1250
1612
+ },
1613
+ {
1614
+ "epoch": 1.1,
1615
+ "learning_rate": 1.4717374740852236e-05,
1616
+ "loss": 1.0824,
1617
+ "step": 1255
1618
+ },
1619
+ {
1620
+ "epoch": 1.1,
1621
+ "learning_rate": 1.467538899388998e-05,
1622
+ "loss": 1.0723,
1623
+ "step": 1260
1624
+ },
1625
+ {
1626
+ "epoch": 1.11,
1627
+ "learning_rate": 1.463329750687118e-05,
1628
+ "loss": 1.0569,
1629
+ "step": 1265
1630
+ },
1631
+ {
1632
+ "epoch": 1.11,
1633
+ "learning_rate": 1.459110123175004e-05,
1634
+ "loss": 1.0472,
1635
+ "step": 1270
1636
+ },
1637
+ {
1638
+ "epoch": 1.12,
1639
+ "learning_rate": 1.4548801122850682e-05,
1640
+ "loss": 1.0633,
1641
+ "step": 1275
1642
+ },
1643
+ {
1644
+ "epoch": 1.12,
1645
+ "learning_rate": 1.450639813684558e-05,
1646
+ "loss": 1.0774,
1647
+ "step": 1280
1648
+ },
1649
+ {
1650
+ "epoch": 1.13,
1651
+ "learning_rate": 1.4463893232733886e-05,
1652
+ "loss": 1.0695,
1653
+ "step": 1285
1654
+ },
1655
+ {
1656
+ "epoch": 1.13,
1657
+ "learning_rate": 1.4421287371819781e-05,
1658
+ "loss": 1.0711,
1659
+ "step": 1290
1660
+ },
1661
+ {
1662
+ "epoch": 1.13,
1663
+ "learning_rate": 1.4378581517690711e-05,
1664
+ "loss": 1.0562,
1665
+ "step": 1295
1666
+ },
1667
+ {
1668
+ "epoch": 1.14,
1669
+ "learning_rate": 1.4335776636195605e-05,
1670
+ "loss": 1.0594,
1671
+ "step": 1300
1672
+ },
1673
+ {
1674
+ "epoch": 1.14,
1675
+ "eval_loss": 1.1295197010040283,
1676
+ "eval_runtime": 425.2423,
1677
+ "eval_samples_per_second": 38.016,
1678
+ "eval_steps_per_second": 1.19,
1679
+ "step": 1300
1680
+ },
1681
+ {
1682
+ "epoch": 1.14,
1683
+ "learning_rate": 1.4292873695423012e-05,
1684
+ "loss": 1.0657,
1685
+ "step": 1305
1686
+ },
1687
+ {
1688
+ "epoch": 1.15,
1689
+ "learning_rate": 1.4249873665679241e-05,
1690
+ "loss": 1.0535,
1691
+ "step": 1310
1692
+ },
1693
+ {
1694
+ "epoch": 1.15,
1695
+ "learning_rate": 1.4206777519466375e-05,
1696
+ "loss": 1.0706,
1697
+ "step": 1315
1698
+ },
1699
+ {
1700
+ "epoch": 1.16,
1701
+ "learning_rate": 1.4163586231460307e-05,
1702
+ "loss": 1.0553,
1703
+ "step": 1320
1704
+ },
1705
+ {
1706
+ "epoch": 1.16,
1707
+ "learning_rate": 1.4120300778488687e-05,
1708
+ "loss": 1.068,
1709
+ "step": 1325
1710
+ },
1711
+ {
1712
+ "epoch": 1.17,
1713
+ "learning_rate": 1.4076922139508828e-05,
1714
+ "loss": 1.0535,
1715
+ "step": 1330
1716
+ },
1717
+ {
1718
+ "epoch": 1.17,
1719
+ "learning_rate": 1.4033451295585565e-05,
1720
+ "loss": 1.0633,
1721
+ "step": 1335
1722
+ },
1723
+ {
1724
+ "epoch": 1.17,
1725
+ "learning_rate": 1.3989889229869071e-05,
1726
+ "loss": 1.0646,
1727
+ "step": 1340
1728
+ },
1729
+ {
1730
+ "epoch": 1.18,
1731
+ "learning_rate": 1.394623692757262e-05,
1732
+ "loss": 1.0881,
1733
+ "step": 1345
1734
+ },
1735
+ {
1736
+ "epoch": 1.18,
1737
+ "learning_rate": 1.3902495375950303e-05,
1738
+ "loss": 1.0478,
1739
+ "step": 1350
1740
+ },
1741
+ {
1742
+ "epoch": 1.19,
1743
+ "learning_rate": 1.3858665564274699e-05,
1744
+ "loss": 1.0718,
1745
+ "step": 1355
1746
+ },
1747
+ {
1748
+ "epoch": 1.19,
1749
+ "learning_rate": 1.3814748483814511e-05,
1750
+ "loss": 1.0504,
1751
+ "step": 1360
1752
+ },
1753
+ {
1754
+ "epoch": 1.2,
1755
+ "learning_rate": 1.3770745127812134e-05,
1756
+ "loss": 1.0751,
1757
+ "step": 1365
1758
+ },
1759
+ {
1760
+ "epoch": 1.2,
1761
+ "learning_rate": 1.3726656491461196e-05,
1762
+ "loss": 1.0644,
1763
+ "step": 1370
1764
+ },
1765
+ {
1766
+ "epoch": 1.2,
1767
+ "learning_rate": 1.3682483571884064e-05,
1768
+ "loss": 1.0691,
1769
+ "step": 1375
1770
+ },
1771
+ {
1772
+ "epoch": 1.21,
1773
+ "learning_rate": 1.3638227368109268e-05,
1774
+ "loss": 1.067,
1775
+ "step": 1380
1776
+ },
1777
+ {
1778
+ "epoch": 1.21,
1779
+ "learning_rate": 1.3593888881048922e-05,
1780
+ "loss": 1.06,
1781
+ "step": 1385
1782
+ },
1783
+ {
1784
+ "epoch": 1.22,
1785
+ "learning_rate": 1.3549469113476087e-05,
1786
+ "loss": 1.0621,
1787
+ "step": 1390
1788
+ },
1789
+ {
1790
+ "epoch": 1.22,
1791
+ "learning_rate": 1.3504969070002091e-05,
1792
+ "loss": 1.0634,
1793
+ "step": 1395
1794
+ },
1795
+ {
1796
+ "epoch": 1.23,
1797
+ "learning_rate": 1.3460389757053802e-05,
1798
+ "loss": 1.0711,
1799
+ "step": 1400
1800
+ },
1801
+ {
1802
+ "epoch": 1.23,
1803
+ "eval_loss": 1.1273547410964966,
1804
+ "eval_runtime": 424.8164,
1805
+ "eval_samples_per_second": 38.054,
1806
+ "eval_steps_per_second": 1.191,
1807
+ "step": 1400
1808
+ },
1809
+ {
1810
+ "epoch": 1.23,
1811
+ "learning_rate": 1.3415732182850873e-05,
1812
+ "loss": 1.0606,
1813
+ "step": 1405
1814
+ },
1815
+ {
1816
+ "epoch": 1.24,
1817
+ "learning_rate": 1.3370997357382943e-05,
1818
+ "loss": 1.0958,
1819
+ "step": 1410
1820
+ },
1821
+ {
1822
+ "epoch": 1.24,
1823
+ "learning_rate": 1.3326186292386778e-05,
1824
+ "loss": 1.069,
1825
+ "step": 1415
1826
+ },
1827
+ {
1828
+ "epoch": 1.24,
1829
+ "learning_rate": 1.3281300001323416e-05,
1830
+ "loss": 1.071,
1831
+ "step": 1420
1832
+ },
1833
+ {
1834
+ "epoch": 1.25,
1835
+ "learning_rate": 1.3236339499355217e-05,
1836
+ "loss": 1.071,
1837
+ "step": 1425
1838
+ },
1839
+ {
1840
+ "epoch": 1.25,
1841
+ "learning_rate": 1.3191305803322929e-05,
1842
+ "loss": 1.0537,
1843
+ "step": 1430
1844
+ },
1845
+ {
1846
+ "epoch": 1.26,
1847
+ "learning_rate": 1.3146199931722674e-05,
1848
+ "loss": 1.0743,
1849
+ "step": 1435
1850
+ },
1851
+ {
1852
+ "epoch": 1.26,
1853
+ "learning_rate": 1.3101022904682918e-05,
1854
+ "loss": 1.0541,
1855
+ "step": 1440
1856
+ },
1857
+ {
1858
+ "epoch": 1.27,
1859
+ "learning_rate": 1.3055775743941409e-05,
1860
+ "loss": 1.0694,
1861
+ "step": 1445
1862
+ },
1863
+ {
1864
+ "epoch": 1.27,
1865
+ "learning_rate": 1.3010459472822046e-05,
1866
+ "loss": 1.0468,
1867
+ "step": 1450
1868
+ },
1869
+ {
1870
+ "epoch": 1.27,
1871
+ "learning_rate": 1.2965075116211769e-05,
1872
+ "loss": 1.0636,
1873
+ "step": 1455
1874
+ },
1875
+ {
1876
+ "epoch": 1.28,
1877
+ "learning_rate": 1.2919623700537342e-05,
1878
+ "loss": 1.0709,
1879
+ "step": 1460
1880
+ },
1881
+ {
1882
+ "epoch": 1.28,
1883
+ "learning_rate": 1.287410625374217e-05,
1884
+ "loss": 1.0631,
1885
+ "step": 1465
1886
+ },
1887
+ {
1888
+ "epoch": 1.29,
1889
+ "learning_rate": 1.282852380526303e-05,
1890
+ "loss": 1.0554,
1891
+ "step": 1470
1892
+ },
1893
+ {
1894
+ "epoch": 1.29,
1895
+ "learning_rate": 1.2782877386006807e-05,
1896
+ "loss": 1.0683,
1897
+ "step": 1475
1898
+ },
1899
+ {
1900
+ "epoch": 1.3,
1901
+ "learning_rate": 1.2737168028327163e-05,
1902
+ "loss": 1.0509,
1903
+ "step": 1480
1904
+ },
1905
+ {
1906
+ "epoch": 1.3,
1907
+ "learning_rate": 1.2691396766001192e-05,
1908
+ "loss": 1.0449,
1909
+ "step": 1485
1910
+ },
1911
+ {
1912
+ "epoch": 1.31,
1913
+ "learning_rate": 1.2645564634206054e-05,
1914
+ "loss": 1.0571,
1915
+ "step": 1490
1916
+ },
1917
+ {
1918
+ "epoch": 1.31,
1919
+ "learning_rate": 1.2599672669495537e-05,
1920
+ "loss": 1.0684,
1921
+ "step": 1495
1922
+ },
1923
+ {
1924
+ "epoch": 1.31,
1925
+ "learning_rate": 1.2553721909776644e-05,
1926
+ "loss": 1.0624,
1927
+ "step": 1500
1928
+ },
1929
+ {
1930
+ "epoch": 1.31,
1931
+ "eval_loss": 1.125585675239563,
1932
+ "eval_runtime": 426.6579,
1933
+ "eval_samples_per_second": 37.89,
1934
+ "eval_steps_per_second": 1.186,
1935
+ "step": 1500
1936
+ },
1937
+ {
1938
+ "epoch": 1.32,
1939
+ "learning_rate": 1.2507713394286088e-05,
1940
+ "loss": 1.0529,
1941
+ "step": 1505
1942
+ },
1943
+ {
1944
+ "epoch": 1.32,
1945
+ "learning_rate": 1.246164816356682e-05,
1946
+ "loss": 1.0729,
1947
+ "step": 1510
1948
+ },
1949
+ {
1950
+ "epoch": 1.33,
1951
+ "learning_rate": 1.2415527259444471e-05,
1952
+ "loss": 1.0738,
1953
+ "step": 1515
1954
+ },
1955
+ {
1956
+ "epoch": 1.33,
1957
+ "learning_rate": 1.2369351725003802e-05,
1958
+ "loss": 1.0858,
1959
+ "step": 1520
1960
+ },
1961
+ {
1962
+ "epoch": 1.34,
1963
+ "learning_rate": 1.232312260456511e-05,
1964
+ "loss": 1.0608,
1965
+ "step": 1525
1966
+ },
1967
+ {
1968
+ "epoch": 1.34,
1969
+ "learning_rate": 1.2276840943660613e-05,
1970
+ "loss": 1.0667,
1971
+ "step": 1530
1972
+ },
1973
+ {
1974
+ "epoch": 1.34,
1975
+ "learning_rate": 1.2230507789010792e-05,
1976
+ "loss": 1.0672,
1977
+ "step": 1535
1978
+ },
1979
+ {
1980
+ "epoch": 1.35,
1981
+ "learning_rate": 1.2184124188500735e-05,
1982
+ "loss": 1.0576,
1983
+ "step": 1540
1984
+ },
1985
+ {
1986
+ "epoch": 1.35,
1987
+ "learning_rate": 1.2137691191156425e-05,
1988
+ "loss": 1.049,
1989
+ "step": 1545
1990
+ },
1991
+ {
1992
+ "epoch": 1.36,
1993
+ "learning_rate": 1.209120984712102e-05,
1994
+ "loss": 1.0601,
1995
+ "step": 1550
1996
+ },
1997
+ {
1998
+ "epoch": 1.36,
1999
+ "learning_rate": 1.2044681207631104e-05,
2000
+ "loss": 1.0605,
2001
+ "step": 1555
2002
+ },
2003
+ {
2004
+ "epoch": 1.37,
2005
+ "learning_rate": 1.1998106324992906e-05,
2006
+ "loss": 1.0784,
2007
+ "step": 1560
2008
+ },
2009
+ {
2010
+ "epoch": 1.37,
2011
+ "learning_rate": 1.1951486252558508e-05,
2012
+ "loss": 1.0623,
2013
+ "step": 1565
2014
+ },
2015
+ {
2016
+ "epoch": 1.38,
2017
+ "learning_rate": 1.1904822044702017e-05,
2018
+ "loss": 1.0658,
2019
+ "step": 1570
2020
+ },
2021
+ {
2022
+ "epoch": 1.38,
2023
+ "learning_rate": 1.1858114756795718e-05,
2024
+ "loss": 1.0739,
2025
+ "step": 1575
2026
+ },
2027
+ {
2028
+ "epoch": 1.38,
2029
+ "learning_rate": 1.1811365445186213e-05,
2030
+ "loss": 1.073,
2031
+ "step": 1580
2032
+ },
2033
+ {
2034
+ "epoch": 1.39,
2035
+ "learning_rate": 1.1764575167170525e-05,
2036
+ "loss": 1.0548,
2037
+ "step": 1585
2038
+ },
2039
+ {
2040
+ "epoch": 1.39,
2041
+ "learning_rate": 1.1717744980972178e-05,
2042
+ "loss": 1.0725,
2043
+ "step": 1590
2044
+ },
2045
+ {
2046
+ "epoch": 1.4,
2047
+ "learning_rate": 1.1670875945717282e-05,
2048
+ "loss": 1.0649,
2049
+ "step": 1595
2050
+ },
2051
+ {
2052
+ "epoch": 1.4,
2053
+ "learning_rate": 1.1623969121410563e-05,
2054
+ "loss": 1.0652,
2055
+ "step": 1600
2056
+ },
2057
+ {
2058
+ "epoch": 1.4,
2059
+ "eval_loss": 1.1232975721359253,
2060
+ "eval_runtime": 426.1914,
2061
+ "eval_samples_per_second": 37.931,
2062
+ "eval_steps_per_second": 1.187,
2063
+ "step": 1600
2064
+ },
+ {"epoch": 1.41, "learning_rate": 1.1577025568911395e-05, "loss": 1.0705, "step": 1605},
+ {"epoch": 1.41, "learning_rate": 1.1530046349909816e-05, "loss": 1.0584, "step": 1610},
+ {"epoch": 1.41, "learning_rate": 1.1483032526902502e-05, "loss": 1.0655, "step": 1615},
+ {"epoch": 1.42, "learning_rate": 1.1435985163168745e-05, "loss": 1.0515, "step": 1620},
+ {"epoch": 1.42, "learning_rate": 1.1388905322746406e-05, "loss": 1.071, "step": 1625},
+ {"epoch": 1.43, "learning_rate": 1.1341794070407847e-05, "loss": 1.064, "step": 1630},
+ {"epoch": 1.43, "learning_rate": 1.1294652471635857e-05, "loss": 1.0576, "step": 1635},
+ {"epoch": 1.44, "learning_rate": 1.124748159259954e-05, "loss": 1.0586, "step": 1640},
+ {"epoch": 1.44, "learning_rate": 1.1200282500130222e-05, "loss": 1.0596, "step": 1645},
+ {"epoch": 1.45, "learning_rate": 1.1153056261697303e-05, "loss": 1.0633, "step": 1650},
+ {"epoch": 1.45, "learning_rate": 1.1105803945384134e-05, "loss": 1.0529, "step": 1655},
+ {"epoch": 1.45, "learning_rate": 1.1058526619863846e-05, "loss": 1.0574, "step": 1660},
+ {"epoch": 1.46, "learning_rate": 1.1011225354375184e-05, "loss": 1.0768, "step": 1665},
+ {"epoch": 1.46, "learning_rate": 1.0963901218698331e-05, "loss": 1.061, "step": 1670},
+ {"epoch": 1.47, "learning_rate": 1.0916555283130714e-05, "loss": 1.0585, "step": 1675},
+ {"epoch": 1.47, "learning_rate": 1.0869188618462778e-05, "loss": 1.0675, "step": 1680},
+ {"epoch": 1.48, "learning_rate": 1.0821802295953795e-05, "loss": 1.0437, "step": 1685},
+ {"epoch": 1.48, "learning_rate": 1.0774397387307628e-05, "loss": 1.059, "step": 1690},
+ {"epoch": 1.48, "learning_rate": 1.0726974964648478e-05, "loss": 1.047, "step": 1695},
+ {"epoch": 1.49, "learning_rate": 1.0679536100496661e-05, "loss": 1.0626, "step": 1700},
+ {"epoch": 1.49, "eval_loss": 1.1213114261627197, "eval_runtime": 426.3267, "eval_samples_per_second": 37.919, "eval_steps_per_second": 1.187, "step": 1700},
+ {"epoch": 1.49, "learning_rate": 1.063208186774433e-05, "loss": 1.0578, "step": 1705},
+ {"epoch": 1.5, "learning_rate": 1.0584613339631222e-05, "loss": 1.0652, "step": 1710},
+ {"epoch": 1.5, "learning_rate": 1.0537131589720387e-05, "loss": 1.0497, "step": 1715},
+ {"epoch": 1.51, "learning_rate": 1.0489637691873889e-05, "loss": 1.0764, "step": 1720},
+ {"epoch": 1.51, "learning_rate": 1.0442132720228551e-05, "loss": 1.071, "step": 1725},
+ {"epoch": 1.52, "learning_rate": 1.0394617749171636e-05, "loss": 1.0666, "step": 1730},
+ {"epoch": 1.52, "learning_rate": 1.0347093853316555e-05, "loss": 1.0495, "step": 1735},
+ {"epoch": 1.52, "learning_rate": 1.0299562107478569e-05, "loss": 1.0489, "step": 1740},
+ {"epoch": 1.53, "learning_rate": 1.0252023586650476e-05, "loss": 1.0737, "step": 1745},
+ {"epoch": 1.53, "learning_rate": 1.0204479365978298e-05, "loss": 1.045, "step": 1750},
+ {"epoch": 1.54, "learning_rate": 1.0156930520736965e-05, "loss": 1.0637, "step": 1755},
+ {"epoch": 1.54, "learning_rate": 1.0109378126306002e-05, "loss": 1.0737, "step": 1760},
+ {"epoch": 1.55, "learning_rate": 1.00618232581452e-05, "loss": 1.0626, "step": 1765},
+ {"epoch": 1.55, "learning_rate": 1.0014266991770299e-05, "loss": 1.0483, "step": 1770},
+ {"epoch": 1.55, "learning_rate": 9.966710402728658e-06, "loss": 1.0653, "step": 1775},
+ {"epoch": 1.56, "learning_rate": 9.919154566574942e-06, "loss": 1.07, "step": 1780},
+ {"epoch": 1.56, "learning_rate": 9.871600558846772e-06, "loss": 1.0668, "step": 1785},
+ {"epoch": 1.57, "learning_rate": 9.82404945504044e-06, "loss": 1.0525, "step": 1790},
+ {"epoch": 1.57, "learning_rate": 9.776502330586535e-06, "loss": 1.0578, "step": 1795},
+ {"epoch": 1.58, "learning_rate": 9.728960260825675e-06, "loss": 1.0457, "step": 1800},
+ {"epoch": 1.58, "eval_loss": 1.1195377111434937, "eval_runtime": 425.5901, "eval_samples_per_second": 37.985, "eval_steps_per_second": 1.189, "step": 1800},
+ {"epoch": 1.58, "learning_rate": 9.681424320984136e-06, "loss": 1.0608, "step": 1805},
+ {"epoch": 1.59, "learning_rate": 9.633895586149575e-06, "loss": 1.042, "step": 1810},
+ {"epoch": 1.59, "learning_rate": 9.586375131246688e-06, "loss": 1.0456, "step": 1815},
+ {"epoch": 1.59, "learning_rate": 9.538864031012913e-06, "loss": 1.0777, "step": 1820},
+ {"epoch": 1.6, "learning_rate": 9.491363359974121e-06, "loss": 1.0497, "step": 1825},
+ {"epoch": 1.6, "learning_rate": 9.443874192420312e-06, "loss": 1.0441, "step": 1830},
+ {"epoch": 1.61, "learning_rate": 9.396397602381318e-06, "loss": 1.0767, "step": 1835},
+ {"epoch": 1.61, "learning_rate": 9.34893466360252e-06, "loss": 1.0545, "step": 1840},
+ {"epoch": 1.62, "learning_rate": 9.301486449520543e-06, "loss": 1.0691, "step": 1845},
+ {"epoch": 1.62, "learning_rate": 9.254054033239017e-06, "loss": 1.0398, "step": 1850},
+ {"epoch": 1.63, "learning_rate": 9.206638487504265e-06, "loss": 1.0498, "step": 1855},
+ {"epoch": 1.63, "learning_rate": 9.15924088468106e-06, "loss": 1.0486, "step": 1860},
+ {"epoch": 1.63, "learning_rate": 9.11186229672839e-06, "loss": 1.0574, "step": 1865},
+ {"epoch": 1.64, "learning_rate": 9.064503795175175e-06, "loss": 1.06, "step": 1870},
+ {"epoch": 1.64, "learning_rate": 9.017166451096077e-06, "loss": 1.0461, "step": 1875},
+ {"epoch": 1.65, "learning_rate": 8.969851335087233e-06, "loss": 1.0605, "step": 1880},
+ {"epoch": 1.65, "learning_rate": 8.922559517242078e-06, "loss": 1.0398, "step": 1885},
+ {"epoch": 1.66, "learning_rate": 8.87529206712712e-06, "loss": 1.0581, "step": 1890},
+ {"epoch": 1.66, "learning_rate": 8.828050053757764e-06, "loss": 1.0431, "step": 1895},
+ {"epoch": 1.66, "learning_rate": 8.780834545574122e-06, "loss": 1.0665, "step": 1900},
+ {"epoch": 1.66, "eval_loss": 1.1177818775177002, "eval_runtime": 425.4649, "eval_samples_per_second": 37.996, "eval_steps_per_second": 1.189, "step": 1900},
+ {"epoch": 1.67, "learning_rate": 8.73364661041687e-06, "loss": 1.0675, "step": 1905},
+ {"epoch": 1.67, "learning_rate": 8.686487315503066e-06, "loss": 1.0327, "step": 1910},
+ {"epoch": 1.68, "learning_rate": 8.63935772740205e-06, "loss": 1.0395, "step": 1915},
+ {"epoch": 1.68, "learning_rate": 8.59225891201129e-06, "loss": 1.0442, "step": 1920},
+ {"epoch": 1.69, "learning_rate": 8.545191934532294e-06, "loss": 1.0605, "step": 1925},
+ {"epoch": 1.69, "learning_rate": 8.498157859446512e-06, "loss": 1.0384, "step": 1930},
+ {"epoch": 1.7, "learning_rate": 8.451157750491265e-06, "loss": 1.0631, "step": 1935},
+ {"epoch": 1.7, "learning_rate": 8.404192670635683e-06, "loss": 1.0804, "step": 1940},
+ {"epoch": 1.7, "learning_rate": 8.35726368205667e-06, "loss": 1.0556, "step": 1945},
+ {"epoch": 1.71, "learning_rate": 8.310371846114875e-06, "loss": 1.0428, "step": 1950},
+ {"epoch": 1.71, "learning_rate": 8.263518223330698e-06, "loss": 1.0596, "step": 1955},
+ {"epoch": 1.72, "learning_rate": 8.216703873360292e-06, "loss": 1.0443, "step": 1960},
+ {"epoch": 1.72, "learning_rate": 8.169929854971598e-06, "loss": 1.0637, "step": 1965},
+ {"epoch": 1.73, "learning_rate": 8.123197226020426e-06, "loss": 1.0336, "step": 1970},
+ {"epoch": 1.73, "learning_rate": 8.076507043426482e-06, "loss": 1.0734, "step": 1975},
+ {"epoch": 1.73, "learning_rate": 8.02986036314952e-06, "loss": 1.0465, "step": 1980},
+ {"epoch": 1.74, "learning_rate": 7.983258240165406e-06, "loss": 1.0622, "step": 1985},
+ {"epoch": 1.74, "learning_rate": 7.936701728442308e-06, "loss": 1.0638, "step": 1990},
+ {"epoch": 1.75, "learning_rate": 7.890191880916813e-06, "loss": 1.0572, "step": 1995},
+ {"epoch": 1.75, "learning_rate": 7.84372974947016e-06, "loss": 1.07, "step": 2000},
+ {"epoch": 1.75, "eval_loss": 1.1158109903335571, "eval_runtime": 425.626, "eval_samples_per_second": 37.982, "eval_steps_per_second": 1.189, "step": 2000},
+ {"epoch": 1.76, "learning_rate": 7.797316384904402e-06, "loss": 1.0536, "step": 2005},
+ {"epoch": 1.76, "learning_rate": 7.750952836918679e-06, "loss": 1.0543, "step": 2010},
+ {"epoch": 1.77, "learning_rate": 7.704640154085466e-06, "loss": 1.06, "step": 2015},
+ {"epoch": 1.77, "learning_rate": 7.658379383826841e-06, "loss": 1.0623, "step": 2020},
+ {"epoch": 1.77, "learning_rate": 7.612171572390834e-06, "loss": 1.0611, "step": 2025},
+ {"epoch": 1.78, "learning_rate": 7.566017764827717e-06, "loss": 1.0635, "step": 2030},
+ {"epoch": 1.78, "learning_rate": 7.519919004966414e-06, "loss": 1.0583, "step": 2035},
+ {"epoch": 1.79, "learning_rate": 7.473876335390857e-06, "loss": 1.0371, "step": 2040},
+ {"epoch": 1.79, "learning_rate": 7.427890797416435e-06, "loss": 1.0538, "step": 2045},
+ {"epoch": 1.8, "learning_rate": 7.3819634310664224e-06, "loss": 1.0404, "step": 2050},
+ {"epoch": 1.8, "learning_rate": 7.336095275048474e-06, "loss": 1.0607, "step": 2055},
+ {"epoch": 1.8, "learning_rate": 7.29028736673111e-06, "loss": 1.0592, "step": 2060},
+ {"epoch": 1.81, "learning_rate": 7.244540742120294e-06, "loss": 1.0323, "step": 2065},
+ {"epoch": 1.81, "learning_rate": 7.1988564358359566e-06, "loss": 1.0516, "step": 2070},
+ {"epoch": 1.82, "learning_rate": 7.153235481088624e-06, "loss": 1.0695, "step": 2075},
+ {"epoch": 1.82, "learning_rate": 7.107678909656052e-06, "loss": 1.0659, "step": 2080},
+ {"epoch": 1.83, "learning_rate": 7.062187751859868e-06, "loss": 1.0667, "step": 2085},
+ {"epoch": 1.83, "learning_rate": 7.016763036542305e-06, "loss": 1.0574, "step": 2090},
+ {"epoch": 1.84, "learning_rate": 6.971405791042889e-06, "loss": 1.0494, "step": 2095},
+ {"epoch": 1.84, "learning_rate": 6.92611704117525e-06, "loss": 1.0567, "step": 2100},
+ {"epoch": 1.84, "eval_loss": 1.114147663116455, "eval_runtime": 424.5226, "eval_samples_per_second": 38.08, "eval_steps_per_second": 1.192, "step": 2100},
+ {"epoch": 1.84, "learning_rate": 6.880897811203877e-06, "loss": 1.0624, "step": 2105},
+ {"epoch": 1.85, "learning_rate": 6.835749123820997e-06, "loss": 1.048, "step": 2110},
+ {"epoch": 1.85, "learning_rate": 6.790672000123405e-06, "loss": 1.0783, "step": 2115},
+ {"epoch": 1.86, "learning_rate": 6.7456674595894065e-06, "loss": 1.0464, "step": 2120},
+ {"epoch": 1.86, "learning_rate": 6.700736520055725e-06, "loss": 1.0437, "step": 2125},
+ {"epoch": 1.87, "learning_rate": 6.6558801976945206e-06, "loss": 1.0552, "step": 2130},
+ {"epoch": 1.87, "learning_rate": 6.611099506990372e-06, "loss": 1.0622, "step": 2135},
+ {"epoch": 1.87, "learning_rate": 6.566395460717356e-06, "loss": 1.0519, "step": 2140},
+ {"epoch": 1.88, "learning_rate": 6.521769069916136e-06, "loss": 1.0433, "step": 2145},
+ {"epoch": 1.88, "learning_rate": 6.477221343871088e-06, "loss": 1.0578, "step": 2150},
+ {"epoch": 1.89, "learning_rate": 6.4327532900874945e-06, "loss": 1.0628, "step": 2155},
+ {"epoch": 1.89, "learning_rate": 6.38836591426873e-06, "loss": 1.0376, "step": 2160},
+ {"epoch": 1.9, "learning_rate": 6.344060220293542e-06, "loss": 1.056, "step": 2165},
+ {"epoch": 1.9, "learning_rate": 6.299837210193331e-06, "loss": 1.0485, "step": 2170},
+ {"epoch": 1.91, "learning_rate": 6.255697884129495e-06, "loss": 1.073, "step": 2175},
+ {"epoch": 1.91, "learning_rate": 6.2116432403708015e-06, "loss": 1.0604, "step": 2180},
+ {"epoch": 1.91, "learning_rate": 6.167674275270832e-06, "loss": 1.0579, "step": 2185},
+ {"epoch": 1.92, "learning_rate": 6.123791983245411e-06, "loss": 1.0493, "step": 2190},
+ {"epoch": 1.92, "learning_rate": 6.0799973567501616e-06, "loss": 1.0583, "step": 2195},
+ {"epoch": 1.93, "learning_rate": 6.036291386258013e-06, "loss": 1.0304, "step": 2200},
+ {"epoch": 1.93, "eval_loss": 1.1127439737319946, "eval_runtime": 425.0783, "eval_samples_per_second": 38.031, "eval_steps_per_second": 1.19, "step": 2200},
+ {"epoch": 1.93, "learning_rate": 5.992675060236841e-06, "loss": 1.0728, "step": 2205},
+ {"epoch": 1.94, "learning_rate": 5.94914936512708e-06, "loss": 1.0472, "step": 2210},
+ {"epoch": 1.94, "learning_rate": 5.905715285319442e-06, "loss": 1.0555, "step": 2215},
+ {"epoch": 1.94, "learning_rate": 5.862373803132625e-06, "loss": 1.067, "step": 2220},
+ {"epoch": 1.95, "learning_rate": 5.819125898791115e-06, "loss": 1.0504, "step": 2225},
+ {"epoch": 1.95, "learning_rate": 5.775972550403015e-06, "loss": 1.0541, "step": 2230},
+ {"epoch": 1.96, "learning_rate": 5.732914733937917e-06, "loss": 1.0508, "step": 2235},
+ {"epoch": 1.96, "learning_rate": 5.6899534232048395e-06, "loss": 1.0763, "step": 2240},
+ {"epoch": 1.97, "learning_rate": 5.647089589830186e-06, "loss": 1.0592, "step": 2245},
+ {"epoch": 1.97, "learning_rate": 5.604324203235798e-06, "loss": 1.0535, "step": 2250},
+ {"epoch": 1.98, "learning_rate": 5.561658230616997e-06, "loss": 1.0667, "step": 2255},
+ {"epoch": 1.98, "learning_rate": 5.519092636920741e-06, "loss": 1.05, "step": 2260},
+ {"epoch": 1.98, "learning_rate": 5.476628384823773e-06, "loss": 1.0721, "step": 2265},
+ {"epoch": 1.99, "learning_rate": 5.434266434710879e-06, "loss": 1.0546, "step": 2270},
+ {"epoch": 1.99, "learning_rate": 5.392007744653134e-06, "loss": 1.0448, "step": 2275},
+ {"epoch": 2.0, "learning_rate": 5.3498532703862685e-06, "loss": 1.0486, "step": 2280},
+ {"epoch": 2.0, "learning_rate": 5.307803965289023e-06, "loss": 1.036, "step": 2285},
+ {"epoch": 2.01, "learning_rate": 5.265860780361602e-06, "loss": 0.9944, "step": 2290},
+ {"epoch": 2.01, "learning_rate": 5.2240246642041705e-06, "loss": 0.9918, "step": 2295},
+ {"epoch": 2.01, "learning_rate": 5.182296562995383e-06, "loss": 1.0132, "step": 2300},
+ {"epoch": 2.01, "eval_loss": 1.1170079708099365, "eval_runtime": 424.7277, "eval_samples_per_second": 38.062, "eval_steps_per_second": 1.191, "step": 2300},
+ {"epoch": 2.02, "learning_rate": 5.140677420471003e-06, "loss": 1.0045, "step": 2305},
+ {"epoch": 2.02, "learning_rate": 5.099168177902539e-06, "loss": 1.0096, "step": 2310},
+ {"epoch": 2.03, "learning_rate": 5.057769774075985e-06, "loss": 1.0135, "step": 2315},
+ {"epoch": 2.03, "learning_rate": 5.0164831452705494e-06, "loss": 1.015, "step": 2320},
+ {"epoch": 2.04, "learning_rate": 4.9753092252375245e-06, "loss": 1.008, "step": 2325},
+ {"epoch": 2.04, "learning_rate": 4.934248945179127e-06, "loss": 1.0101, "step": 2330},
+ {"epoch": 2.05, "learning_rate": 4.893303233727472e-06, "loss": 1.0, "step": 2335},
+ {"epoch": 2.05, "learning_rate": 4.8524730169235404e-06, "loss": 1.0187, "step": 2340},
+ {"epoch": 2.05, "learning_rate": 4.811759218196262e-06, "loss": 1.009, "step": 2345},
+ {"epoch": 2.06, "learning_rate": 4.771162758341612e-06, "loss": 1.0071, "step": 2350},
+ {"epoch": 2.06, "learning_rate": 4.730684555501799e-06, "loss": 1.0141, "step": 2355},
+ {"epoch": 2.07, "learning_rate": 4.690325525144488e-06, "loss": 1.0091, "step": 2360},
+ {"epoch": 2.07, "learning_rate": 4.6500865800421015e-06, "loss": 1.0098, "step": 2365},
+ {"epoch": 2.08, "learning_rate": 4.609968630251187e-06, "loss": 1.0056, "step": 2370},
+ {"epoch": 2.08, "learning_rate": 4.569972583091807e-06, "loss": 0.9974, "step": 2375},
+ {"epoch": 2.08, "learning_rate": 4.5300993431270565e-06, "loss": 1.0151, "step": 2380},
+ {"epoch": 2.09, "learning_rate": 4.490349812142564e-06, "loss": 1.0208, "step": 2385},
+ {"epoch": 2.09, "learning_rate": 4.450724889126135e-06, "loss": 1.0104, "step": 2390},
+ {"epoch": 2.1, "learning_rate": 4.411225470247387e-06, "loss": 0.9955, "step": 2395},
+ {"epoch": 2.1, "learning_rate": 4.371852448837511e-06, "loss": 1.0203, "step": 2400},
+ {"epoch": 2.1, "eval_loss": 1.1169644594192505, "eval_runtime": 424.5951, "eval_samples_per_second": 38.074, "eval_steps_per_second": 1.192, "step": 2400},
+ {"epoch": 2.11, "learning_rate": 4.332606715369041e-06, "loss": 1.0023, "step": 2405},
+ {"epoch": 2.11, "learning_rate": 4.2934891574357375e-06, "loss": 0.9944, "step": 2410},
+ {"epoch": 2.12, "learning_rate": 4.254500659732496e-06, "loss": 1.015, "step": 2415},
+ {"epoch": 2.12, "learning_rate": 4.2156421040353435e-06, "loss": 1.0193, "step": 2420},
+ {"epoch": 2.12, "learning_rate": 4.1769143691815095e-06, "loss": 1.0124, "step": 2425},
+ {"epoch": 2.13, "learning_rate": 4.138318331049525e-06, "loss": 1.0151, "step": 2430},
+ {"epoch": 2.13, "learning_rate": 4.09985486253944e-06, "loss": 0.9957, "step": 2435},
+ {"epoch": 2.14, "learning_rate": 4.061524833553058e-06, "loss": 1.0116, "step": 2440},
+ {"epoch": 2.14, "learning_rate": 4.0233291109742726e-06, "loss": 1.0087, "step": 2445},
+ {"epoch": 2.15, "learning_rate": 3.985268558649472e-06, "loss": 0.991, "step": 2450},
+ {"epoch": 2.15, "learning_rate": 3.947344037367983e-06, "loss": 1.0072, "step": 2455},
+ {"epoch": 2.16, "learning_rate": 3.909556404842609e-06, "loss": 0.9968, "step": 2460},
+ {"epoch": 2.16, "learning_rate": 3.871906515690249e-06, "loss": 1.0264, "step": 2465},
+ {"epoch": 2.16, "learning_rate": 3.834395221412537e-06, "loss": 1.0041, "step": 2470},
+ {"epoch": 2.17, "learning_rate": 3.797023370376618e-06, "loss": 1.0179, "step": 2475},
+ {"epoch": 2.17, "learning_rate": 3.7597918077959306e-06, "loss": 0.9973, "step": 2480},
+ {"epoch": 2.18, "learning_rate": 3.7227013757111197e-06, "loss": 1.0187, "step": 2485},
+ {"epoch": 2.18, "learning_rate": 3.6857529129709655e-06, "loss": 0.9903, "step": 2490},
+ {"epoch": 2.19, "learning_rate": 3.64894725521344e-06, "loss": 1.0046, "step": 2495},
+ {"epoch": 2.19, "learning_rate": 3.61228523484678e-06, "loss": 1.0088, "step": 2500},
+ {"epoch": 2.19, "eval_loss": 1.116758108139038, "eval_runtime": 424.4029, "eval_samples_per_second": 38.091, "eval_steps_per_second": 1.192, "step": 2500},
+ {"epoch": 2.19, "learning_rate": 3.5757676810306775e-06, "loss": 1.0017, "step": 2505},
+ {"epoch": 2.2, "learning_rate": 3.539395419657531e-06, "loss": 1.012, "step": 2510},
+ {"epoch": 2.2, "learning_rate": 3.5031692733337475e-06, "loss": 1.0044, "step": 2515},
+ {"epoch": 2.21, "learning_rate": 3.4670900613611656e-06, "loss": 0.9957, "step": 2520},
+ {"epoch": 2.21, "learning_rate": 3.431158599718496e-06, "loss": 1.0168, "step": 2525},
+ {"epoch": 2.22, "learning_rate": 3.3953757010428946e-06, "loss": 1.0231, "step": 2530},
+ {"epoch": 2.22, "learning_rate": 3.359742174611558e-06, "loss": 1.0059, "step": 2535},
+ {"epoch": 2.23, "learning_rate": 3.3242588263234467e-06, "loss": 1.0014, "step": 2540},
+ {"epoch": 2.23, "learning_rate": 3.2889264586810323e-06, "loss": 0.9925, "step": 2545},
+ {"epoch": 2.23, "learning_rate": 3.2537458707721735e-06, "loss": 1.0068, "step": 2550},
+ {"epoch": 2.24, "learning_rate": 3.2187178582520206e-06, "loss": 1.0192, "step": 2555},
+ {"epoch": 2.24, "learning_rate": 3.183843213325042e-06, "loss": 0.9856, "step": 2560},
+ {"epoch": 2.25, "learning_rate": 3.149122724727083e-06, "loss": 1.018, "step": 2565},
+ {"epoch": 2.25, "learning_rate": 3.1145571777075577e-06, "loss": 1.0022, "step": 2570},
+ {"epoch": 2.26, "learning_rate": 3.080147354011659e-06, "loss": 0.9909, "step": 2575},
+ {"epoch": 2.26, "learning_rate": 3.0458940318626963e-06, "loss": 1.0082, "step": 2580},
+ {"epoch": 2.26, "learning_rate": 3.011797985944499e-06, "loss": 1.0178, "step": 2585},
+ {"epoch": 2.27, "learning_rate": 2.977859987383874e-06, "loss": 1.0055, "step": 2590},
+ {"epoch": 2.27, "learning_rate": 2.944080803733197e-06, "loss": 0.9863, "step": 2595},
+ {"epoch": 2.28, "learning_rate": 2.9104611989530196e-06, "loss": 1.002, "step": 2600},
+ {"epoch": 2.28, "eval_loss": 1.116153359413147, "eval_runtime": 424.7781, "eval_samples_per_second": 38.058, "eval_steps_per_second": 1.191, "step": 2600},
+ {"epoch": 2.28, "learning_rate": 2.8770019333948197e-06, "loss": 1.0141, "step": 2605},
+ {"epoch": 2.29, "learning_rate": 2.843703763783785e-06, "loss": 1.0078, "step": 2610},
+ {"epoch": 2.29, "learning_rate": 2.810567443201717e-06, "loss": 1.0025, "step": 2615},
+ {"epoch": 2.3, "learning_rate": 2.7775937210699754e-06, "loss": 1.009, "step": 2620},
+ {"epoch": 2.3, "learning_rate": 2.7447833431325566e-06, "loss": 0.9782, "step": 2625},
+ {"epoch": 2.3, "learning_rate": 2.712137051439202e-06, "loss": 1.0158, "step": 2630},
+ {"epoch": 2.31, "learning_rate": 2.6796555843286375e-06, "loss": 1.007, "step": 2635},
+ {"epoch": 2.31, "learning_rate": 2.6473396764118575e-06, "loss": 1.0088, "step": 2640},
+ {"epoch": 2.32, "learning_rate": 2.6151900585555178e-06, "loss": 1.0053, "step": 2645},
+ {"epoch": 2.32, "learning_rate": 2.583207457865413e-06, "loss": 1.0002, "step": 2650},
+ {"epoch": 2.33, "learning_rate": 2.5513925976700217e-06, "loss": 1.0031, "step": 2655},
+ {"epoch": 2.33, "learning_rate": 2.519746197504144e-06, "loss": 1.0156, "step": 2660},
+ {"epoch": 2.33, "learning_rate": 2.488268973092649e-06, "loss": 1.0093, "step": 2665},
+ {"epoch": 2.34, "learning_rate": 2.456961636334265e-06, "loss": 1.0296, "step": 2670},
+ {"epoch": 2.34, "learning_rate": 2.425824895285488e-06, "loss": 1.0007, "step": 2675},
+ {"epoch": 2.35, "learning_rate": 2.3948594541445735e-06, "loss": 1.0107, "step": 2680},
+ {"epoch": 2.35, "learning_rate": 2.3640660132356e-06, "loss": 0.9927, "step": 2685},
+ {"epoch": 2.36, "learning_rate": 2.333445268992639e-06, "loss": 1.0004, "step": 2690},
+ {"epoch": 2.36, "learning_rate": 2.302997913943994e-06, "loss": 1.014, "step": 2695},
+ {"epoch": 2.37, "learning_rate": 2.272724636696555e-06, "loss": 1.0004, "step": 2700},
+ {"epoch": 2.37, "eval_loss": 1.1156811714172363, "eval_runtime": 425.4077, "eval_samples_per_second": 38.001, "eval_steps_per_second": 1.189, "step": 2700},
+ {"epoch": 2.37, "learning_rate": 2.2426261219202006e-06, "loss": 1.0019, "step": 2705},
+ {"epoch": 2.37, "learning_rate": 2.21270305033234e-06, "loss": 1.0127, "step": 2710},
+ {"epoch": 2.38, "learning_rate": 2.1829560986824937e-06, "loss": 1.0168, "step": 2715},
+ {"epoch": 2.38, "learning_rate": 2.1533859397370084e-06, "loss": 1.0059, "step": 2720},
+ {"epoch": 2.39, "learning_rate": 2.1239932422638234e-06, "loss": 1.0172, "step": 2725},
+ {"epoch": 2.39, "learning_rate": 2.0947786710173545e-06, "loss": 1.0097, "step": 2730},
+ {"epoch": 2.4, "learning_rate": 2.06574288672347e-06, "loss": 1.0079, "step": 2735},
+ {"epoch": 2.4, "learning_rate": 2.0368865460645202e-06, "loss": 0.9919, "step": 2740},
+ {"epoch": 2.4, "learning_rate": 2.008210301664518e-06, "loss": 1.0132, "step": 2745},
+ {"epoch": 2.41, "learning_rate": 1.9797148020743496e-06, "loss": 1.0044, "step": 2750},
+ {"epoch": 2.41, "learning_rate": 1.951400691757133e-06, "loss": 1.0037, "step": 2755},
+ {"epoch": 2.42, "learning_rate": 1.9232686110736165e-06, "loss": 1.0074, "step": 2760},
+ {"epoch": 2.42, "learning_rate": 1.895319196267722e-06, "loss": 1.0179, "step": 2765},
+ {"epoch": 2.43, "learning_rate": 1.8675530794521312e-06, "loss": 0.9982, "step": 2770},
+ {"epoch": 2.43, "learning_rate": 1.8399708885940136e-06, "loss": 1.0088, "step": 2775},
+ {"epoch": 2.44, "learning_rate": 1.8125732475007983e-06, "loss": 1.0117, "step": 2780},
+ {"epoch": 2.44, "learning_rate": 1.785360775806093e-06, "loss": 1.0079, "step": 2785},
+ {"epoch": 2.44, "learning_rate": 1.7583340889556456e-06, "loss": 0.9921, "step": 2790},
+ {"epoch": 2.45, "learning_rate": 1.7314937981934399e-06, "loss": 1.0013, "step": 2795},
+ {"epoch": 2.45, "learning_rate": 1.7048405105478717e-06, "loss": 1.0058, "step": 2800},
+ {"epoch": 2.45, "eval_loss": 1.115598440170288, "eval_runtime": 425.102, "eval_samples_per_second": 38.029, "eval_steps_per_second": 1.19, "step": 2800},
+ {"epoch": 2.46, "learning_rate": 1.6783748288180058e-06, "loss": 1.0048, "step": 2805},
+ {"epoch": 2.46, "learning_rate": 1.652097351559967e-06, "loss": 1.0108, "step": 2810},
+ {"epoch": 2.47, "learning_rate": 1.6260086730733749e-06, "loss": 1.014, "step": 2815},
+ {"epoch": 2.47, "learning_rate": 1.6001093833879288e-06, "loss": 1.0037, "step": 2820},
+ {"epoch": 2.47, "learning_rate": 1.5744000682500426e-06, "loss": 0.9953, "step": 2825},
+ {"epoch": 2.48, "learning_rate": 1.5488813091096145e-06, "loss": 1.0152, "step": 2830},
+ {"epoch": 2.48, "learning_rate": 1.523553683106861e-06, "loss": 1.0242, "step": 2835},
+ {"epoch": 2.49, "learning_rate": 1.49841776305928e-06, "loss": 1.0172, "step": 2840},
+ {"epoch": 2.49, "learning_rate": 1.4734741174486788e-06, "loss": 0.9999, "step": 2845},
+ {"epoch": 2.5, "learning_rate": 1.4487233104083354e-06, "loss": 1.011, "step": 2850},
+ {"epoch": 2.5, "learning_rate": 1.424165901710224e-06, "loss": 1.0154, "step": 2855},
+ {"epoch": 2.51, "learning_rate": 1.3998024467523596e-06, "loss": 0.9912, "step": 2860},
+ {"epoch": 2.51, "learning_rate": 1.3756334965462502e-06, "loss": 1.0276, "step": 2865},
+ {"epoch": 2.51, "learning_rate": 1.3516595977044112e-06, "loss": 1.0051, "step": 2870},
+ {"epoch": 2.52, "learning_rate": 1.3278812924280192e-06, "loss": 0.9893, "step": 2875},
+ {"epoch": 2.52, "learning_rate": 1.304299118494652e-06, "loss": 1.0048, "step": 2880},
+ {"epoch": 2.53, "learning_rate": 1.2809136092461084e-06, "loss": 1.0196, "step": 2885},
+ {"epoch": 2.53, "learning_rate": 1.2577252935763695e-06, "loss": 1.0261, "step": 2890},
+ {"epoch": 2.54, "learning_rate": 1.234734695919616e-06, "loss": 1.0118, "step": 2895},
+ {"epoch": 2.54, "learning_rate": 1.2119423362383776e-06, "loss": 1.0118, "step": 2900},
+ {"epoch": 2.54, "eval_loss": 1.1150012016296387, "eval_runtime": 425.4063, "eval_samples_per_second": 38.001, "eval_steps_per_second": 1.189, "step": 2900},
+ {"epoch": 2.54, "learning_rate": 1.189348730011778e-06, "loss": 0.9927, "step": 2905},
+ {"epoch": 2.55, "learning_rate": 1.166954388223862e-06, "loss": 1.0023, "step": 2910},
+ {"epoch": 2.55, "learning_rate": 1.1447598173520558e-06, "loss": 1.009, "step": 2915},
+ {"epoch": 2.56, "learning_rate": 1.1227655193556973e-06, "loss": 1.0049, "step": 2920},
+ {"epoch": 2.56, "learning_rate": 1.1009719916646977e-06, "loss": 1.0065, "step": 2925},
+ {"epoch": 2.57, "learning_rate": 1.079379727168276e-06, "loss": 1.0006, "step": 2930},
+ {"epoch": 2.57, "learning_rate": 1.0579892142038284e-06, "loss": 1.0174, "step": 2935},
+ {"epoch": 2.58, "learning_rate": 1.0368009365458697e-06, "loss": 1.0044, "step": 2940},
+ {"epoch": 2.58, "learning_rate": 1.0158153733950981e-06, "loss": 1.0045, "step": 2945},
+ {"epoch": 2.58, "learning_rate": 9.950329993675623e-07, "loss": 0.9917, "step": 2950},
+ {"epoch": 2.59, "learning_rate": 9.744542844839145e-07, "loss": 1.006, "step": 2955},
+ {"epoch": 2.59, "learning_rate": 9.540796941587983e-07, "loss": 0.9955, "step": 2960},
+ {"epoch": 2.6, "learning_rate": 9.33909689190301e-07, "loss": 1.0065, "step": 2965},
+ {"epoch": 2.6, "learning_rate": 9.139447257495537e-07, "loss": 1.0212, "step": 2970},
+ {"epoch": 2.61, "learning_rate": 8.941852553703966e-07, "loss": 1.0156, "step": 2975},
+ {"epoch": 2.61, "learning_rate": 8.746317249391834e-07, "loss": 1.0045, "step": 2980},
+ {"epoch": 2.61, "learning_rate": 8.55284576684654e-07, "loss": 1.0163, "step": 2985},
+ {"epoch": 2.62, "learning_rate": 8.361442481679561e-07, "loss": 1.0048, "step": 2990},
+ {"epoch": 2.62, "learning_rate": 8.172111722727294e-07, "loss": 1.0107, "step": 2995},
+ {"epoch": 2.63, "learning_rate": 7.984857771953303e-07, "loss": 0.9941, "step": 3000},
+ {"epoch": 2.63, "eval_loss": 1.1147677898406982, "eval_runtime": 425.2487, "eval_samples_per_second": 38.015, "eval_steps_per_second": 1.19, "step": 3000},
+ {"epoch": 2.63, "learning_rate": 7.799684864351342e-07, "loss": 1.0063, "step": 3005},
+ {"epoch": 2.64, "learning_rate": 7.616597187849683e-07, "loss": 0.9949, "step": 3010},
+ {"epoch": 2.64, "learning_rate": 7.435598883216377e-07, "loss": 1.0021, "step": 3015},
+ {"epoch": 2.65, "learning_rate": 7.256694043965528e-07, "loss": 1.001, "step": 3020},
+ {"epoch": 2.65, "learning_rate": 7.07988671626485e-07, "loss": 1.018, "step": 3025},
+ {"epoch": 2.65, "learning_rate": 6.905180898844022e-07, "loss": 0.9964, "step": 3030},
+ {"epoch": 2.66, "learning_rate": 6.732580542904343e-07, "loss": 0.9887, "step": 3035},
+ {"epoch": 2.66, "learning_rate": 6.562089552029305e-07, "loss": 1.0085, "step": 3040},
+ {"epoch": 2.67, "learning_rate": 6.39371178209639e-07, "loss": 1.0064, "step": 3045},
+ {"epoch": 2.67, "learning_rate": 6.227451041189759e-07, "loss": 1.0034, "step": 3050},
+ {"epoch": 2.68, "learning_rate": 6.063311089514256e-07, "loss": 1.0158, "step": 3055},
+ {"epoch": 2.68, "learning_rate": 5.901295639310212e-07, "loss": 1.0108, "step": 3060},
+ {"epoch": 2.69, "learning_rate": 5.74140835476964e-07, "loss": 1.0218, "step": 3065},
+ {"epoch": 2.69, "learning_rate": 5.583652851953225e-07, "loss": 1.0002, "step": 3070},
+ {"epoch": 2.69, "learning_rate": 5.428032698708696e-07, "loss": 0.9974, "step": 3075},
+ {"epoch": 2.7, "learning_rate": 5.274551414589979e-07, "loss": 1.0071, "step": 3080},
+ {"epoch": 2.7, "learning_rate": 5.123212470777684e-07, "loss": 1.0226, "step": 3085},
+ {"epoch": 2.71, "learning_rate": 4.97401929000062e-07, "loss": 1.005, "step": 3090},
+ {"epoch": 2.71, "learning_rate": 4.826975246458299e-07, "loss": 1.0073, "step": 3095},
+ {"epoch": 2.72, "learning_rate": 4.6820836657446964e-07, "loss": 1.0127, "step": 3100},
+ {"epoch": 2.72, "eval_loss": 1.11471688747406, "eval_runtime": 425.0331, "eval_samples_per_second": 38.035, "eval_steps_per_second": 1.19, "step": 3100},
+ {"epoch": 2.72, "learning_rate": 4.5393478247730436e-07, "loss": 1.0162, "step": 3105},
+ {"epoch": 2.72, "learning_rate": 4.398770951701647e-07, "loss": 1.0037, "step": 3110},
+ {"epoch": 2.73, "learning_rate": 4.2603562258609176e-07, "loss": 1.0036, "step": 3115},
+ {"epoch": 2.73, "learning_rate": 4.124106777681536e-07, "loss": 0.9959, "step": 3120},
+ {"epoch": 2.74, "learning_rate": 3.9900256886235e-07, "loss": 0.9993, "step": 3125},
+ {"epoch": 2.74, "learning_rate": 3.8581159911065926e-07, "loss": 1.0137, "step": 3130},
+ {"epoch": 2.75, "learning_rate": 3.7283806684416777e-07, "loss": 1.0166, "step": 3135},
+ {"epoch": 2.75, "learning_rate": 3.600822654763314e-07, "loss": 0.9931, "step": 3140},
+ {"epoch": 2.76, "learning_rate": 3.4754448349633374e-07, "loss": 0.9939, "step": 3145},
+ {"epoch": 2.76, "learning_rate": 3.35225004462566e-07, "loss": 1.0135, "step": 3150},
+ {"epoch": 2.76, "learning_rate": 3.2312410699620986e-07, "loss": 1.0142, "step": 3155},
+ {"epoch": 2.77, "learning_rate": 3.11242064774937e-07, "loss": 0.993, "step": 3160},
+ {"epoch": 2.77, "learning_rate": 2.99579146526725e-07, "loss": 0.9925, "step": 3165},
+ {"epoch": 2.78, "learning_rate": 2.8813561602377025e-07, "loss": 1.0074, "step": 3170},
+ {"epoch": 2.78, "learning_rate": 2.7691173207653355e-07, "loss": 1.0099, "step": 3175},
+ {"epoch": 2.79, "learning_rate": 2.659077485278716e-07, "loss": 1.0112, "step": 3180},
+ {"epoch": 2.79, "learning_rate": 2.551239142473161e-07, "loss": 1.0004, "step": 3185},
+ {"epoch": 2.79, "learning_rate": 2.4456047312542365e-07, "loss": 1.0155, "step": 3190},
+ {"epoch": 2.8, "learning_rate": 2.3421766406827807e-07, "loss": 1.0085, "step": 3195},
+ {"epoch": 2.8, "learning_rate": 2.2409572099207576e-07, "loss": 1.0039, "step": 3200},
+ {"epoch": 2.8, "eval_loss": 1.114408254623413, "eval_runtime": 425.0695, "eval_samples_per_second": 38.031, "eval_steps_per_second": 1.19, "step": 3200},
+ {"epoch": 2.81, "learning_rate": 2.1419487281784002e-07, "loss": 0.9914, "step": 3205},
+ {"epoch": 2.81, "learning_rate": 2.045153434662428e-07, "loss": 1.0038, "step": 3210},
+ {"epoch": 2.82, "learning_rate": 1.9505735185254226e-07, "loss": 0.9977, "step": 3215},
+ {"epoch": 2.82, "learning_rate": 1.8582111188162555e-07, "loss": 1.0048, "step": 3220},
+ {"epoch": 2.83, "learning_rate": 1.7680683244318154e-07, "loss": 1.0028, "step": 3225},
+ {"epoch": 2.83, "learning_rate": 1.6801471740696462e-07, "loss": 1.0116, "step": 3230},
+ {"epoch": 2.83, "learning_rate": 1.594449656181918e-07, "loss": 1.0002, "step": 3235},
+ {"epoch": 2.84, "learning_rate": 1.510977708930461e-07, "loss": 1.0176, "step": 3240},
+ {"epoch": 2.84, "learning_rate": 1.4297332201428703e-07, "loss": 0.9904, "step": 3245},
+ {"epoch": 2.85, "learning_rate": 1.3507180272698594e-07, "loss": 1.025, "step": 3250},
+ {"epoch": 2.85, "learning_rate": 1.2739339173436838e-07, "loss": 1.0045, "step": 3255},
+ {"epoch": 2.86, "learning_rate": 1.1993826269377506e-07, "loss": 1.0145, "step": 3260},
+ {"epoch": 2.86, "learning_rate": 1.1270658421273062e-07, "loss": 0.9961, "step": 3265},
+ {"epoch": 2.86, "learning_rate": 1.0569851984513102e-07, "loss": 0.9992, "step": 3270},
+ {"epoch": 2.87, "learning_rate": 9.89142280875477e-08, "loss": 1.0086, "step": 3275},
+ {"epoch": 2.87, "learning_rate": 9.235386237564148e-08, "loss": 1.0101, "step": 3280},
+ {"epoch": 2.88, "learning_rate": 8.601757108068876e-08, "loss": 0.9943, "step": 3285},
+ {"epoch": 2.88, "learning_rate": 7.990549750623189e-08, "loss": 1.0004, "step": 3290},
+ {"epoch": 2.89, "learning_rate": 7.401777988483406e-08, "loss": 1.0132, "step": 3295},
+ {"epoch": 2.89, "learning_rate": 6.835455137495395e-08, "loss": 1.0, "step": 3300},
+ {"epoch": 2.89, "eval_loss": 1.1143468618392944, "eval_runtime": 425.1593, "eval_samples_per_second": 38.023, "eval_steps_per_second": 1.19, "step": 3300},
+ {"epoch": 2.9, "learning_rate": 6.2915940057936e-08, "loss": 0.9879, "step": 3305},
+ {"epoch": 2.9, "learning_rate": 5.7702068935109324e-08, "loss": 0.9821, "step": 3310},
+ {"epoch": 2.9, "learning_rate": 5.271305592501108e-08, "loss": 0.9954, "step": 3315},
+ {"epoch": 2.91, "learning_rate": 4.794901386071749e-08, "loss": 1.0052, "step": 3320},
+ {"epoch": 2.91, "learning_rate": 4.341005048728919e-08, "loss": 1.0061, "step": 3325},
+ {"epoch": 2.92, "learning_rate": 3.9096268459339893e-08, "loss": 1.0068, "step": 3330},
+ {"epoch": 2.92, "learning_rate": 3.50077653387082e-08, "loss": 1.0101, "step": 3335},
+ {"epoch": 2.93, "learning_rate": 3.114463359225717e-08, "loss": 1.0068, "step": 3340},
+ {"epoch": 2.93, "learning_rate": 2.7506960589781527e-08, "loss": 1.0072, "step": 3345},
+ {"epoch": 2.93, "learning_rate": 2.4094828602027052e-08, "loss": 0.9931, "step": 3350},
+ {"epoch": 2.94, "learning_rate": 2.0908314798836483e-08, "loss": 1.0091, "step": 3355},
+ {"epoch": 2.94, "learning_rate": 1.7947491247399808e-08, "loss": 1.0162, "step": 3360},
+ {"epoch": 2.95, "learning_rate": 1.5212424910627797e-08, "loss": 1.0119, "step": 3365},
+ {"epoch": 2.95, "learning_rate": 1.2703177645634335e-08, "loss": 1.0053, "step": 3370},
+ {"epoch": 2.96, "learning_rate": 1.0419806202336403e-08, "loss": 1.0044, "step": 3375},
+ {"epoch": 2.96, "learning_rate": 8.362362222177345e-09, "loss": 1.008, "step": 3380},
+ {"epoch": 2.97, "learning_rate": 6.530892236951136e-09, "loss": 0.9943, "step": 3385},
+ {"epoch": 2.97, "learning_rate": 4.925437667755439e-09, "loss": 1.005, "step": 3390},
+ {"epoch": 2.97, "learning_rate": 3.5460348240501376e-09, "loss": 0.9884, "step": 3395},
+ {"epoch": 2.98, "learning_rate": 2.39271490284243e-09, "loss": 1.0188, "step": 3400},
+ {"epoch": 2.98, "eval_loss": 1.114337682723999, "eval_runtime": 425.2781, "eval_samples_per_second": 38.013, "eval_steps_per_second": 1.19, "step": 3400},
+ {"epoch": 2.98, "learning_rate": 1.4655039879740706e-09, "loss": 1.0096, "step": 3405},
+ {"epoch": 2.99, "learning_rate": 7.644230495373884e-10, "loss": 1.0001, "step": 3410},
+ {"epoch": 2.99, "learning_rate": 2.8948794339789255e-10, "loss": 1.0028, "step": 3415},
+ {"epoch": 3.0, "learning_rate": 4.07094108345607e-11, "loss": 1.009, "step": 3420},
+ {"epoch": 3.0, "step": 3423, "total_flos": 5.633765648992567e+18, "train_loss": 1.0902616986746403, "train_runtime": 47329.7113, "train_samples_per_second": 9.258, "train_steps_per_second": 0.072}
+ ],
+ "logging_steps": 5,
+ "max_steps": 3423,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 3,
+ "save_steps": 100,
+ "total_flos": 5.633765648992567e+18,
+ "train_batch_size": 16,
+ "trial_name": null,
+ "trial_params": null
+ }
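
The log above follows the standard `transformers` `trainer_state.json` layout: `log_history` holds a training record (`loss`, `learning_rate`, `step`) every 5 steps (`logging_steps: 5`) and an evaluation record (`eval_loss` plus throughput fields) every 100 steps. A minimal sketch for pulling the two loss curves out of this file; the path is an assumption, point it at wherever this checkpoint's `trainer_state.json` lives:

```python
import json

# Assumed path; adjust to the actual location of this commit's trainer_state.json.
with open("trainer_state.json") as f:
    state = json.load(f)

# Training records carry "loss"; evaluation records carry "eval_loss".
# The final summary record uses "train_loss", so it is excluded from both lists.
train_curve = [(r["step"], r["loss"]) for r in state["log_history"] if "loss" in r]
eval_curve = [(r["step"], r["eval_loss"]) for r in state["log_history"] if "eval_loss" in r]

print(f"{len(train_curve)} train points, {len(eval_curve)} eval points")
print("last eval loss:", eval_curve[-1][1])  # 1.114337682723999 at step 3400 in this run
```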