05/31/2024 00:02:42 - INFO - transformers.tokenization_utils_base - loading file tokenizer.model
05/31/2024 00:02:42 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json
05/31/2024 00:02:42 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json
05/31/2024 00:02:42 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json
05/31/2024 00:02:42 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json
05/31/2024 00:02:42 - WARNING - transformers.tokenization_utils_base - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
05/31/2024 00:02:42 - INFO - llmtuner.data.template - Replace eos token: <|end|>
05/31/2024 00:02:42 - WARNING - llmtuner.data.template - New tokens have been added, make sure `resize_vocab` is True.
05/31/2024 00:02:42 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/LangGPT_community.jsonl...
05/31/2024 00:02:42 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/31/2024 00:02:44 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_alpaca.jsonl...
05/31/2024 00:02:44 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/31/2024 00:02:48 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_seed.jsonl...
05/31/2024 00:02:48 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/31/2024 00:02:50 - WARNING - transformers.tokenization_utils_base - Token indices sequence length is longer than the specified maximum sequence length for this model (5088 > 4096). Running this sequence through the model will result in indexing errors
05/31/2024 00:03:05 - INFO - transformers.configuration_utils - loading configuration file /datas/huggingface/Phi-3-mini-4k-instruct/config.json
05/31/2024 00:03:05 - INFO - transformers.configuration_utils - loading configuration file /datas/huggingface/Phi-3-mini-4k-instruct/config.json
05/31/2024 00:03:05 - INFO - transformers.configuration_utils - Model config Phi3Config {
  "_name_or_path": "/datas/huggingface/Phi-3-mini-4k-instruct",
  "architectures": [
    "Phi3ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_phi3.Phi3Config",
    "AutoModelForCausalLM": "modeling_phi3.Phi3ForCausalLM"
  },
  "bos_token_id": 1,
  "embd_pdrop": 0.0,
  "eos_token_id": 32000,
  "hidden_act": "silu",
  "hidden_size": 3072,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "max_position_embeddings": 4096,
  "model_type": "phi3",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "original_max_position_embeddings": 4096,
  "pad_token_id": 32000,
  "resid_pdrop": 0.0,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "sliding_window": 2047,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.41.1",
  "use_cache": true,
  "vocab_size": 32064
}
05/31/2024 00:03:05 - WARNING - transformers_modules.Phi-3-mini-4k-instruct.modeling_phi3 - `flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
05/31/2024 00:03:05 - WARNING - transformers_modules.Phi-3-mini-4k-instruct.modeling_phi3 - Current `flash-attention` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.
05/31/2024 00:03:05 - INFO - transformers.modeling_utils - loading weights file /datas/huggingface/Phi-3-mini-4k-instruct/model.safetensors.index.json
05/31/2024 00:03:05 - INFO - transformers.modeling_utils - Instantiating Phi3ForCausalLM model under default dtype torch.bfloat16.
05/31/2024 00:03:05 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 32000,
  "pad_token_id": 32000
}
05/31/2024 00:03:19 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing Phi3ForCausalLM.
05/31/2024 00:03:19 - INFO - transformers.modeling_utils - All the weights of Phi3ForCausalLM were initialized from the model checkpoint at /datas/huggingface/Phi-3-mini-4k-instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use Phi3ForCausalLM for predictions without further training.
05/31/2024 00:03:19 - INFO - transformers.generation.configuration_utils - loading configuration file /datas/huggingface/Phi-3-mini-4k-instruct/generation_config.json
05/31/2024 00:03:19 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": [
    32000,
    32001,
    32007
  ],
  "pad_token_id": 32000
}
05/31/2024 00:03:19 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/31/2024 00:03:19 - INFO - llmtuner.model.utils.attention - Using vanilla Attention implementation.
05/31/2024 00:03:19 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
05/31/2024 00:03:19 - INFO - llmtuner.model.loader - trainable params: 3145728 || all params: 3824225280 || trainable%: 0.0823
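The trainable-parameter count above is consistent with a rank-8 LoRA adapter on the fused qkv_proj projection (3072 -> 9216) of each of the 32 decoder layers; neither the LoRA rank nor the target modules are printed in this log, so that decomposition is an assumption. The percentage check uses only the two counts actually logged. A minimal sanity check in Python:

# Sanity-check "trainable params: 3145728 || all params: 3824225280 || trainable%: 0.0823".
# Assumed (not logged): lora_rank = 8, LoRA applied to qkv_proj only.
rank = 8                     # assumed LoRA rank
d_in, d_out = 3072, 9216     # hidden_size -> fused q/k/v width in Phi-3-mini
layers = 32                  # num_hidden_layers from the config dump above
trainable = rank * (d_in + d_out) * layers
print(trainable)                                   # 3145728
print(round(100 * trainable / 3_824_225_280, 4))   # 0.0823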
05/31/2024 00:03:19 - INFO - transformers.trainer - Using auto half precision backend
05/31/2024 00:03:19 - INFO - transformers.trainer - ***** Running training *****
05/31/2024 00:03:19 - INFO - transformers.trainer - Num examples = 8,531
05/31/2024 00:03:19 - INFO - transformers.trainer - Num Epochs = 5
05/31/2024 00:03:19 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/31/2024 00:03:19 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/31/2024 00:03:19 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/31/2024 00:03:19 - INFO - transformers.trainer - Total optimization steps = 2,665
05/31/2024 00:03:19 - INFO - transformers.trainer - Number of trainable parameters = 3,145,728
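The step count in this summary follows from the other numbers: 8,531 examples at a per-device batch size of 2 give ceil(8531 / 2) = 4266 batches per epoch, integer division by the 8 accumulation steps gives 533 optimizer steps per epoch, and 5 epochs give 2,665. A minimal check, assuming a single training device (consistent with the total batch size of 16 = 2 x 8):

import math

# Reproduce "Total optimization steps = 2,665" with the HF Trainer arithmetic.
num_examples, per_device_batch = 8531, 2
grad_accum, epochs = 8, 5
batches_per_epoch = math.ceil(num_examples / per_device_batch)  # 4266
steps_per_epoch = batches_per_epoch // grad_accum               # 533 (trailing partial accumulation dropped)
print(steps_per_epoch * epochs)                                 # 2665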
05/31/2024 00:03:20 - WARNING - transformers_modules.Phi-3-mini-4k-instruct.modeling_phi3 - You are not running the flash-attention implementation, expect numerical differences.
05/31/2024 00:05:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6093, 'learning_rate': 5.0000e-05, 'epoch': 0.01}
05/31/2024 00:07:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5708, 'learning_rate': 4.9998e-05, 'epoch': 0.02}
05/31/2024 00:09:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5403, 'learning_rate': 4.9996e-05, 'epoch': 0.03}
05/31/2024 00:11:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5590, 'learning_rate': 4.9993e-05, 'epoch': 0.04}
05/31/2024 00:13:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5298, 'learning_rate': 4.9989e-05, 'epoch': 0.05}
05/31/2024 00:15:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5082, 'learning_rate': 4.9984e-05, 'epoch': 0.06}
05/31/2024 00:17:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5165, 'learning_rate': 4.9979e-05, 'epoch': 0.07}
05/31/2024 00:19:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5224, 'learning_rate': 4.9972e-05, 'epoch': 0.08}
05/31/2024 00:21:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5322, 'learning_rate': 4.9965e-05, 'epoch': 0.08}
05/31/2024 00:23:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4886, 'learning_rate': 4.9957e-05, 'epoch': 0.09}
05/31/2024 00:25:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4870, 'learning_rate': 4.9947e-05, 'epoch': 0.10}
05/31/2024 00:27:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4749, 'learning_rate': 4.9937e-05, 'epoch': 0.11}
05/31/2024 00:28:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4769, 'learning_rate': 4.9927e-05, 'epoch': 0.12}
05/31/2024 00:31:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4587, 'learning_rate': 4.9915e-05, 'epoch': 0.13}
05/31/2024 00:33:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4858, 'learning_rate': 4.9902e-05, 'epoch': 0.14}
05/31/2024 00:34:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4485, 'learning_rate': 4.9889e-05, 'epoch': 0.15}
05/31/2024 00:37:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4629, 'learning_rate': 4.9875e-05, 'epoch': 0.16}
05/31/2024 00:38:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4683, 'learning_rate': 4.9859e-05, 'epoch': 0.17}
05/31/2024 00:40:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4625, 'learning_rate': 4.9843e-05, 'epoch': 0.18}
05/31/2024 00:42:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4346, 'learning_rate': 4.9826e-05, 'epoch': 0.19}
05/31/2024 00:42:43 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-100
05/31/2024 00:42:43 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-100/tokenizer_config.json
05/31/2024 00:42:43 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-100/special_tokens_map.json
05/31/2024 00:44:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4871, 'learning_rate': 4.9809e-05, 'epoch': 0.20}
05/31/2024 00:46:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4564, 'learning_rate': 4.9790e-05, 'epoch': 0.21}
05/31/2024 00:48:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4801, 'learning_rate': 4.9771e-05, 'epoch': 0.22}
05/31/2024 00:50:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4678, 'learning_rate': 4.9750e-05, 'epoch': 0.23}
05/31/2024 00:52:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4794, 'learning_rate': 4.9729e-05, 'epoch': 0.23}
05/31/2024 00:54:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4729, 'learning_rate': 4.9707e-05, 'epoch': 0.24}
05/31/2024 00:56:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4754, 'learning_rate': 4.9684e-05, 'epoch': 0.25}
05/31/2024 00:58:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4374, 'learning_rate': 4.9660e-05, 'epoch': 0.26}
05/31/2024 01:00:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4539, 'learning_rate': 4.9636e-05, 'epoch': 0.27}
05/31/2024 01:02:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4713, 'learning_rate': 4.9610e-05, 'epoch': 0.28}
05/31/2024 01:04:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4394, 'learning_rate': 4.9584e-05, 'epoch': 0.29}
05/31/2024 01:06:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4462, 'learning_rate': 4.9557e-05, 'epoch': 0.30}
05/31/2024 01:08:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4750, 'learning_rate': 4.9529e-05, 'epoch': 0.31}
05/31/2024 01:10:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4620, 'learning_rate': 4.9500e-05, 'epoch': 0.32}
05/31/2024 01:12:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4543, 'learning_rate': 4.9470e-05, 'epoch': 0.33}
05/31/2024 01:14:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4420, 'learning_rate': 4.9439e-05, 'epoch': 0.34}
05/31/2024 01:16:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4445, 'learning_rate': 4.9408e-05, 'epoch': 0.35}
05/31/2024 01:18:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4327, 'learning_rate': 4.9376e-05, 'epoch': 0.36}
05/31/2024 01:20:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4804, 'learning_rate': 4.9342e-05, 'epoch': 0.37}
05/31/2024 01:22:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4701, 'learning_rate': 4.9308e-05, 'epoch': 0.38}
05/31/2024 01:22:51 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-200
05/31/2024 01:22:51 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-200/tokenizer_config.json
05/31/2024 01:22:51 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-200/special_tokens_map.json
05/31/2024 01:24:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4513, 'learning_rate': 4.9274e-05, 'epoch': 0.38}
05/31/2024 01:26:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4733, 'learning_rate': 4.9238e-05, 'epoch': 0.39}
05/31/2024 01:28:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4399, 'learning_rate': 4.9201e-05, 'epoch': 0.40}
05/31/2024 01:30:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4483, 'learning_rate': 4.9164e-05, 'epoch': 0.41}
05/31/2024 01:32:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4415, 'learning_rate': 4.9126e-05, 'epoch': 0.42}
05/31/2024 01:34:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4167, 'learning_rate': 4.9087e-05, 'epoch': 0.43}
05/31/2024 01:36:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4625, 'learning_rate': 4.9047e-05, 'epoch': 0.44}
05/31/2024 01:38:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4473, 'learning_rate': 4.9006e-05, 'epoch': 0.45}
05/31/2024 01:40:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4512, 'learning_rate': 4.8965e-05, 'epoch': 0.46}
05/31/2024 01:42:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4469, 'learning_rate': 4.8922e-05, 'epoch': 0.47}
05/31/2024 01:44:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4655, 'learning_rate': 4.8879e-05, 'epoch': 0.48}
05/31/2024 01:46:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4181, 'learning_rate': 4.8835e-05, 'epoch': 0.49}
05/31/2024 01:48:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4295, 'learning_rate': 4.8790e-05, 'epoch': 0.50}
05/31/2024 01:50:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4398, 'learning_rate': 4.8744e-05, 'epoch': 0.51}
05/31/2024 01:52:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4303, 'learning_rate': 4.8698e-05, 'epoch': 0.52}
05/31/2024 01:54:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4462, 'learning_rate': 4.8650e-05, 'epoch': 0.53}
05/31/2024 01:56:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4321, 'learning_rate': 4.8602e-05, 'epoch': 0.53}
05/31/2024 01:58:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4253, 'learning_rate': 4.8553e-05, 'epoch': 0.54}
05/31/2024 02:00:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4438, 'learning_rate': 4.8503e-05, 'epoch': 0.55}
05/31/2024 02:02:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4449, 'learning_rate': 4.8453e-05, 'epoch': 0.56}
05/31/2024 02:02:37 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-300
05/31/2024 02:02:37 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-300/tokenizer_config.json
05/31/2024 02:02:37 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-300/special_tokens_map.json
05/31/2024 02:04:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4372, 'learning_rate': 4.8401e-05, 'epoch': 0.57}
05/31/2024 02:06:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4308, 'learning_rate': 4.8349e-05, 'epoch': 0.58}
05/31/2024 02:08:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4290, 'learning_rate': 4.8296e-05, 'epoch': 0.59}
05/31/2024 02:10:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4755, 'learning_rate': 4.8242e-05, 'epoch': 0.60}
05/31/2024 02:12:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4391, 'learning_rate': 4.8188e-05, 'epoch': 0.61}
05/31/2024 02:14:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4184, 'learning_rate': 4.8132e-05, 'epoch': 0.62}
05/31/2024 02:16:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4677, 'learning_rate': 4.8076e-05, 'epoch': 0.63}
05/31/2024 02:18:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4366, 'learning_rate': 4.8019e-05, 'epoch': 0.64}
05/31/2024 02:20:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4536, 'learning_rate': 4.7961e-05, 'epoch': 0.65}
05/31/2024 02:22:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4678, 'learning_rate': 4.7902e-05, 'epoch': 0.66}
05/31/2024 02:24:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4178, 'learning_rate': 4.7843e-05, 'epoch': 0.67}
05/31/2024 02:26:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4570, 'learning_rate': 4.7782e-05, 'epoch': 0.68}
05/31/2024 02:28:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4098, 'learning_rate': 4.7721e-05, 'epoch': 0.68}
05/31/2024 02:30:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4263, 'learning_rate': 4.7659e-05, 'epoch': 0.69}
05/31/2024 02:32:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4433, 'learning_rate': 4.7597e-05, 'epoch': 0.70}
05/31/2024 02:34:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4369, 'learning_rate': 4.7533e-05, 'epoch': 0.71}
05/31/2024 02:36:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4206, 'learning_rate': 4.7469e-05, 'epoch': 0.72}
05/31/2024 02:38:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4311, 'learning_rate': 4.7404e-05, 'epoch': 0.73}
05/31/2024 02:40:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4471, 'learning_rate': 4.7338e-05, 'epoch': 0.74}
05/31/2024 02:42:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4567, 'learning_rate': 4.7272e-05, 'epoch': 0.75}
05/31/2024 02:42:07 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-400
05/31/2024 02:42:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-400/tokenizer_config.json
05/31/2024 02:42:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-400/special_tokens_map.json
05/31/2024 02:44:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4307, 'learning_rate': 4.7204e-05, 'epoch': 0.76}
05/31/2024 02:46:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4303, 'learning_rate': 4.7136e-05, 'epoch': 0.77}
05/31/2024 02:48:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4266, 'learning_rate': 4.7068e-05, 'epoch': 0.78}
05/31/2024 02:50:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4401, 'learning_rate': 4.6998e-05, 'epoch': 0.79}
05/31/2024 02:52:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4343, 'learning_rate': 4.6928e-05, 'epoch': 0.80}
05/31/2024 02:54:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4205, 'learning_rate': 4.6856e-05, 'epoch': 0.81}
05/31/2024 02:56:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.3997, 'learning_rate': 4.6784e-05, 'epoch': 0.82}
05/31/2024 02:58:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4103, 'learning_rate': 4.6712e-05, 'epoch': 0.83}
05/31/2024 03:00:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4417, 'learning_rate': 4.6638e-05, 'epoch': 0.83}
05/31/2024 03:01:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4285, 'learning_rate': 4.6564e-05, 'epoch': 0.84}
05/31/2024 03:03:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4207, 'learning_rate': 4.6489e-05, 'epoch': 0.85}
05/31/2024 03:05:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4621, 'learning_rate': 4.6414e-05, 'epoch': 0.86}
05/31/2024 03:07:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4201, 'learning_rate': 4.6337e-05, 'epoch': 0.87}
05/31/2024 03:09:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4435, 'learning_rate': 4.6260e-05, 'epoch': 0.88}
05/31/2024 03:11:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4527, 'learning_rate': 4.6182e-05, 'epoch': 0.89}
05/31/2024 03:13:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4268, 'learning_rate': 4.6103e-05, 'epoch': 0.90}
05/31/2024 03:15:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4094, 'learning_rate': 4.6024e-05, 'epoch': 0.91}
05/31/2024 03:17:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4333, 'learning_rate': 4.5944e-05, 'epoch': 0.92}
05/31/2024 03:19:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4821, 'learning_rate': 4.5863e-05, 'epoch': 0.93}
05/31/2024 03:21:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4228, 'learning_rate': 4.5782e-05, 'epoch': 0.94}
05/31/2024 03:21:40 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-500
05/31/2024 03:21:40 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-500/tokenizer_config.json
05/31/2024 03:21:40 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-500/special_tokens_map.json
05/31/2024 03:23:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4333, 'learning_rate': 4.5699e-05, 'epoch': 0.95}
05/31/2024 03:25:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4698, 'learning_rate': 4.5616e-05, 'epoch': 0.96}
05/31/2024 03:27:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4211, 'learning_rate': 4.5533e-05, 'epoch': 0.97}
05/31/2024 03:29:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4183, 'learning_rate': 4.5448e-05, 'epoch': 0.98}
05/31/2024 03:31:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4251, 'learning_rate': 4.5363e-05, 'epoch': 0.98}
05/31/2024 03:33:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.4292, 'learning_rate': 4.5277e-05, 'epoch': 0.99}
05/31/2024 03:35:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4051, 'learning_rate': 4.5191e-05, 'epoch': 1.00}
05/31/2024 03:37:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4027, 'learning_rate': 4.5103e-05, 'epoch': 1.01}
05/31/2024 03:39:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4008, 'learning_rate': 4.5016e-05, 'epoch': 1.02}
05/31/2024 03:41:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4421, 'learning_rate': 4.4927e-05, 'epoch': 1.03}
05/31/2024 03:43:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4124, 'learning_rate': 4.4838e-05, 'epoch': 1.04}
05/31/2024 03:45:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4023, 'learning_rate': 4.4748e-05, 'epoch': 1.05}
05/31/2024 03:47:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4376, 'learning_rate': 4.4657e-05, 'epoch': 1.06}
05/31/2024 03:49:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4421, 'learning_rate': 4.4565e-05, 'epoch': 1.07}
05/31/2024 03:51:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4119, 'learning_rate': 4.4473e-05, 'epoch': 1.08}
05/31/2024 03:53:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4276, 'learning_rate': 4.4381e-05, 'epoch': 1.09}
05/31/2024 03:55:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4258, 'learning_rate': 4.4287e-05, 'epoch': 1.10}
05/31/2024 03:57:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4100, 'learning_rate': 4.4193e-05, 'epoch': 1.11}
05/31/2024 03:59:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4292, 'learning_rate': 4.4098e-05, 'epoch': 1.12}
05/31/2024 04:01:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4079, 'learning_rate': 4.4003e-05, 'epoch': 1.13}
05/31/2024 04:01:30 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-600
05/31/2024 04:01:30 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-600/tokenizer_config.json
05/31/2024 04:01:30 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-600/special_tokens_map.json
05/31/2024 04:03:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4123, 'learning_rate': 4.3907e-05, 'epoch': 1.13}
05/31/2024 04:05:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.3983, 'learning_rate': 4.3810e-05, 'epoch': 1.14}
05/31/2024 04:07:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.3923, 'learning_rate': 4.3713e-05, 'epoch': 1.15}
05/31/2024 04:09:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4020, 'learning_rate': 4.3615e-05, 'epoch': 1.16}
05/31/2024 04:11:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4138, 'learning_rate': 4.3516e-05, 'epoch': 1.17}
05/31/2024 04:13:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4254, 'learning_rate': 4.3417e-05, 'epoch': 1.18}
05/31/2024 04:15:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4174, 'learning_rate': 4.3317e-05, 'epoch': 1.19}
05/31/2024 04:17:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.3954, 'learning_rate': 4.3216e-05, 'epoch': 1.20}
05/31/2024 04:19:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.3944, 'learning_rate': 4.3115e-05, 'epoch': 1.21}
05/31/2024 04:21:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4265, 'learning_rate': 4.3013e-05, 'epoch': 1.22}
05/31/2024 04:23:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4255, 'learning_rate': 4.2911e-05, 'epoch': 1.23}
05/31/2024 04:25:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4240, 'learning_rate': 4.2807e-05, 'epoch': 1.24}
05/31/2024 04:27:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4227, 'learning_rate': 4.2704e-05, 'epoch': 1.25}
05/31/2024 04:29:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4674, 'learning_rate': 4.2599e-05, 'epoch': 1.26}
05/31/2024 04:31:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4365, 'learning_rate': 4.2494e-05, 'epoch': 1.27}
05/31/2024 04:33:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.3918, 'learning_rate': 4.2389e-05, 'epoch': 1.28}
05/31/2024 04:35:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4257, 'learning_rate': 4.2283e-05, 'epoch': 1.28}
05/31/2024 04:37:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4442, 'learning_rate': 4.2176e-05, 'epoch': 1.29}
05/31/2024 04:39:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4292, 'learning_rate': 4.2069e-05, 'epoch': 1.30}
05/31/2024 04:41:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4135, 'learning_rate': 4.1961e-05, 'epoch': 1.31}
05/31/2024 04:41:02 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-700
05/31/2024 04:41:02 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-700/tokenizer_config.json
05/31/2024 04:41:02 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-700/special_tokens_map.json
05/31/2024 04:43:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4163, 'learning_rate': 4.1852e-05, 'epoch': 1.32}
05/31/2024 04:45:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4144, 'learning_rate': 4.1743e-05, 'epoch': 1.33}
05/31/2024 04:47:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.3898, 'learning_rate': 4.1633e-05, 'epoch': 1.34}
05/31/2024 04:48:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4220, 'learning_rate': 4.1523e-05, 'epoch': 1.35}
05/31/2024 04:51:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4279, 'learning_rate': 4.1412e-05, 'epoch': 1.36}
05/31/2024 04:52:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4233, 'learning_rate': 4.1301e-05, 'epoch': 1.37}
05/31/2024 04:54:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4214, 'learning_rate': 4.1189e-05, 'epoch': 1.38}
05/31/2024 04:56:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.3940, 'learning_rate': 4.1076e-05, 'epoch': 1.39}
05/31/2024 04:58:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4283, 'learning_rate': 4.0963e-05, 'epoch': 1.40}
05/31/2024 05:00:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.4164, 'learning_rate': 4.0849e-05, 'epoch': 1.41}
05/31/2024 05:02:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4242, 'learning_rate': 4.0735e-05, 'epoch': 1.42}
05/31/2024 05:04:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4356, 'learning_rate': 4.0620e-05, 'epoch': 1.43}
05/31/2024 05:06:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4283, 'learning_rate': 4.0505e-05, 'epoch': 1.43}
05/31/2024 05:08:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4069, 'learning_rate': 4.0389e-05, 'epoch': 1.44}
05/31/2024 05:10:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4103, 'learning_rate': 4.0273e-05, 'epoch': 1.45}
05/31/2024 05:12:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4243, 'learning_rate': 4.0156e-05, 'epoch': 1.46}
05/31/2024 05:14:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4222, 'learning_rate': 4.0038e-05, 'epoch': 1.47}
05/31/2024 05:16:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4597, 'learning_rate': 3.9920e-05, 'epoch': 1.48}
05/31/2024 05:18:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4049, 'learning_rate': 3.9802e-05, 'epoch': 1.49}
05/31/2024 05:20:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4573, 'learning_rate': 3.9683e-05, 'epoch': 1.50}
05/31/2024 05:20:36 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-800
05/31/2024 05:20:36 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-800/tokenizer_config.json
05/31/2024 05:20:36 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-800/special_tokens_map.json
05/31/2024 05:22:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4064, 'learning_rate': 3.9563e-05, 'epoch': 1.51}
05/31/2024 05:24:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4125, 'learning_rate': 3.9443e-05, 'epoch': 1.52}
05/31/2024 05:26:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.3997, 'learning_rate': 3.9323e-05, 'epoch': 1.53}
05/31/2024 05:28:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4374, 'learning_rate': 3.9202e-05, 'epoch': 1.54}
05/31/2024 05:30:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4273, 'learning_rate': 3.9080e-05, 'epoch': 1.55}
05/31/2024 05:32:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4255, 'learning_rate': 3.8958e-05, 'epoch': 1.56}
05/31/2024 05:34:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.3804, 'learning_rate': 3.8836e-05, 'epoch': 1.57}
05/31/2024 05:36:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4172, 'learning_rate': 3.8713e-05, 'epoch': 1.58}
05/31/2024 05:38:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4132, 'learning_rate': 3.8589e-05, 'epoch': 1.58}
05/31/2024 05:40:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4295, 'learning_rate': 3.8465e-05, 'epoch': 1.59}
05/31/2024 05:42:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4305, 'learning_rate': 3.8341e-05, 'epoch': 1.60}
05/31/2024 05:44:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4114, 'learning_rate': 3.8216e-05, 'epoch': 1.61}
05/31/2024 05:46:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3906, 'learning_rate': 3.8091e-05, 'epoch': 1.62}
05/31/2024 05:47:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4302, 'learning_rate': 3.7965e-05, 'epoch': 1.63}
05/31/2024 05:49:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4044, 'learning_rate': 3.7839e-05, 'epoch': 1.64}
05/31/2024 05:51:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4040, 'learning_rate': 3.7712e-05, 'epoch': 1.65}
05/31/2024 05:54:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4390, 'learning_rate': 3.7585e-05, 'epoch': 1.66}
05/31/2024 05:56:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.3843, 'learning_rate': 3.7457e-05, 'epoch': 1.67}
05/31/2024 05:57:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4189, 'learning_rate': 3.7329e-05, 'epoch': 1.68}
05/31/2024 06:00:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4532, 'learning_rate': 3.7201e-05, 'epoch': 1.69}
05/31/2024 06:00:05 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-900
05/31/2024 06:00:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-900/tokenizer_config.json
05/31/2024 06:00:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-900/special_tokens_map.json
05/31/2024 06:02:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4298, 'learning_rate': 3.7072e-05, 'epoch': 1.70}
05/31/2024 06:04:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4442, 'learning_rate': 3.6943e-05, 'epoch': 1.71}
05/31/2024 06:06:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4263, 'learning_rate': 3.6813e-05, 'epoch': 1.72}
05/31/2024 06:08:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4905, 'learning_rate': 3.6683e-05, 'epoch': 1.73}
05/31/2024 06:10:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.3976, 'learning_rate': 3.6553e-05, 'epoch': 1.73}
05/31/2024 06:12:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4259, 'learning_rate': 3.6422e-05, 'epoch': 1.74}
05/31/2024 06:14:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4184, 'learning_rate': 3.6291e-05, 'epoch': 1.75}
05/31/2024 06:16:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4510, 'learning_rate': 3.6159e-05, 'epoch': 1.76}
05/31/2024 06:18:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4075, 'learning_rate': 3.6027e-05, 'epoch': 1.77}
05/31/2024 06:20:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4320, 'learning_rate': 3.5894e-05, 'epoch': 1.78}
05/31/2024 06:22:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.3979, 'learning_rate': 3.5762e-05, 'epoch': 1.79}
05/31/2024 06:24:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4580, 'learning_rate': 3.5628e-05, 'epoch': 1.80}
05/31/2024 06:26:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4111, 'learning_rate': 3.5495e-05, 'epoch': 1.81}
05/31/2024 06:28:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4276, 'learning_rate': 3.5361e-05, 'epoch': 1.82}
05/31/2024 06:30:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.3976, 'learning_rate': 3.5227e-05, 'epoch': 1.83}
05/31/2024 06:32:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.3993, 'learning_rate': 3.5092e-05, 'epoch': 1.84}
05/31/2024 06:34:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4019, 'learning_rate': 3.4957e-05, 'epoch': 1.85}
05/31/2024 06:36:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.3951, 'learning_rate': 3.4822e-05, 'epoch': 1.86}
05/31/2024 06:38:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4119, 'learning_rate': 3.4686e-05, 'epoch': 1.87}
05/31/2024 06:40:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.3997, 'learning_rate': 3.4550e-05, 'epoch': 1.88}
05/31/2024 06:40:19 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1000
05/31/2024 06:40:19 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1000/tokenizer_config.json
05/31/2024 06:40:19 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1000/special_tokens_map.json
05/31/2024 06:42:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.3949, 'learning_rate': 3.4414e-05, 'epoch': 1.88}
05/31/2024 06:44:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4041, 'learning_rate': 3.4277e-05, 'epoch': 1.89}
05/31/2024 06:46:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4247, 'learning_rate': 3.4140e-05, 'epoch': 1.90}
05/31/2024 06:48:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4060, 'learning_rate': 3.4003e-05, 'epoch': 1.91}
05/31/2024 06:50:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4050, 'learning_rate': 3.3865e-05, 'epoch': 1.92}
05/31/2024 06:52:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4093, 'learning_rate': 3.3727e-05, 'epoch': 1.93}
05/31/2024 06:54:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4090, 'learning_rate': 3.3589e-05, 'epoch': 1.94}
05/31/2024 06:55:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4052, 'learning_rate': 3.3450e-05, 'epoch': 1.95}
05/31/2024 06:57:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4271, 'learning_rate': 3.3312e-05, 'epoch': 1.96}
05/31/2024 06:59:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.3977, 'learning_rate': 3.3172e-05, 'epoch': 1.97}
05/31/2024 07:01:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.3831, 'learning_rate': 3.3033e-05, 'epoch': 1.98}
05/31/2024 07:03:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.3874, 'learning_rate': 3.2893e-05, 'epoch': 1.99}
05/31/2024 07:05:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4092, 'learning_rate': 3.2753e-05, 'epoch': 2.00}
05/31/2024 07:07:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.3795, 'learning_rate': 3.2613e-05, 'epoch': 2.01}
05/31/2024 07:09:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3895, 'learning_rate': 3.2473e-05, 'epoch': 2.02}
05/31/2024 07:12:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4279, 'learning_rate': 3.2332e-05, 'epoch': 2.03}
05/31/2024 07:13:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4056, 'learning_rate': 3.2191e-05, 'epoch': 2.03}
05/31/2024 07:16:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4142, 'learning_rate': 3.2050e-05, 'epoch': 2.04}
05/31/2024 07:17:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4310, 'learning_rate': 3.1908e-05, 'epoch': 2.05}
05/31/2024 07:19:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.3952, 'learning_rate': 3.1767e-05, 'epoch': 2.06}
05/31/2024 07:19:54 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1100
05/31/2024 07:19:54 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1100/tokenizer_config.json
05/31/2024 07:19:54 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1100/special_tokens_map.json
05/31/2024 07:21:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.3717, 'learning_rate': 3.1625e-05, 'epoch': 2.07}
05/31/2024 07:23:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.3920, 'learning_rate': 3.1482e-05, 'epoch': 2.08}
05/31/2024 07:25:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4004, 'learning_rate': 3.1340e-05, 'epoch': 2.09}
05/31/2024 07:27:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4095, 'learning_rate': 3.1197e-05, 'epoch': 2.10}
05/31/2024 07:29:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4169, 'learning_rate': 3.1054e-05, 'epoch': 2.11}
05/31/2024 07:31:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4105, 'learning_rate': 3.0911e-05, 'epoch': 2.12}
05/31/2024 07:33:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4085, 'learning_rate': 3.0768e-05, 'epoch': 2.13}
05/31/2024 07:35:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4704, 'learning_rate': 3.0625e-05, 'epoch': 2.14}
05/31/2024 07:37:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.3913, 'learning_rate': 3.0481e-05, 'epoch': 2.15}
05/31/2024 07:39:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.3904, 'learning_rate': 3.0337e-05, 'epoch': 2.16}
05/31/2024 07:41:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.3899, 'learning_rate': 3.0193e-05, 'epoch': 2.17}
05/31/2024 07:43:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4025, 'learning_rate': 3.0049e-05, 'epoch': 2.18}
05/31/2024 07:45:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4038, 'learning_rate': 2.9904e-05, 'epoch': 2.18}
05/31/2024 07:47:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4396, 'learning_rate': 2.9760e-05, 'epoch': 2.19}
05/31/2024 07:49:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4164, 'learning_rate': 2.9615e-05, 'epoch': 2.20}
05/31/2024 07:51:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4128, 'learning_rate': 2.9470e-05, 'epoch': 2.21}
05/31/2024 07:53:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4096, 'learning_rate': 2.9325e-05, 'epoch': 2.22}
05/31/2024 07:55:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.3859, 'learning_rate': 2.9180e-05, 'epoch': 2.23}
05/31/2024 07:57:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.3905, 'learning_rate': 2.9035e-05, 'epoch': 2.24}
05/31/2024 07:59:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4121, 'learning_rate': 2.8889e-05, 'epoch': 2.25}
05/31/2024 07:59:35 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1200
05/31/2024 07:59:35 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1200/tokenizer_config.json
05/31/2024 07:59:35 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1200/special_tokens_map.json
05/31/2024 08:01:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4000, 'learning_rate': 2.8743e-05, 'epoch': 2.26}
05/31/2024 08:03:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4482, 'learning_rate': 2.8598e-05, 'epoch': 2.27}
05/31/2024 08:05:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.3914, 'learning_rate': 2.8452e-05, 'epoch': 2.28}
05/31/2024 08:07:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.3862, 'learning_rate': 2.8306e-05, 'epoch': 2.29}
05/31/2024 08:09:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4039, 'learning_rate': 2.8160e-05, 'epoch': 2.30}
05/31/2024 08:11:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.3868, 'learning_rate': 2.8013e-05, 'epoch': 2.31}
05/31/2024 08:13:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4118, 'learning_rate': 2.7867e-05, 'epoch': 2.32}
05/31/2024 08:15:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4070, 'learning_rate': 2.7721e-05, 'epoch': 2.33}
05/31/2024 08:17:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4101, 'learning_rate': 2.7574e-05, 'epoch': 2.33}
05/31/2024 08:19:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4205, 'learning_rate': 2.7428e-05, 'epoch': 2.34}
05/31/2024 08:21:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4116, 'learning_rate': 2.7281e-05, 'epoch': 2.35}
05/31/2024 08:23:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.3917, 'learning_rate': 2.7134e-05, 'epoch': 2.36}
05/31/2024 08:25:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4100, 'learning_rate': 2.6987e-05, 'epoch': 2.37}
05/31/2024 08:27:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4449, 'learning_rate': 2.6840e-05, 'epoch': 2.38}
05/31/2024 08:29:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.3831, 'learning_rate': 2.6693e-05, 'epoch': 2.39}
05/31/2024 08:31:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4321, 'learning_rate': 2.6546e-05, 'epoch': 2.40}
05/31/2024 08:33:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4236, 'learning_rate': 2.6399e-05, 'epoch': 2.41}
05/31/2024 08:35:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4029, 'learning_rate': 2.6252e-05, 'epoch': 2.42}
05/31/2024 08:37:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.3794, 'learning_rate': 2.6105e-05, 'epoch': 2.43}
05/31/2024 08:39:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4178, 'learning_rate': 2.5958e-05, 'epoch': 2.44}
05/31/2024 08:39:27 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1300
05/31/2024 08:39:27 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1300/tokenizer_config.json
05/31/2024 08:39:27 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1300/special_tokens_map.json
05/31/2024 08:41:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4496, 'learning_rate': 2.5810e-05, 'epoch': 2.45}
05/31/2024 08:43:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4215, 'learning_rate': 2.5663e-05, 'epoch': 2.46}
05/31/2024 08:45:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.3964, 'learning_rate': 2.5516e-05, 'epoch': 2.47}
05/31/2024 08:47:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.3871, 'learning_rate': 2.5368e-05, 'epoch': 2.48}
05/31/2024 08:49:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4146, 'learning_rate': 2.5221e-05, 'epoch': 2.48}
05/31/2024 08:51:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.3892, 'learning_rate': 2.5074e-05, 'epoch': 2.49}
05/31/2024 08:53:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.3995, 'learning_rate': 2.4926e-05, 'epoch': 2.50}
05/31/2024 08:55:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4230, 'learning_rate': 2.4779e-05, 'epoch': 2.51}
05/31/2024 08:57:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.3845, 'learning_rate': 2.4632e-05, 'epoch': 2.52}
05/31/2024 08:59:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.3854, 'learning_rate': 2.4484e-05, 'epoch': 2.53}
05/31/2024 09:01:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4096, 'learning_rate': 2.4337e-05, 'epoch': 2.54}
05/31/2024 09:03:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4167, 'learning_rate': 2.4190e-05, 'epoch': 2.55}
05/31/2024 09:05:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.3829, 'learning_rate': 2.4042e-05, 'epoch': 2.56}
05/31/2024 09:07:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.3861, 'learning_rate': 2.3895e-05, 'epoch': 2.57}
05/31/2024 09:09:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4080, 'learning_rate': 2.3748e-05, 'epoch': 2.58}
05/31/2024 09:11:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4080, 'learning_rate': 2.3601e-05, 'epoch': 2.59}
05/31/2024 09:13:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.3994, 'learning_rate': 2.3454e-05, 'epoch': 2.60}
05/31/2024 09:15:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.3801, 'learning_rate': 2.3307e-05, 'epoch': 2.61}
05/31/2024 09:17:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4210, 'learning_rate': 2.3160e-05, 'epoch': 2.62}
05/31/2024 09:19:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4058, 'learning_rate': 2.3013e-05, 'epoch': 2.63}
05/31/2024 09:19:45 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1400
05/31/2024 09:19:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1400/tokenizer_config.json
05/31/2024 09:19:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1400/special_tokens_map.json
05/31/2024 09:21:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.3922, 'learning_rate': 2.2866e-05, 'epoch': 2.63}
05/31/2024 09:23:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.3822, 'learning_rate': 2.2719e-05, 'epoch': 2.64}
05/31/2024 09:25:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.3820, 'learning_rate': 2.2572e-05, 'epoch': 2.65}
05/31/2024 09:27:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.3881, 'learning_rate': 2.2426e-05, 'epoch': 2.66}
05/31/2024 09:29:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4004, 'learning_rate': 2.2279e-05, 'epoch': 2.67}
05/31/2024 09:31:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4052, 'learning_rate': 2.2133e-05, 'epoch': 2.68}
05/31/2024 09:33:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4018, 'learning_rate': 2.1987e-05, 'epoch': 2.69}
05/31/2024 09:35:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.3969, 'learning_rate': 2.1840e-05, 'epoch': 2.70}
05/31/2024 09:37:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4210, 'learning_rate': 2.1694e-05, 'epoch': 2.71}
05/31/2024 09:39:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4105, 'learning_rate': 2.1548e-05, 'epoch': 2.72}
05/31/2024 09:41:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.3879, 'learning_rate': 2.1402e-05, 'epoch': 2.73}
05/31/2024 09:43:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4014, 'learning_rate': 2.1257e-05, 'epoch': 2.74}
05/31/2024 09:45:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4116, 'learning_rate': 2.1111e-05, 'epoch': 2.75}
05/31/2024 09:47:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3940, 'learning_rate': 2.0965e-05, 'epoch': 2.76}
05/31/2024 09:49:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.3984, 'learning_rate': 2.0820e-05, 'epoch': 2.77}
05/31/2024 09:51:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.3906, 'learning_rate': 2.0675e-05, 'epoch': 2.78}
05/31/2024 09:53:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4101, 'learning_rate': 2.0530e-05, 'epoch': 2.78}
05/31/2024 09:55:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4056, 'learning_rate': 2.0385e-05, 'epoch': 2.79}
05/31/2024 09:57:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4285, 'learning_rate': 2.0240e-05, 'epoch': 2.80}
05/31/2024 09:59:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3852, 'learning_rate': 2.0096e-05, 'epoch': 2.81}
05/31/2024 09:59:05 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1500
05/31/2024 09:59:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1500/tokenizer_config.json
05/31/2024 09:59:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1500/special_tokens_map.json
05/31/2024 10:01:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.3815, 'learning_rate': 1.9951e-05, 'epoch': 2.82}
05/31/2024 10:03:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4003, 'learning_rate': 1.9807e-05, 'epoch': 2.83}
05/31/2024 10:05:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4190, 'learning_rate': 1.9663e-05, 'epoch': 2.84}
05/31/2024 10:06:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3939, 'learning_rate': 1.9519e-05, 'epoch': 2.85}
05/31/2024 10:08:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4255, 'learning_rate': 1.9375e-05, 'epoch': 2.86}
05/31/2024 10:10:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.3898, 'learning_rate': 1.9232e-05, 'epoch': 2.87}
05/31/2024 10:13:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4040, 'learning_rate': 1.9089e-05, 'epoch': 2.88}
05/31/2024 10:14:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.3834, 'learning_rate': 1.8946e-05, 'epoch': 2.89}
05/31/2024 10:17:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4378, 'learning_rate': 1.8803e-05, 'epoch': 2.90}
05/31/2024 10:19:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.3953, 'learning_rate': 1.8660e-05, 'epoch': 2.91}
05/31/2024 10:20:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4010, 'learning_rate': 1.8518e-05, 'epoch': 2.92}
05/31/2024 10:22:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4101, 'learning_rate': 1.8375e-05, 'epoch': 2.93}
05/31/2024 10:24:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4172, 'learning_rate': 1.8233e-05, 'epoch': 2.93}
05/31/2024 10:26:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4161, 'learning_rate': 1.8092e-05, 'epoch': 2.94}
05/31/2024 10:28:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4291, 'learning_rate': 1.7950e-05, 'epoch': 2.95}
05/31/2024 10:30:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4102, 'learning_rate': 1.7809e-05, 'epoch': 2.96}
05/31/2024 10:32:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.3811, 'learning_rate': 1.7668e-05, 'epoch': 2.97}
05/31/2024 10:34:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4292, 'learning_rate': 1.7527e-05, 'epoch': 2.98}
05/31/2024 10:36:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4238, 'learning_rate': 1.7387e-05, 'epoch': 2.99}
05/31/2024 10:38:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4033, 'learning_rate': 1.7247e-05, 'epoch': 3.00}
05/31/2024 10:38:49 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1600
05/31/2024 10:38:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1600/tokenizer_config.json
05/31/2024 10:38:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1600/special_tokens_map.json
05/31/2024 10:40:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4157, 'learning_rate': 1.7107e-05, 'epoch': 3.01}
05/31/2024 10:42:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.3900, 'learning_rate': 1.6967e-05, 'epoch': 3.02}
05/31/2024 10:44:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4134, 'learning_rate': 1.6828e-05, 'epoch': 3.03}
05/31/2024 10:46:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4095, 'learning_rate': 1.6688e-05, 'epoch': 3.04}
05/31/2024 10:48:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4060, 'learning_rate': 1.6550e-05, 'epoch': 3.05}
05/31/2024 10:50:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.3721, 'learning_rate': 1.6411e-05, 'epoch': 3.06}
05/31/2024 10:52:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.3739, 'learning_rate': 1.6273e-05, 'epoch': 3.07}
05/31/2024 10:54:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4147, 'learning_rate': 1.6135e-05, 'epoch': 3.08}
05/31/2024 10:56:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.3684, 'learning_rate': 1.5997e-05, 'epoch': 3.08}
05/31/2024 10:58:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.3977, 'learning_rate': 1.5860e-05, 'epoch': 3.09}
05/31/2024 11:00:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4219, 'learning_rate': 1.5723e-05, 'epoch': 3.10}
05/31/2024 11:02:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.3957, 'learning_rate': 1.5586e-05, 'epoch': 3.11}
05/31/2024 11:04:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.3977, 'learning_rate': 1.5450e-05, 'epoch': 3.12}
05/31/2024 11:06:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.3842, 'learning_rate': 1.5314e-05, 'epoch': 3.13}
05/31/2024 11:08:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.3607, 'learning_rate': 1.5178e-05, 'epoch': 3.14}
05/31/2024 11:10:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4065, 'learning_rate': 1.5043e-05, 'epoch': 3.15}
05/31/2024 11:12:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.3856, 'learning_rate': 1.4908e-05, 'epoch': 3.16}
05/31/2024 11:14:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4035, 'learning_rate': 1.4773e-05, 'epoch': 3.17}
05/31/2024 11:16:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.3798, 'learning_rate': 1.4639e-05, 'epoch': 3.18}
05/31/2024 11:18:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4021, 'learning_rate': 1.4505e-05, 'epoch': 3.19}
05/31/2024 11:18:24 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1700
05/31/2024 11:18:24 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1700/tokenizer_config.json
05/31/2024 11:18:24 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1700/special_tokens_map.json
05/31/2024 11:20:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.3648, 'learning_rate': 1.4372e-05, 'epoch': 3.20}
05/31/2024 11:22:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.3895, 'learning_rate': 1.4238e-05, 'epoch': 3.21}
05/31/2024 11:24:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4117, 'learning_rate': 1.4106e-05, 'epoch': 3.22}
05/31/2024 11:26:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4201, 'learning_rate': 1.3973e-05, 'epoch': 3.23}
05/31/2024 11:28:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.3906, 'learning_rate': 1.3841e-05, 'epoch': 3.23}
05/31/2024 11:30:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.3695, 'learning_rate': 1.3709e-05, 'epoch': 3.24}
05/31/2024 11:32:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.3650, 'learning_rate': 1.3578e-05, 'epoch': 3.25}
05/31/2024 11:34:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4157, 'learning_rate': 1.3447e-05, 'epoch': 3.26}
05/31/2024 11:36:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4005, 'learning_rate': 1.3317e-05, 'epoch': 3.27}
05/31/2024 11:38:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.3792, 'learning_rate': 1.3187e-05, 'epoch': 3.28}
05/31/2024 11:40:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4093, 'learning_rate': 1.3057e-05, 'epoch': 3.29}
05/31/2024 11:42:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.3647, 'learning_rate': 1.2928e-05, 'epoch': 3.30}
05/31/2024 11:44:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.3740, 'learning_rate': 1.2799e-05, 'epoch': 3.31}
05/31/2024 11:46:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.3888, 'learning_rate': 1.2671e-05, 'epoch': 3.32}
05/31/2024 11:48:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.3969, 'learning_rate': 1.2543e-05, 'epoch': 3.33}
05/31/2024 11:50:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4196, 'learning_rate': 1.2415e-05, 'epoch': 3.34}
05/31/2024 11:52:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.3885, 'learning_rate': 1.2288e-05, 'epoch': 3.35}
05/31/2024 11:54:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4118, 'learning_rate': 1.2161e-05, 'epoch': 3.36}
05/31/2024 11:56:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4124, 'learning_rate': 1.2035e-05, 'epoch': 3.37}
05/31/2024 11:58:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3940, 'learning_rate': 1.1909e-05, 'epoch': 3.38}
05/31/2024 11:58:05 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1800
05/31/2024 11:58:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1800/tokenizer_config.json
05/31/2024 11:58:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1800/special_tokens_map.json
05/31/2024 12:00:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.3996, 'learning_rate': 1.1784e-05, 'epoch': 3.38}
05/31/2024 12:02:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4036, 'learning_rate': 1.1659e-05, 'epoch': 3.39}
05/31/2024 12:04:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.3976, 'learning_rate': 1.1535e-05, 'epoch': 3.40}
05/31/2024 12:06:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4282, 'learning_rate': 1.1411e-05, 'epoch': 3.41}
05/31/2024 12:08:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.3807, 'learning_rate': 1.1287e-05, 'epoch': 3.42}
05/31/2024 12:10:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.3761, 'learning_rate': 1.1164e-05, 'epoch': 3.43}
05/31/2024 12:12:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4152, 'learning_rate': 1.1042e-05, 'epoch': 3.44}
05/31/2024 12:14:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.3630, 'learning_rate': 1.0920e-05, 'epoch': 3.45}
05/31/2024 12:16:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.3789, 'learning_rate': 1.0798e-05, 'epoch': 3.46}
05/31/2024 12:18:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4130, 'learning_rate': 1.0677e-05, 'epoch': 3.47}
05/31/2024 12:20:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3812, 'learning_rate': 1.0557e-05, 'epoch': 3.48}
05/31/2024 12:22:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.3949, 'learning_rate': 1.0437e-05, 'epoch': 3.49}
05/31/2024 12:24:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.3649, 'learning_rate': 1.0317e-05, 'epoch': 3.50}
05/31/2024 12:26:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.3942, 'learning_rate': 1.0198e-05, 'epoch': 3.51}
05/31/2024 12:27:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4007, 'learning_rate': 1.0080e-05, 'epoch': 3.52}
05/31/2024 12:29:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.3773, 'learning_rate': 9.9618e-06, 'epoch': 3.53}
05/31/2024 12:31:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4196, 'learning_rate': 9.8444e-06, 'epoch': 3.53}
05/31/2024 12:33:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.3792, 'learning_rate': 9.7274e-06, 'epoch': 3.54}
05/31/2024 12:35:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4036, 'learning_rate': 9.6110e-06, 'epoch': 3.55}
05/31/2024 12:37:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.3955, 'learning_rate': 9.4952e-06, 'epoch': 3.56}
05/31/2024 12:37:53 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1900
05/31/2024 12:37:53 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1900/tokenizer_config.json
05/31/2024 12:37:53 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-1900/special_tokens_map.json
05/31/2024 12:39:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4056, 'learning_rate': 9.3799e-06, 'epoch': 3.57}
05/31/2024 12:41:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4144, 'learning_rate': 9.2651e-06, 'epoch': 3.58}
05/31/2024 12:43:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.3900, 'learning_rate': 9.1508e-06, 'epoch': 3.59}
05/31/2024 12:45:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.3976, 'learning_rate': 9.0372e-06, 'epoch': 3.60}
05/31/2024 12:47:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.3774, 'learning_rate': 8.9240e-06, 'epoch': 3.61}
05/31/2024 12:50:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4209, 'learning_rate': 8.8115e-06, 'epoch': 3.62}
05/31/2024 12:52:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4280, 'learning_rate': 8.6995e-06, 'epoch': 3.63}
05/31/2024 12:54:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4121, 'learning_rate': 8.5880e-06, 'epoch': 3.64}
05/31/2024 12:56:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4191, 'learning_rate': 8.4772e-06, 'epoch': 3.65}
05/31/2024 12:58:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4170, 'learning_rate': 8.3669e-06, 'epoch': 3.66}
05/31/2024 13:00:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4047, 'learning_rate': 8.2571e-06, 'epoch': 3.67}
05/31/2024 13:02:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4119, 'learning_rate': 8.1480e-06, 'epoch': 3.68}
05/31/2024 13:03:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.3908, 'learning_rate': 8.0395e-06, 'epoch': 3.68}
05/31/2024 13:06:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4636, 'learning_rate': 7.9315e-06, 'epoch': 3.69}
05/31/2024 13:08:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.3925, 'learning_rate': 7.8241e-06, 'epoch': 3.70}
05/31/2024 13:10:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.3957, 'learning_rate': 7.7173e-06, 'epoch': 3.71}
05/31/2024 13:12:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4004, 'learning_rate': 7.6112e-06, 'epoch': 3.72}
05/31/2024 13:14:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.3945, 'learning_rate': 7.5056e-06, 'epoch': 3.73}
05/31/2024 13:16:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4112, 'learning_rate': 7.4006e-06, 'epoch': 3.74}
05/31/2024 13:18:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4123, 'learning_rate': 7.2963e-06, 'epoch': 3.75}
05/31/2024 13:18:36 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2000
05/31/2024 13:18:36 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2000/tokenizer_config.json
05/31/2024 13:18:36 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2000/special_tokens_map.json
05/31/2024 13:20:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4225, 'learning_rate': 7.1926e-06, 'epoch': 3.76}
05/31/2024 13:22:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.3601, 'learning_rate': 7.0895e-06, 'epoch': 3.77}
05/31/2024 13:24:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.3654, 'learning_rate': 6.9870e-06, 'epoch': 3.78}
05/31/2024 13:26:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4040, 'learning_rate': 6.8851e-06, 'epoch': 3.79}
05/31/2024 13:28:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.4015, 'learning_rate': 6.7839e-06, 'epoch': 3.80}
05/31/2024 13:30:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.3995, 'learning_rate': 6.6833e-06, 'epoch': 3.81}
05/31/2024 13:32:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4102, 'learning_rate': 6.5833e-06, 'epoch': 3.82}
05/31/2024 13:34:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.3872, 'learning_rate': 6.4840e-06, 'epoch': 3.83}
05/31/2024 13:36:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4137, 'learning_rate': 6.3853e-06, 'epoch': 3.83}
05/31/2024 13:38:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.3769, 'learning_rate': 6.2872e-06, 'epoch': 3.84}
05/31/2024 13:40:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4172, 'learning_rate': 6.1898e-06, 'epoch': 3.85}
05/31/2024 13:42:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.3964, 'learning_rate': 6.0931e-06, 'epoch': 3.86}
05/31/2024 13:44:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.3858, 'learning_rate': 5.9970e-06, 'epoch': 3.87}
05/31/2024 13:46:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4061, 'learning_rate': 5.9016e-06, 'epoch': 3.88}
05/31/2024 13:48:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4316, 'learning_rate': 5.8069e-06, 'epoch': 3.89}
05/31/2024 13:50:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.3980, 'learning_rate': 5.7128e-06, 'epoch': 3.90}
05/31/2024 13:52:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.3819, 'learning_rate': 5.6194e-06, 'epoch': 3.91}
05/31/2024 13:54:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.3873, 'learning_rate': 5.5266e-06, 'epoch': 3.92}
05/31/2024 13:56:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4210, 'learning_rate': 5.4345e-06, 'epoch': 3.93}
05/31/2024 13:58:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4135, 'learning_rate': 5.3432e-06, 'epoch': 3.94}
05/31/2024 13:58:13 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2100
05/31/2024 13:58:13 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2100/tokenizer_config.json
05/31/2024 13:58:13 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2100/special_tokens_map.json
05/31/2024 14:00:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.3910, 'learning_rate': 5.2524e-06, 'epoch': 3.95}
05/31/2024 14:02:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4018, 'learning_rate': 5.1624e-06, 'epoch': 3.96}
05/31/2024 14:04:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.3860, 'learning_rate': 5.0731e-06, 'epoch': 3.97}
05/31/2024 14:06:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4180, 'learning_rate': 4.9845e-06, 'epoch': 3.98}
05/31/2024 14:08:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.3901, 'learning_rate': 4.8965e-06, 'epoch': 3.98}
05/31/2024 14:10:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.3967, 'learning_rate': 4.8093e-06, 'epoch': 3.99}
05/31/2024 14:12:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4028, 'learning_rate': 4.7227e-06, 'epoch': 4.00}
05/31/2024 14:14:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4171, 'learning_rate': 4.6369e-06, 'epoch': 4.01}
05/31/2024 14:16:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.3881, 'learning_rate': 4.5518e-06, 'epoch': 4.02}
05/31/2024 14:18:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4012, 'learning_rate': 4.4673e-06, 'epoch': 4.03}
05/31/2024 14:20:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3872, 'learning_rate': 4.3836e-06, 'epoch': 4.04}
05/31/2024 14:22:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.3946, 'learning_rate': 4.3006e-06, 'epoch': 4.05}
05/31/2024 14:24:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4011, 'learning_rate': 4.2184e-06, 'epoch': 4.06}
05/31/2024 14:26:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4089, 'learning_rate': 4.1368e-06, 'epoch': 4.07}
05/31/2024 14:27:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.3993, 'learning_rate': 4.0560e-06, 'epoch': 4.08}
05/31/2024 14:29:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.3773, 'learning_rate': 3.9759e-06, 'epoch': 4.09}
05/31/2024 14:31:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.3855, 'learning_rate': 3.8965e-06, 'epoch': 4.10}
05/31/2024 14:33:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4134, 'learning_rate': 3.8179e-06, 'epoch': 4.11}
05/31/2024 14:35:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4058, 'learning_rate': 3.7400e-06, 'epoch': 4.12}
05/31/2024 14:37:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.3808, 'learning_rate': 3.6629e-06, 'epoch': 4.13}
05/31/2024 14:37:51 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2200
05/31/2024 14:37:51 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2200/tokenizer_config.json
05/31/2024 14:37:51 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2200/special_tokens_map.json
05/31/2024 14:39:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.3882, 'learning_rate': 3.5864e-06, 'epoch': 4.14}
05/31/2024 14:41:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.3846, 'learning_rate': 3.5108e-06, 'epoch': 4.14}
05/31/2024 14:43:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.3876, 'learning_rate': 3.4358e-06, 'epoch': 4.15}
05/31/2024 14:45:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.3866, 'learning_rate': 3.3617e-06, 'epoch': 4.16}
05/31/2024 14:47:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.3884, 'learning_rate': 3.2882e-06, 'epoch': 4.17}
05/31/2024 14:49:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4142, 'learning_rate': 3.2156e-06, 'epoch': 4.18}
05/31/2024 14:51:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4048, 'learning_rate': 3.1436e-06, 'epoch': 4.19}
05/31/2024 14:53:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4244, 'learning_rate': 3.0725e-06, 'epoch': 4.20}
05/31/2024 14:55:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.3679, 'learning_rate': 3.0021e-06, 'epoch': 4.21}
05/31/2024 14:57:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4163, 'learning_rate': 2.9325e-06, 'epoch': 4.22}
05/31/2024 14:59:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.3861, 'learning_rate': 2.8636e-06, 'epoch': 4.23}
05/31/2024 15:01:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.3841, 'learning_rate': 2.7955e-06, 'epoch': 4.24}
05/31/2024 15:03:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.3884, 'learning_rate': 2.7282e-06, 'epoch': 4.25}
05/31/2024 15:05:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4325, 'learning_rate': 2.6616e-06, 'epoch': 4.26}
05/31/2024 15:07:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.3927, 'learning_rate': 2.5959e-06, 'epoch': 4.27}
05/31/2024 15:09:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.3879, 'learning_rate': 2.5309e-06, 'epoch': 4.28}
05/31/2024 15:11:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4045, 'learning_rate': 2.4667e-06, 'epoch': 4.29}
05/31/2024 15:13:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4215, 'learning_rate': 2.4032e-06, 'epoch': 4.29}
05/31/2024 15:15:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.3735, 'learning_rate': 2.3406e-06, 'epoch': 4.30}
05/31/2024 15:17:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.3858, 'learning_rate': 2.2787e-06, 'epoch': 4.31}
05/31/2024 15:17:26 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2300
05/31/2024 15:17:26 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2300/tokenizer_config.json
05/31/2024 15:17:26 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2300/special_tokens_map.json
05/31/2024 15:19:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.3568, 'learning_rate': 2.2176e-06, 'epoch': 4.32}
05/31/2024 15:21:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.3850, 'learning_rate': 2.1574e-06, 'epoch': 4.33}
05/31/2024 15:23:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.3884, 'learning_rate': 2.0979e-06, 'epoch': 4.34}
05/31/2024 15:25:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.3690, 'learning_rate': 2.0392e-06, 'epoch': 4.35}
05/31/2024 15:27:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.3680, 'learning_rate': 1.9813e-06, 'epoch': 4.36}
05/31/2024 15:29:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.3907, 'learning_rate': 1.9242e-06, 'epoch': 4.37}
05/31/2024 15:31:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.3816, 'learning_rate': 1.8679e-06, 'epoch': 4.38}
05/31/2024 15:33:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.3577, 'learning_rate': 1.8124e-06, 'epoch': 4.39}
05/31/2024 15:35:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.3922, 'learning_rate': 1.7578e-06, 'epoch': 4.40}
05/31/2024 15:37:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.3870, 'learning_rate': 1.7039e-06, 'epoch': 4.41}
05/31/2024 15:39:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4288, 'learning_rate': 1.6508e-06, 'epoch': 4.42}
05/31/2024 15:41:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.3881, 'learning_rate': 1.5986e-06, 'epoch': 4.43}
05/31/2024 15:43:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4099, 'learning_rate': 1.5471e-06, 'epoch': 4.44}
05/31/2024 15:45:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.3691, 'learning_rate': 1.4965e-06, 'epoch': 4.44}
05/31/2024 15:47:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.3807, 'learning_rate': 1.4467e-06, 'epoch': 4.45}
05/31/2024 15:49:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.3943, 'learning_rate': 1.3977e-06, 'epoch': 4.46}
05/31/2024 15:51:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.3941, 'learning_rate': 1.3495e-06, 'epoch': 4.47}
05/31/2024 15:53:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.3850, 'learning_rate': 1.3022e-06, 'epoch': 4.48}
05/31/2024 15:55:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4137, 'learning_rate': 1.2557e-06, 'epoch': 4.49}
05/31/2024 15:57:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.3827, 'learning_rate': 1.2100e-06, 'epoch': 4.50}
05/31/2024 15:57:19 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2400
05/31/2024 15:57:19 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2400/tokenizer_config.json
05/31/2024 15:57:19 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2400/special_tokens_map.json
05/31/2024 15:59:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.3695, 'learning_rate': 1.1651e-06, 'epoch': 4.51}
05/31/2024 16:01:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4440, 'learning_rate': 1.1210e-06, 'epoch': 4.52}
05/31/2024 16:03:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.3816, 'learning_rate': 1.0778e-06, 'epoch': 4.53}
05/31/2024 16:05:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.3774, 'learning_rate': 1.0354e-06, 'epoch': 4.54}
05/31/2024 16:07:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4021, 'learning_rate': 9.9389e-07, 'epoch': 4.55}
05/31/2024 16:08:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3901, 'learning_rate': 9.5317e-07, 'epoch': 4.56}
05/31/2024 16:10:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4357, 'learning_rate': 9.1329e-07, 'epoch': 4.57}
05/31/2024 16:12:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3877, 'learning_rate': 8.7424e-07, 'epoch': 4.58}
05/31/2024 16:14:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3994, 'learning_rate': 8.3604e-07, 'epoch': 4.59}
05/31/2024 16:17:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4222, 'learning_rate': 7.9867e-07, 'epoch': 4.59}
05/31/2024 16:19:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.3991, 'learning_rate': 7.6214e-07, 'epoch': 4.60}
05/31/2024 16:21:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.3690, 'learning_rate': 7.2645e-07, 'epoch': 4.61}
05/31/2024 16:22:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3856, 'learning_rate': 6.9161e-07, 'epoch': 4.62}
05/31/2024 16:25:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4261, 'learning_rate': 6.5761e-07, 'epoch': 4.63}
05/31/2024 16:27:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.3831, 'learning_rate': 6.2446e-07, 'epoch': 4.64}
05/31/2024 16:29:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4014, 'learning_rate': 5.9216e-07, 'epoch': 4.65}
05/31/2024 16:31:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.3954, 'learning_rate': 5.6070e-07, 'epoch': 4.66}
05/31/2024 16:33:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4065, 'learning_rate': 5.3009e-07, 'epoch': 4.67}
05/31/2024 16:35:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.3947, 'learning_rate': 5.0033e-07, 'epoch': 4.68}
05/31/2024 16:37:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4113, 'learning_rate': 4.7143e-07, 'epoch': 4.69}
05/31/2024 16:37:04 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2500
05/31/2024 16:37:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2500/tokenizer_config.json
05/31/2024 16:37:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2500/special_tokens_map.json
05/31/2024 16:39:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3972, 'learning_rate': 4.4337e-07, 'epoch': 4.70}
05/31/2024 16:41:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4381, 'learning_rate': 4.1617e-07, 'epoch': 4.71}
05/31/2024 16:43:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.3794, 'learning_rate': 3.8982e-07, 'epoch': 4.72}
05/31/2024 16:45:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4074, 'learning_rate': 3.6433e-07, 'epoch': 4.73}
05/31/2024 16:47:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.3893, 'learning_rate': 3.3969e-07, 'epoch': 4.74}
05/31/2024 16:49:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.3983, 'learning_rate': 3.1591e-07, 'epoch': 4.74}
05/31/2024 16:51:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3846, 'learning_rate': 2.9299e-07, 'epoch': 4.75}
05/31/2024 16:53:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.3793, 'learning_rate': 2.7093e-07, 'epoch': 4.76}
05/31/2024 16:55:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4213, 'learning_rate': 2.4972e-07, 'epoch': 4.77}
05/31/2024 16:57:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.3905, 'learning_rate': 2.2937e-07, 'epoch': 4.78}
05/31/2024 16:59:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4093, 'learning_rate': 2.0989e-07, 'epoch': 4.79}
05/31/2024 17:01:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4119, 'learning_rate': 1.9127e-07, 'epoch': 4.80}
05/31/2024 17:03:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.3885, 'learning_rate': 1.7351e-07, 'epoch': 4.81}
05/31/2024 17:05:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.3872, 'learning_rate': 1.5661e-07, 'epoch': 4.82}
05/31/2024 17:07:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4040, 'learning_rate': 1.4057e-07, 'epoch': 4.83}
05/31/2024 17:09:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4079, 'learning_rate': 1.2540e-07, 'epoch': 4.84}
05/31/2024 17:11:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.3629, 'learning_rate': 1.1109e-07, 'epoch': 4.85}
05/31/2024 17:13:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.3688, 'learning_rate': 9.7646e-08, 'epoch': 4.86}
05/31/2024 17:15:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.3782, 'learning_rate': 8.5068e-08, 'epoch': 4.87}
05/31/2024 17:16:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4003, 'learning_rate': 7.3355e-08, 'epoch': 4.88}
05/31/2024 17:16:59 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2600
05/31/2024 17:16:59 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2600/tokenizer_config.json
05/31/2024 17:16:59 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/checkpoint-2600/special_tokens_map.json
05/31/2024 17:19:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4175, 'learning_rate': 6.2508e-08, 'epoch': 4.89}
05/31/2024 17:20:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.3857, 'learning_rate': 5.2528e-08, 'epoch': 4.89}
05/31/2024 17:22:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3995, 'learning_rate': 4.3414e-08, 'epoch': 4.90}
05/31/2024 17:24:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.3690, 'learning_rate': 3.5167e-08, 'epoch': 4.91}
05/31/2024 17:26:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.3704, 'learning_rate': 2.7788e-08, 'epoch': 4.92}
05/31/2024 17:28:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4038, 'learning_rate': 2.1276e-08, 'epoch': 4.93}
05/31/2024 17:31:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4069, 'learning_rate': 1.5632e-08, 'epoch': 4.94}
05/31/2024 17:33:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4016, 'learning_rate': 1.0856e-08, 'epoch': 4.95}
05/31/2024 17:34:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3734, 'learning_rate': 6.9479e-09, 'epoch': 4.96}
05/31/2024 17:36:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.3704, 'learning_rate': 3.9083e-09, 'epoch': 4.97}
05/31/2024 17:39:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4141, 'learning_rate': 1.7370e-09, 'epoch': 4.98}
05/31/2024 17:41:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4006, 'learning_rate': 4.3426e-10, 'epoch': 4.99}
05/31/2024 17:43:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.3921, 'learning_rate': 0.0000e+00, 'epoch': 5.00}
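[Editor's note] The learning rate hitting exactly zero at epoch 5.00 is consistent with cosine annealing over the full run. As a sanity check, the logged values can be reproduced from an inferred peak LR of 5e-5; the peak is never printed in this log, so treat it as a fitted assumption:

```python
# Sketch: reproduce a few logged learning rates under an assumed cosine
# schedule decaying from 5e-5 to 0 over 5 epochs, with no warmup.
import math

PEAK_LR, TOTAL_EPOCHS = 5e-5, 5.0  # peak LR inferred by fitting the log

def cosine_lr(epoch: float) -> float:
    """Cosine annealing from PEAK_LR down to zero at TOTAL_EPOCHS."""
    return 0.5 * PEAK_LR * (1.0 + math.cos(math.pi * epoch / TOTAL_EPOCHS))

for epoch in (3.08, 4.50, 5.00):
    print(f"epoch {epoch:.2f}: lr ~ {cosine_lr(epoch):.4e}")
# epoch 3.08: lr ~ 1.6090e-05  (log: 1.6135e-05; gap from epoch rounding)
# epoch 4.50: lr ~ 1.2236e-06  (log: 1.2100e-06; same cause)
# epoch 5.00: lr ~ 0.0000e+00  (log: 0.0000e+00)
```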
05/31/2024 17:43:06 - INFO - transformers.trainer - Training completed. Do not forget to share your model on huggingface.co/models =)
05/31/2024 17:43:06 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct
05/31/2024 17:43:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/tokenizer_config.json
05/31/2024 17:43:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct/special_tokens_map.json
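[Editor's note] With the final weights and tokenizer files written above, the output directory can be loaded back like any local checkpoint. A minimal sketch, assuming full model weights were saved there; if this was an adapter-only (e.g. LoRA) run, the directory would instead need peft-style loading:

```python
# Loading sketch; assumes the output dir holds full model weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "/datas/wangm/LLM4LangGPT/output/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(path)
# trust_remote_code is needed because Phi-3 ships custom modeling code
model = AutoModelForCausalLM.from_pretrained(path, trust_remote_code=True)
```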
05/31/2024 17:43:06 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
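[Editor's note] The dropped result above is expected: transformers keeps a model-card result only when it names a dataset and metrics alongside the task, and this run's summary carried the task alone. A hedged sketch of the shape that would have been kept; the dataset name is a placeholder and the metric value simply reuses the last logged training loss for illustration:

```python
# Illustrative only: dataset name and metric value are placeholders,
# not read from the actual run.
kept_result = {
    "task": {"name": "Causal Language Modeling", "type": "text-generation"},
    "dataset": {"name": "langgpt_alpaca", "type": "langgpt_alpaca"},
    "metrics": [{"name": "Training Loss", "type": "loss", "value": 0.3921}],
}
```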