05/30/2024 13:46:05 - INFO - transformers.tokenization_utils_base - loading file tokenizer.model
05/30/2024 13:46:05 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json
05/30/2024 13:46:05 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json
05/30/2024 13:46:05 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json
05/30/2024 13:46:05 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json
05/30/2024 13:46:05 - WARNING - transformers.models.llama.tokenization_llama_fast - You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
05/30/2024 13:46:05 - INFO - llmtuner.data.template - Replace eos token: <|im_end|>
05/30/2024 13:46:05 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/LangGPT_community.jsonl...
05/30/2024 13:46:05 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/30/2024 13:46:06 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_alpaca.jsonl...
05/30/2024 13:46:06 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/30/2024 13:46:07 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_seed.jsonl...
05/30/2024 13:46:07 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/30/2024 13:46:09 - INFO - transformers.configuration_utils - loading configuration file /datas/huggingface/Yi-1.5-6B-Chat/config.json
05/30/2024 13:46:09 - INFO - transformers.configuration_utils - Model config LlamaConfig {
  "_name_or_path": "/datas/huggingface/Yi-1.5-6B-Chat",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 4,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 5000000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": false,
  "vocab_size": 64000
}
05/30/2024 13:46:09 - INFO - transformers.modeling_utils - loading weights file /datas/huggingface/Yi-1.5-6B-Chat/model.safetensors.index.json
05/30/2024 13:46:09 - INFO - transformers.modeling_utils - Instantiating LlamaForCausalLM model under default dtype torch.float16.
05/30/2024 13:46:09 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "use_cache": false
}
05/30/2024 13:46:13 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing LlamaForCausalLM.
05/30/2024 13:46:13 - INFO - transformers.modeling_utils - All the weights of LlamaForCausalLM were initialized from the model checkpoint at /datas/huggingface/Yi-1.5-6B-Chat. If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
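The three checksum warnings above mean the dataset entries in dataset_info.json carry no SHA-1 field, so llmtuner cannot verify the data files. A minimal sketch of computing and recording such a hash, assuming LLaMA-Factory's "file_name"/"file_sha1" layout for dataset_info.json (the dataset key "langgpt_community" is hypothetical):

    import hashlib
    import json

    def file_sha1(path: str) -> str:
        # Stream the file so large JSONL corpora do not need to fit in memory.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical entry; key name and layout assume LLaMA-Factory's dataset_info.json format.
    entry = {
        "langgpt_community": {
            "file_name": "LangGPT_community.jsonl",
            "file_sha1": file_sha1("/datas/wangm/LLM4LangGPT/constructed_datasets/LangGPT_community.jsonl"),
        }
    }
    print(json.dumps(entry, indent=2))

With the hash present, the loader's checksum verification would pass instead of emitting the warning.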
05/30/2024 13:46:13 - INFO - transformers.generation.configuration_utils - loading configuration file /datas/huggingface/Yi-1.5-6B-Chat/generation_config.json
05/30/2024 13:46:13 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0
}
05/30/2024 13:46:13 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/30/2024 13:46:13 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
05/30/2024 13:46:13 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
05/30/2024 13:46:13 - INFO - llmtuner.model.loader - trainable params: 3276800 || all params: 6064312320 || trainable%: 0.0540
05/30/2024 13:46:13 - INFO - transformers.trainer - Using auto half precision backend
05/30/2024 13:46:13 - INFO - transformers.trainer - ***** Running training *****
05/30/2024 13:46:13 - INFO - transformers.trainer - Num examples = 8,531
05/30/2024 13:46:13 - INFO - transformers.trainer - Num Epochs = 5
05/30/2024 13:46:13 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/30/2024 13:46:13 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/30/2024 13:46:13 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/30/2024 13:46:13 - INFO - transformers.trainer - Total optimization steps = 2,665
05/30/2024 13:46:13 - INFO - transformers.trainer - Number of trainable parameters = 3,276,800
05/30/2024 13:47:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.0208, 'learning_rate': 5.0000e-05, 'epoch': 0.01}
05/30/2024 13:48:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.9376, 'learning_rate': 4.9998e-05, 'epoch': 0.02}
05/30/2024 13:49:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.8496, 'learning_rate': 4.9996e-05, 'epoch': 0.03}
05/30/2024 13:50:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.8758, 'learning_rate': 4.9993e-05, 'epoch': 0.04}
05/30/2024 13:51:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.8192, 'learning_rate': 4.9989e-05, 'epoch': 0.05}
05/30/2024 13:52:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.7655, 'learning_rate': 4.9984e-05, 'epoch': 0.06}
05/30/2024 13:53:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.7674, 'learning_rate': 4.9979e-05, 'epoch': 0.07}
05/30/2024 13:54:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.7761, 'learning_rate': 4.9972e-05, 'epoch': 0.08}
05/30/2024 13:55:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7728, 'learning_rate': 4.9965e-05, 'epoch': 0.08}
05/30/2024 13:56:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.7164, 'learning_rate': 4.9957e-05, 'epoch': 0.09}
05/30/2024 13:57:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7153, 'learning_rate': 4.9947e-05, 'epoch': 0.10}
05/30/2024 13:58:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6907, 'learning_rate': 4.9937e-05, 'epoch': 0.11}
05/30/2024 13:59:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6808, 'learning_rate': 4.9927e-05, 'epoch': 0.12}
05/30/2024 14:01:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6640, 'learning_rate': 4.9915e-05, 'epoch': 0.13}
05/30/2024 14:02:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.7062, 'learning_rate': 4.9902e-05, 'epoch': 0.14}
05/30/2024 14:03:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6445, 'learning_rate': 4.9889e-05, 'epoch': 0.15}
05/30/2024 14:04:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6601, 'learning_rate': 4.9875e-05, 'epoch': 0.16}
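The trainer banner above is internally consistent, and the trainable-parameter count pins down the adapter shape. A sketch of the bookkeeping, assuming rank-8 LoRA adapters on q_proj and v_proj (a common LLaMA-Factory default; the rank and target modules are not stated in the log itself):

    # Effective batch and step count from the logged hyperparameters.
    examples, per_device, grad_accum, epochs = 8531, 2, 8, 5
    effective_batch = per_device * grad_accum        # 16, the logged total train batch size
    steps_per_epoch = examples // effective_batch    # 533
    print(steps_per_epoch * epochs)                  # 2665, the logged optimization steps

    # Trainable-parameter count under the assumed LoRA config (r=8 on q_proj/v_proj).
    hidden, layers, r = 4096, 32, 8
    kv_dim = 4 * 128                                 # 4 KV heads x head_dim 128 (GQA)
    per_layer = r * (hidden + hidden) + r * (hidden + kv_dim)
    print(layers * per_layer)                        # 3276800, the logged trainable params
    print(layers * per_layer / 6064312320 * 100)     # ~0.054, the logged trainable%

Exact step rounding depends on the dataloader's drop-last behavior, but both totals match the log.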
05/30/2024 14:05:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6575, 'learning_rate': 4.9859e-05, 'epoch': 0.17}
05/30/2024 14:06:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6308, 'learning_rate': 4.9843e-05, 'epoch': 0.18}
05/30/2024 14:07:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.6273, 'learning_rate': 4.9826e-05, 'epoch': 0.19}
05/30/2024 14:07:19 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-100
05/30/2024 14:07:19 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-100/tokenizer_config.json
05/30/2024 14:07:19 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-100/special_tokens_map.json
05/30/2024 14:08:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6978, 'learning_rate': 4.9809e-05, 'epoch': 0.20}
05/30/2024 14:09:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6509, 'learning_rate': 4.9790e-05, 'epoch': 0.21}
05/30/2024 14:10:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6714, 'learning_rate': 4.9771e-05, 'epoch': 0.22}
05/30/2024 14:11:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6892, 'learning_rate': 4.9750e-05, 'epoch': 0.23}
05/30/2024 14:12:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6477, 'learning_rate': 4.9729e-05, 'epoch': 0.23}
05/30/2024 14:13:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6496, 'learning_rate': 4.9707e-05, 'epoch': 0.24}
05/30/2024 14:14:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6762, 'learning_rate': 4.9684e-05, 'epoch': 0.25}
05/30/2024 14:15:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6270, 'learning_rate': 4.9660e-05, 'epoch': 0.26}
05/30/2024 14:16:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6188, 'learning_rate': 4.9636e-05, 'epoch': 0.27}
05/30/2024 14:17:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6563, 'learning_rate': 4.9610e-05, 'epoch': 0.28}
05/30/2024 14:18:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6153, 'learning_rate': 4.9584e-05, 'epoch': 0.29}
05/30/2024 14:20:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6536, 'learning_rate': 4.9557e-05, 'epoch': 0.30}
05/30/2024 14:21:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6837, 'learning_rate': 4.9529e-05, 'epoch': 0.31}
05/30/2024 14:22:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6506, 'learning_rate': 4.9500e-05, 'epoch': 0.32}
05/30/2024 14:23:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6684, 'learning_rate': 4.9470e-05, 'epoch': 0.33}
05/30/2024 14:24:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6447, 'learning_rate': 4.9439e-05, 'epoch': 0.34}
05/30/2024 14:25:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6472, 'learning_rate': 4.9408e-05, 'epoch': 0.35}
05/30/2024 14:26:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6211, 'learning_rate': 4.9376e-05, 'epoch': 0.36}
05/30/2024 14:27:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6889, 'learning_rate': 4.9342e-05, 'epoch': 0.37}
05/30/2024 14:28:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6830, 'learning_rate': 4.9308e-05, 'epoch': 0.38}
05/30/2024 14:28:39 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-200
05/30/2024 14:28:39 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-200/tokenizer_config.json
05/30/2024 14:28:39 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-200/special_tokens_map.json
05/30/2024 14:29:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6488, 'learning_rate': 4.9274e-05, 'epoch': 0.38}
05/30/2024 14:30:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6519, 'learning_rate': 4.9238e-05, 'epoch': 0.39}
05/30/2024 14:31:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6309, 'learning_rate': 4.9201e-05, 'epoch': 0.40}
05/30/2024 14:33:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6359, 'learning_rate': 4.9164e-05, 'epoch': 0.41}
05/30/2024 14:34:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6281, 'learning_rate': 4.9126e-05, 'epoch': 0.42}
05/30/2024 14:35:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6005, 'learning_rate': 4.9087e-05, 'epoch': 0.43}
05/30/2024 14:36:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.6441, 'learning_rate': 4.9047e-05, 'epoch': 0.44}
05/30/2024 14:37:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6400, 'learning_rate': 4.9006e-05, 'epoch': 0.45}
05/30/2024 14:38:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6544, 'learning_rate': 4.8965e-05, 'epoch': 0.46}
05/30/2024 14:39:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6524, 'learning_rate': 4.8922e-05, 'epoch': 0.47}
05/30/2024 14:40:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6602, 'learning_rate': 4.8879e-05, 'epoch': 0.48}
05/30/2024 14:41:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6063, 'learning_rate': 4.8835e-05, 'epoch': 0.49}
05/30/2024 14:42:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6158, 'learning_rate': 4.8790e-05, 'epoch': 0.50}
05/30/2024 14:43:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6174, 'learning_rate': 4.8744e-05, 'epoch': 0.51}
05/30/2024 14:44:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6027, 'learning_rate': 4.8698e-05, 'epoch': 0.52}
05/30/2024 14:45:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6304, 'learning_rate': 4.8650e-05, 'epoch': 0.53}
05/30/2024 14:46:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5999, 'learning_rate': 4.8602e-05, 'epoch': 0.53}
05/30/2024 14:47:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6190, 'learning_rate': 4.8553e-05, 'epoch': 0.54}
05/30/2024 14:48:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6402, 'learning_rate': 4.8503e-05, 'epoch': 0.55}
05/30/2024 14:49:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6288, 'learning_rate': 4.8453e-05, 'epoch': 0.56}
05/30/2024 14:49:57 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-300
05/30/2024 14:49:57 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-300/tokenizer_config.json
05/30/2024 14:49:57 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-300/special_tokens_map.json
05/30/2024 14:51:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6361, 'learning_rate': 4.8401e-05, 'epoch': 0.57}
05/30/2024 14:52:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6182, 'learning_rate': 4.8349e-05, 'epoch': 0.58}
05/30/2024 14:53:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6066, 'learning_rate': 4.8296e-05, 'epoch': 0.59}
05/30/2024 14:54:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6735, 'learning_rate': 4.8242e-05, 'epoch': 0.60}
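Checkpoints are written every 100 optimization steps (checkpoint-100, checkpoint-200, ...), each containing the LoRA adapter plus the tokenizer files logged above. A minimal sketch of reloading one for inference, assuming a peft-compatible adapter layout in the checkpoint directory (typical for LLaMA-Factory LoRA runs of this era):

    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    ckpt = "/datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-300"
    base = AutoModelForCausalLM.from_pretrained(
        "/datas/huggingface/Yi-1.5-6B-Chat", torch_dtype="auto"
    )
    model = PeftModel.from_pretrained(base, ckpt)  # attaches the LoRA adapter
    tokenizer = AutoTokenizer.from_pretrained(ckpt)

model.merge_and_unload() would fold the adapter into the base weights if a standalone model is needed.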
05/30/2024 14:55:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6331, 'learning_rate': 4.8188e-05, 'epoch': 0.61}
05/30/2024 14:56:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5939, 'learning_rate': 4.8132e-05, 'epoch': 0.62}
05/30/2024 14:57:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6630, 'learning_rate': 4.8076e-05, 'epoch': 0.63}
05/30/2024 14:58:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6141, 'learning_rate': 4.8019e-05, 'epoch': 0.64}
05/30/2024 14:59:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6572, 'learning_rate': 4.7961e-05, 'epoch': 0.65}
05/30/2024 15:00:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6593, 'learning_rate': 4.7902e-05, 'epoch': 0.66}
05/30/2024 15:01:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6052, 'learning_rate': 4.7843e-05, 'epoch': 0.67}
05/30/2024 15:02:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6534, 'learning_rate': 4.7782e-05, 'epoch': 0.68}
05/30/2024 15:03:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5894, 'learning_rate': 4.7721e-05, 'epoch': 0.68}
05/30/2024 15:04:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6020, 'learning_rate': 4.7659e-05, 'epoch': 0.69}
05/30/2024 15:05:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6597, 'learning_rate': 4.7597e-05, 'epoch': 0.70}
05/30/2024 15:06:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6320, 'learning_rate': 4.7533e-05, 'epoch': 0.71}
05/30/2024 15:07:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6076, 'learning_rate': 4.7469e-05, 'epoch': 0.72}
05/30/2024 15:08:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.6121, 'learning_rate': 4.7404e-05, 'epoch': 0.73}
05/30/2024 15:09:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6424, 'learning_rate': 4.7338e-05, 'epoch': 0.74}
05/30/2024 15:10:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6492, 'learning_rate': 4.7272e-05, 'epoch': 0.75}
05/30/2024 15:10:58 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-400
05/30/2024 15:10:58 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-400/tokenizer_config.json
05/30/2024 15:10:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-400/special_tokens_map.json
05/30/2024 15:12:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6064, 'learning_rate': 4.7204e-05, 'epoch': 0.76}
05/30/2024 15:13:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6197, 'learning_rate': 4.7136e-05, 'epoch': 0.77}
05/30/2024 15:14:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6301, 'learning_rate': 4.7068e-05, 'epoch': 0.78}
05/30/2024 15:15:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6690, 'learning_rate': 4.6998e-05, 'epoch': 0.79}
05/30/2024 15:16:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6342, 'learning_rate': 4.6928e-05, 'epoch': 0.80}
05/30/2024 15:17:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5989, 'learning_rate': 4.6856e-05, 'epoch': 0.81}
05/30/2024 15:18:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5550, 'learning_rate': 4.6784e-05, 'epoch': 0.82}
05/30/2024 15:19:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5861, 'learning_rate': 4.6712e-05, 'epoch': 0.83}
05/30/2024 15:20:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6194, 'learning_rate': 4.6638e-05, 'epoch': 0.83}
05/30/2024 15:21:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6101, 'learning_rate': 4.6564e-05, 'epoch': 0.84}
05/30/2024 15:22:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5982, 'learning_rate': 4.6489e-05, 'epoch': 0.85}
05/30/2024 15:23:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6409, 'learning_rate': 4.6414e-05, 'epoch': 0.86}
05/30/2024 15:24:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6068, 'learning_rate': 4.6337e-05, 'epoch': 0.87}
05/30/2024 15:25:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6090, 'learning_rate': 4.6260e-05, 'epoch': 0.88}
05/30/2024 15:26:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6298, 'learning_rate': 4.6182e-05, 'epoch': 0.89}
05/30/2024 15:27:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6231, 'learning_rate': 4.6103e-05, 'epoch': 0.90}
05/30/2024 15:28:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6065, 'learning_rate': 4.6024e-05, 'epoch': 0.91}
05/30/2024 15:29:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6045, 'learning_rate': 4.5944e-05, 'epoch': 0.92}
05/30/2024 15:31:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6963, 'learning_rate': 4.5863e-05, 'epoch': 0.93}
05/30/2024 15:32:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6103, 'learning_rate': 4.5782e-05, 'epoch': 0.94}
05/30/2024 15:32:07 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-500
05/30/2024 15:32:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-500/tokenizer_config.json
05/30/2024 15:32:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-500/special_tokens_map.json
05/30/2024 15:33:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6232, 'learning_rate': 4.5699e-05, 'epoch': 0.95}
05/30/2024 15:34:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6579, 'learning_rate': 4.5616e-05, 'epoch': 0.96}
05/30/2024 15:35:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6152, 'learning_rate': 4.5533e-05, 'epoch': 0.97}
05/30/2024 15:36:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6022, 'learning_rate': 4.5448e-05, 'epoch': 0.98}
05/30/2024 15:37:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6095, 'learning_rate': 4.5363e-05, 'epoch': 0.98}
05/30/2024 15:38:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6184, 'learning_rate': 4.5277e-05, 'epoch': 0.99}
05/30/2024 15:39:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5962, 'learning_rate': 4.5191e-05, 'epoch': 1.00}
05/30/2024 15:40:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5736, 'learning_rate': 4.5103e-05, 'epoch': 1.01}
05/30/2024 15:41:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5828, 'learning_rate': 4.5016e-05, 'epoch': 1.02}
05/30/2024 15:43:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6404, 'learning_rate': 4.4927e-05, 'epoch': 1.03}
05/30/2024 15:44:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6008, 'learning_rate': 4.4838e-05, 'epoch': 1.04}
05/30/2024 15:45:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5757, 'learning_rate': 4.4748e-05, 'epoch': 1.05}
05/30/2024 15:46:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6319, 'learning_rate': 4.4657e-05, 'epoch': 1.06}
05/30/2024 15:47:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6130, 'learning_rate': 4.4565e-05, 'epoch': 1.07}
05/30/2024 15:48:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6035, 'learning_rate': 4.4473e-05, 'epoch': 1.08}
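The learning-rate column decays from its 5.0000e-05 peak along what looks like a cosine schedule with no warmup; the logged values reproduce it closely. A check against the entry at epoch 1.00 (roughly step 535, with logging about every 5 steps; the schedule itself is inferred, not stated in the log):

    import math

    max_lr, total_steps = 5e-5, 2665

    def cosine_lr(step: int) -> float:
        # Cosine annealing from max_lr down to 0, no warmup (assumed).
        return 0.5 * max_lr * (1 + math.cos(math.pi * step / total_steps))

    print(f"{cosine_lr(535):.4e}")  # 4.5190e-05, vs. 4.5191e-05 logged at epoch 1.00

The same formula lands within rounding of the other logged values as well, e.g. 3.2753e-05 near epoch 2.00 (step 1065) and 1.7247e-05 near epoch 3.00 (step 1600).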
05/30/2024 15:49:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6090, 'learning_rate': 4.4381e-05, 'epoch': 1.09}
05/30/2024 15:50:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6019, 'learning_rate': 4.4287e-05, 'epoch': 1.10}
05/30/2024 15:51:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5885, 'learning_rate': 4.4193e-05, 'epoch': 1.11}
05/30/2024 15:52:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6139, 'learning_rate': 4.4098e-05, 'epoch': 1.12}
05/30/2024 15:53:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5989, 'learning_rate': 4.4003e-05, 'epoch': 1.13}
05/30/2024 15:53:43 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-600
05/30/2024 15:53:43 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-600/tokenizer_config.json
05/30/2024 15:53:43 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-600/special_tokens_map.json
05/30/2024 15:54:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6039, 'learning_rate': 4.3907e-05, 'epoch': 1.13}
05/30/2024 15:55:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5738, 'learning_rate': 4.3810e-05, 'epoch': 1.14}
05/30/2024 15:56:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5707, 'learning_rate': 4.3713e-05, 'epoch': 1.15}
05/30/2024 15:57:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5763, 'learning_rate': 4.3615e-05, 'epoch': 1.16}
05/30/2024 15:58:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5954, 'learning_rate': 4.3516e-05, 'epoch': 1.17}
05/30/2024 16:00:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6076, 'learning_rate': 4.3417e-05, 'epoch': 1.18}
05/30/2024 16:01:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6043, 'learning_rate': 4.3317e-05, 'epoch': 1.19}
05/30/2024 16:02:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5919, 'learning_rate': 4.3216e-05, 'epoch': 1.20}
05/30/2024 16:03:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5762, 'learning_rate': 4.3115e-05, 'epoch': 1.21}
05/30/2024 16:04:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6342, 'learning_rate': 4.3013e-05, 'epoch': 1.22}
05/30/2024 16:05:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6095, 'learning_rate': 4.2911e-05, 'epoch': 1.23}
05/30/2024 16:06:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6075, 'learning_rate': 4.2807e-05, 'epoch': 1.24}
05/30/2024 16:07:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5970, 'learning_rate': 4.2704e-05, 'epoch': 1.25}
05/30/2024 16:08:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6865, 'learning_rate': 4.2599e-05, 'epoch': 1.26}
05/30/2024 16:09:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6375, 'learning_rate': 4.2494e-05, 'epoch': 1.27}
05/30/2024 16:10:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5711, 'learning_rate': 4.2389e-05, 'epoch': 1.28}
05/30/2024 16:11:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6099, 'learning_rate': 4.2283e-05, 'epoch': 1.28}
05/30/2024 16:12:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6378, 'learning_rate': 4.2176e-05, 'epoch': 1.29}
05/30/2024 16:13:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6209, 'learning_rate': 4.2069e-05, 'epoch': 1.30}
05/30/2024 16:14:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5976, 'learning_rate': 4.1961e-05, 'epoch': 1.31}
05/30/2024 16:14:49 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-700
05/30/2024 16:14:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-700/tokenizer_config.json
05/30/2024 16:14:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-700/special_tokens_map.json
05/30/2024 16:15:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6079, 'learning_rate': 4.1852e-05, 'epoch': 1.32}
05/30/2024 16:16:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6134, 'learning_rate': 4.1743e-05, 'epoch': 1.33}
05/30/2024 16:18:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5682, 'learning_rate': 4.1633e-05, 'epoch': 1.34}
05/30/2024 16:19:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6011, 'learning_rate': 4.1523e-05, 'epoch': 1.35}
05/30/2024 16:20:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6182, 'learning_rate': 4.1412e-05, 'epoch': 1.36}
05/30/2024 16:21:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6231, 'learning_rate': 4.1301e-05, 'epoch': 1.37}
05/30/2024 16:22:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5997, 'learning_rate': 4.1189e-05, 'epoch': 1.38}
05/30/2024 16:23:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5869, 'learning_rate': 4.1076e-05, 'epoch': 1.39}
05/30/2024 16:24:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6077, 'learning_rate': 4.0963e-05, 'epoch': 1.40}
05/30/2024 16:25:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6025, 'learning_rate': 4.0849e-05, 'epoch': 1.41}
05/30/2024 16:26:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.6075, 'learning_rate': 4.0735e-05, 'epoch': 1.42}
05/30/2024 16:27:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6537, 'learning_rate': 4.0620e-05, 'epoch': 1.43}
05/30/2024 16:28:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6063, 'learning_rate': 4.0505e-05, 'epoch': 1.43}
05/30/2024 16:29:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5934, 'learning_rate': 4.0389e-05, 'epoch': 1.44}
05/30/2024 16:30:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5648, 'learning_rate': 4.0273e-05, 'epoch': 1.45}
05/30/2024 16:31:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6070, 'learning_rate': 4.0156e-05, 'epoch': 1.46}
05/30/2024 16:32:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5978, 'learning_rate': 4.0038e-05, 'epoch': 1.47}
05/30/2024 16:33:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6700, 'learning_rate': 3.9920e-05, 'epoch': 1.48}
05/30/2024 16:34:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6081, 'learning_rate': 3.9802e-05, 'epoch': 1.49}
05/30/2024 16:35:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6458, 'learning_rate': 3.9683e-05, 'epoch': 1.50}
05/30/2024 16:35:50 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-800
05/30/2024 16:35:50 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-800/tokenizer_config.json
05/30/2024 16:35:50 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-800/special_tokens_map.json
05/30/2024 16:36:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5989, 'learning_rate': 3.9563e-05, 'epoch': 1.51}
05/30/2024 16:37:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5800, 'learning_rate': 3.9443e-05, 'epoch': 1.52}
05/30/2024 16:38:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5670, 'learning_rate': 3.9323e-05, 'epoch': 1.53}
05/30/2024 16:40:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6213, 'learning_rate': 3.9202e-05, 'epoch': 1.54}
05/30/2024 16:41:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5955, 'learning_rate': 3.9080e-05, 'epoch': 1.55}
05/30/2024 16:42:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6037, 'learning_rate': 3.8958e-05, 'epoch': 1.56}
05/30/2024 16:43:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5421, 'learning_rate': 3.8836e-05, 'epoch': 1.57}
05/30/2024 16:44:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6055, 'learning_rate': 3.8713e-05, 'epoch': 1.58}
05/30/2024 16:45:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5815, 'learning_rate': 3.8589e-05, 'epoch': 1.58}
05/30/2024 16:46:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6003, 'learning_rate': 3.8465e-05, 'epoch': 1.59}
05/30/2024 16:47:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6236, 'learning_rate': 3.8341e-05, 'epoch': 1.60}
05/30/2024 16:48:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5996, 'learning_rate': 3.8216e-05, 'epoch': 1.61}
05/30/2024 16:49:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5695, 'learning_rate': 3.8091e-05, 'epoch': 1.62}
05/30/2024 16:50:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6033, 'learning_rate': 3.7965e-05, 'epoch': 1.63}
05/30/2024 16:51:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5780, 'learning_rate': 3.7839e-05, 'epoch': 1.64}
05/30/2024 16:52:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5748, 'learning_rate': 3.7712e-05, 'epoch': 1.65}
05/30/2024 16:53:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6203, 'learning_rate': 3.7585e-05, 'epoch': 1.66}
05/30/2024 16:54:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5516, 'learning_rate': 3.7457e-05, 'epoch': 1.67}
05/30/2024 16:55:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5829, 'learning_rate': 3.7329e-05, 'epoch': 1.68}
05/30/2024 16:56:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6371, 'learning_rate': 3.7201e-05, 'epoch': 1.69}
05/30/2024 16:56:56 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-900
05/30/2024 16:56:56 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-900/tokenizer_config.json
05/30/2024 16:56:56 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-900/special_tokens_map.json
05/30/2024 16:58:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6364, 'learning_rate': 3.7072e-05, 'epoch': 1.70}
05/30/2024 16:59:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6472, 'learning_rate': 3.6943e-05, 'epoch': 1.71}
05/30/2024 17:00:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6164, 'learning_rate': 3.6813e-05, 'epoch': 1.72}
05/30/2024 17:01:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.7089, 'learning_rate': 3.6683e-05, 'epoch': 1.73}
05/30/2024 17:02:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5948, 'learning_rate': 3.6553e-05, 'epoch': 1.73}
05/30/2024 17:03:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6064, 'learning_rate': 3.6422e-05, 'epoch': 1.74}
05/30/2024 17:04:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6020, 'learning_rate': 3.6291e-05, 'epoch': 1.75}
05/30/2024 17:05:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6408, 'learning_rate': 3.6159e-05, 'epoch': 1.76}
05/30/2024 17:06:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5833, 'learning_rate': 3.6027e-05, 'epoch': 1.77}
05/30/2024 17:07:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6257, 'learning_rate': 3.5894e-05, 'epoch': 1.78}
05/30/2024 17:08:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5920, 'learning_rate': 3.5762e-05, 'epoch': 1.79}
05/30/2024 17:10:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6496, 'learning_rate': 3.5628e-05, 'epoch': 1.80}
05/30/2024 17:11:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5781, 'learning_rate': 3.5495e-05, 'epoch': 1.81}
05/30/2024 17:12:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6298, 'learning_rate': 3.5361e-05, 'epoch': 1.82}
05/30/2024 17:13:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5683, 'learning_rate': 3.5227e-05, 'epoch': 1.83}
05/30/2024 17:14:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5508, 'learning_rate': 3.5092e-05, 'epoch': 1.84}
05/30/2024 17:15:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5773, 'learning_rate': 3.4957e-05, 'epoch': 1.85}
05/30/2024 17:16:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5686, 'learning_rate': 3.4822e-05, 'epoch': 1.86}
05/30/2024 17:17:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5972, 'learning_rate': 3.4686e-05, 'epoch': 1.87}
05/30/2024 17:18:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5752, 'learning_rate': 3.4550e-05, 'epoch': 1.88}
05/30/2024 17:18:30 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1000
05/30/2024 17:18:30 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1000/tokenizer_config.json
05/30/2024 17:18:30 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1000/special_tokens_map.json
05/30/2024 17:19:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5567, 'learning_rate': 3.4414e-05, 'epoch': 1.88}
05/30/2024 17:20:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5755, 'learning_rate': 3.4277e-05, 'epoch': 1.89}
05/30/2024 17:21:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6065, 'learning_rate': 3.4140e-05, 'epoch': 1.90}
05/30/2024 17:22:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6005, 'learning_rate': 3.4003e-05, 'epoch': 1.91}
05/30/2024 17:23:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5757, 'learning_rate': 3.3865e-05, 'epoch': 1.92}
05/30/2024 17:24:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5887, 'learning_rate': 3.3727e-05, 'epoch': 1.93}
05/30/2024 17:25:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5866, 'learning_rate': 3.3589e-05, 'epoch': 1.94}
05/30/2024 17:26:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5890, 'learning_rate': 3.3450e-05, 'epoch': 1.95}
05/30/2024 17:27:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6058, 'learning_rate': 3.3312e-05, 'epoch': 1.96}
05/30/2024 17:28:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5697, 'learning_rate': 3.3172e-05, 'epoch': 1.97}
05/30/2024 17:29:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5771, 'learning_rate': 3.3033e-05, 'epoch': 1.98}
05/30/2024 17:31:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5526, 'learning_rate': 3.2893e-05, 'epoch': 1.99}
05/30/2024 17:32:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5851, 'learning_rate': 3.2753e-05, 'epoch': 2.00}
05/30/2024 17:33:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5458, 'learning_rate': 3.2613e-05, 'epoch': 2.01}
05/30/2024 17:34:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5437, 'learning_rate': 3.2473e-05, 'epoch': 2.02}
05/30/2024 17:35:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6187, 'learning_rate': 3.2332e-05, 'epoch': 2.03}
05/30/2024 17:36:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5826, 'learning_rate': 3.2191e-05, 'epoch': 2.03}
05/30/2024 17:37:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5988, 'learning_rate': 3.2050e-05, 'epoch': 2.04}
05/30/2024 17:38:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6125, 'learning_rate': 3.1908e-05, 'epoch': 2.05}
05/30/2024 17:39:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5785, 'learning_rate': 3.1767e-05, 'epoch': 2.06}
05/30/2024 17:39:47 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1100
05/30/2024 17:39:47 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1100/tokenizer_config.json
05/30/2024 17:39:47 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1100/special_tokens_map.json
05/30/2024 17:40:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5395, 'learning_rate': 3.1625e-05, 'epoch': 2.07}
05/30/2024 17:41:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5674, 'learning_rate': 3.1482e-05, 'epoch': 2.08}
05/30/2024 17:42:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5890, 'learning_rate': 3.1340e-05, 'epoch': 2.09}
05/30/2024 17:44:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5824, 'learning_rate': 3.1197e-05, 'epoch': 2.10}
05/30/2024 17:45:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5993, 'learning_rate': 3.1054e-05, 'epoch': 2.11}
05/30/2024 17:46:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5877, 'learning_rate': 3.0911e-05, 'epoch': 2.12}
05/30/2024 17:47:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5787, 'learning_rate': 3.0768e-05, 'epoch': 2.13}
05/30/2024 17:48:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6761, 'learning_rate': 3.0625e-05, 'epoch': 2.14}
05/30/2024 17:49:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5611, 'learning_rate': 3.0481e-05, 'epoch': 2.15}
05/30/2024 17:50:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5619, 'learning_rate': 3.0337e-05, 'epoch': 2.16}
05/30/2024 17:51:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5655, 'learning_rate': 3.0193e-05, 'epoch': 2.17}
05/30/2024 17:52:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5767, 'learning_rate': 3.0049e-05, 'epoch': 2.18}
05/30/2024 17:53:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5877, 'learning_rate': 2.9904e-05, 'epoch': 2.18}
05/30/2024 17:54:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6429, 'learning_rate': 2.9760e-05, 'epoch': 2.19}
05/30/2024 17:55:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6046, 'learning_rate': 2.9615e-05, 'epoch': 2.20}
05/30/2024 17:56:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6054, 'learning_rate': 2.9470e-05, 'epoch': 2.21}
05/30/2024 17:57:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6006, 'learning_rate': 2.9325e-05, 'epoch': 2.22}
05/30/2024 17:58:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5746, 'learning_rate': 2.9180e-05, 'epoch': 2.23}
05/30/2024 18:00:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5470, 'learning_rate': 2.9035e-05, 'epoch': 2.24}
05/30/2024 18:01:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5941, 'learning_rate': 2.8889e-05, 'epoch': 2.25}
05/30/2024 18:01:08 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1200
05/30/2024 18:01:08 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1200/tokenizer_config.json
05/30/2024 18:01:08 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1200/special_tokens_map.json
05/30/2024 18:02:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5814, 'learning_rate': 2.8743e-05, 'epoch': 2.26}
05/30/2024 18:03:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6314, 'learning_rate': 2.8598e-05, 'epoch': 2.27}
05/30/2024 18:04:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5696, 'learning_rate': 2.8452e-05, 'epoch': 2.28}
05/30/2024 18:05:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5506, 'learning_rate': 2.8306e-05, 'epoch': 2.29}
05/30/2024 18:06:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5843, 'learning_rate': 2.8160e-05, 'epoch': 2.30}
05/30/2024 18:07:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5675, 'learning_rate': 2.8013e-05, 'epoch': 2.31}
05/30/2024 18:08:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5877, 'learning_rate': 2.7867e-05, 'epoch': 2.32}
05/30/2024 18:09:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5854, 'learning_rate': 2.7721e-05, 'epoch': 2.33}
05/30/2024 18:10:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6052, 'learning_rate': 2.7574e-05, 'epoch': 2.33}
05/30/2024 18:11:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6017, 'learning_rate': 2.7428e-05, 'epoch': 2.34}
05/30/2024 18:12:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6054, 'learning_rate': 2.7281e-05, 'epoch': 2.35}
05/30/2024 18:13:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5571, 'learning_rate': 2.7134e-05, 'epoch': 2.36}
05/30/2024 18:14:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5837, 'learning_rate': 2.6987e-05, 'epoch': 2.37}
05/30/2024 18:15:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6577, 'learning_rate': 2.6840e-05, 'epoch': 2.38}
05/30/2024 18:17:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5717, 'learning_rate': 2.6693e-05, 'epoch': 2.39}
05/30/2024 18:18:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6334, 'learning_rate': 2.6546e-05, 'epoch': 2.40}
05/30/2024 18:19:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6067, 'learning_rate': 2.6399e-05, 'epoch': 2.41}
05/30/2024 18:20:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5845, 'learning_rate': 2.6252e-05, 'epoch': 2.42}
05/30/2024 18:21:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5463, 'learning_rate': 2.6105e-05, 'epoch': 2.43}
05/30/2024 18:22:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5960, 'learning_rate': 2.5958e-05, 'epoch': 2.44}
05/30/2024 18:22:23 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1300
05/30/2024 18:22:23 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1300/tokenizer_config.json
05/30/2024 18:22:23 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1300/special_tokens_map.json
05/30/2024 18:23:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6683, 'learning_rate': 2.5810e-05, 'epoch': 2.45}
05/30/2024 18:24:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6129, 'learning_rate': 2.5663e-05, 'epoch': 2.46}
05/30/2024 18:25:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5860, 'learning_rate': 2.5516e-05, 'epoch': 2.47}
05/30/2024 18:26:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5484, 'learning_rate': 2.5368e-05, 'epoch': 2.48}
05/30/2024 18:27:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5897, 'learning_rate': 2.5221e-05, 'epoch': 2.48}
05/30/2024 18:28:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5627, 'learning_rate': 2.5074e-05, 'epoch': 2.49}
05/30/2024 18:29:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5700, 'learning_rate': 2.4926e-05, 'epoch': 2.50}
05/30/2024 18:30:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6077, 'learning_rate': 2.4779e-05, 'epoch': 2.51}
05/30/2024 18:31:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5656, 'learning_rate': 2.4632e-05, 'epoch': 2.52}
05/30/2024 18:32:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5511, 'learning_rate': 2.4484e-05, 'epoch': 2.53}
05/30/2024 18:34:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6006, 'learning_rate': 2.4337e-05, 'epoch': 2.54}
05/30/2024 18:35:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6242, 'learning_rate': 2.4190e-05, 'epoch': 2.55}
05/30/2024 18:36:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5469, 'learning_rate': 2.4042e-05, 'epoch': 2.56}
05/30/2024 18:37:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5479, 'learning_rate': 2.3895e-05, 'epoch': 2.57}
05/30/2024 18:38:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6107, 'learning_rate': 2.3748e-05, 'epoch': 2.58}
05/30/2024 18:39:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5726, 'learning_rate': 2.3601e-05, 'epoch': 2.59}
05/30/2024 18:40:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5724, 'learning_rate': 2.3454e-05, 'epoch': 2.60}
05/30/2024 18:41:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5503, 'learning_rate': 2.3307e-05, 'epoch': 2.61}
05/30/2024 18:42:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6231, 'learning_rate': 2.3160e-05, 'epoch': 2.62}
05/30/2024 18:43:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5820, 'learning_rate': 2.3013e-05, 'epoch': 2.63}
05/30/2024 18:43:40 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1400
05/30/2024 18:43:40 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1400/tokenizer_config.json
05/30/2024 18:43:40 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1400/special_tokens_map.json
05/30/2024 18:44:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5680, 'learning_rate': 2.2866e-05, 'epoch': 2.63}
05/30/2024 18:45:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5499, 'learning_rate': 2.2719e-05, 'epoch': 2.64}
05/30/2024 18:46:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5412, 'learning_rate': 2.2572e-05, 'epoch': 2.65}
05/30/2024 18:47:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5703, 'learning_rate': 2.2426e-05, 'epoch': 2.66}
05/30/2024 18:48:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5746, 'learning_rate': 2.2279e-05, 'epoch': 2.67}
05/30/2024 18:49:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5939, 'learning_rate': 2.2133e-05, 'epoch': 2.68}
05/30/2024 18:50:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5837, 'learning_rate': 2.1987e-05, 'epoch': 2.69}
05/30/2024 18:51:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5811, 'learning_rate': 2.1840e-05, 'epoch': 2.70}
05/30/2024 18:52:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6135, 'learning_rate': 2.1694e-05, 'epoch': 2.71}
05/30/2024 18:54:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6043, 'learning_rate': 2.1548e-05, 'epoch': 2.72}
05/30/2024 18:55:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5590, 'learning_rate': 2.1402e-05, 'epoch': 2.73}
05/30/2024 18:56:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5732, 'learning_rate': 2.1257e-05, 'epoch': 2.74}
05/30/2024 18:57:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5772, 'learning_rate': 2.1111e-05, 'epoch': 2.75}
05/30/2024 18:58:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5822, 'learning_rate': 2.0965e-05, 'epoch': 2.76}
05/30/2024 18:59:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5578, 'learning_rate': 2.0820e-05, 'epoch': 2.77}
05/30/2024 19:00:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5680, 'learning_rate': 2.0675e-05, 'epoch': 2.78}
05/30/2024 19:01:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6066, 'learning_rate': 2.0530e-05, 'epoch': 2.78}
05/30/2024 19:02:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5995, 'learning_rate': 2.0385e-05, 'epoch': 2.79}
05/30/2024 19:03:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6224, 'learning_rate': 2.0240e-05, 'epoch': 2.80}
05/30/2024 19:04:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5674, 'learning_rate': 2.0096e-05, 'epoch': 2.81}
05/30/2024 19:04:31 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1500
05/30/2024 19:04:31 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1500/tokenizer_config.json
05/30/2024 19:04:31 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1500/special_tokens_map.json
05/30/2024 19:05:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5517, 'learning_rate': 1.9951e-05, 'epoch': 2.82}
05/30/2024 19:06:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5784, 'learning_rate': 1.9807e-05, 'epoch': 2.83}
05/30/2024 19:07:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6194, 'learning_rate': 1.9663e-05, 'epoch': 2.84}
05/30/2024 19:08:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5756, 'learning_rate': 1.9519e-05, 'epoch': 2.85}
05/30/2024 19:09:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5894, 'learning_rate': 1.9375e-05, 'epoch': 2.86}
05/30/2024 19:10:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5518, 'learning_rate': 1.9232e-05, 'epoch': 2.87}
05/30/2024 19:11:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5849, 'learning_rate': 1.9089e-05, 'epoch': 2.88}
05/30/2024 19:12:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5422, 'learning_rate': 1.8946e-05, 'epoch': 2.89}
05/30/2024 19:13:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6246, 'learning_rate': 1.8803e-05, 'epoch': 2.90}
05/30/2024 19:15:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5808, 'learning_rate': 1.8660e-05, 'epoch': 2.91}
05/30/2024 19:16:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5782, 'learning_rate': 1.8518e-05, 'epoch': 2.92}
05/30/2024 19:17:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.6015, 'learning_rate': 1.8375e-05, 'epoch': 2.93}
05/30/2024 19:18:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6102, 'learning_rate': 1.8233e-05, 'epoch': 2.93}
05/30/2024 19:19:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5898, 'learning_rate': 1.8092e-05, 'epoch': 2.94}
05/30/2024 19:20:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6148, 'learning_rate': 1.7950e-05, 'epoch': 2.95}
05/30/2024 19:21:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6023, 'learning_rate': 1.7809e-05, 'epoch': 2.96}
05/30/2024 19:22:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5562, 'learning_rate': 1.7668e-05, 'epoch': 2.97}
05/30/2024 19:23:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6201, 'learning_rate': 1.7527e-05, 'epoch': 2.98}
05/30/2024 19:24:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6247, 'learning_rate': 1.7387e-05, 'epoch': 2.99}
05/30/2024 19:25:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5717, 'learning_rate': 1.7247e-05, 'epoch': 3.00}
05/30/2024 19:25:32 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1600
05/30/2024 19:25:32 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1600/tokenizer_config.json
05/30/2024 19:25:32 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1600/special_tokens_map.json
05/30/2024 19:26:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6235, 'learning_rate': 1.7107e-05, 'epoch': 3.01}
05/30/2024 19:27:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5751, 'learning_rate': 1.6967e-05, 'epoch': 3.02}
05/30/2024 19:28:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5947, 'learning_rate': 1.6828e-05, 'epoch': 3.03}
05/30/2024 19:29:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6059, 'learning_rate': 1.6688e-05, 'epoch': 3.04}
05/30/2024 19:30:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6026, 'learning_rate': 1.6550e-05, 'epoch': 3.05}
05/30/2024 19:31:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5492, 'learning_rate': 1.6411e-05, 'epoch': 3.06}
05/30/2024 19:32:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.5348, 'learning_rate': 1.6273e-05, 'epoch': 3.07}
05/30/2024 19:34:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6244, 'learning_rate': 1.6135e-05, 'epoch': 3.08}
05/30/2024 19:35:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5238, 'learning_rate': 1.5997e-05, 'epoch': 3.08}
05/30/2024 19:36:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5834, 'learning_rate': 1.5860e-05, 'epoch': 3.09}
05/30/2024 19:37:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6277, 'learning_rate': 1.5723e-05, 'epoch': 3.10}
05/30/2024 19:38:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5758, 'learning_rate': 1.5586e-05, 'epoch': 3.11}
05/30/2024 19:39:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5843, 'learning_rate': 1.5450e-05, 'epoch': 3.12}
05/30/2024 19:40:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5540, 'learning_rate': 1.5314e-05, 'epoch': 3.13}
05/30/2024 19:41:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5306, 'learning_rate': 1.5178e-05, 'epoch': 3.14}
05/30/2024 19:42:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6154, 'learning_rate': 1.5043e-05, 'epoch': 3.15}
05/30/2024 19:43:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5499, 'learning_rate': 1.4908e-05, 'epoch': 3.16}
05/30/2024 19:44:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5797, 'learning_rate': 1.4773e-05, 'epoch': 3.17}
05/30/2024 19:45:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5469, 'learning_rate': 1.4639e-05, 'epoch': 3.18}
05/30/2024 19:46:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5794, 'learning_rate': 1.4505e-05, 'epoch': 3.19}
05/30/2024 19:46:32 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1700
05/30/2024 19:46:32 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1700/tokenizer_config.json
05/30/2024 19:46:32 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1700/special_tokens_map.json
05/30/2024 19:47:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5357, 'learning_rate': 1.4372e-05, 'epoch': 3.20}
05/30/2024 19:48:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5605, 'learning_rate': 1.4238e-05, 'epoch': 3.21}
05/30/2024 19:49:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6034, 'learning_rate': 1.4106e-05, 'epoch': 3.22}
05/30/2024 19:50:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6217, 'learning_rate': 1.3973e-05, 'epoch': 3.23}
05/30/2024 19:51:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5666, 'learning_rate': 1.3841e-05, 'epoch': 3.23}
05/30/2024 19:52:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5434, 'learning_rate': 1.3709e-05, 'epoch': 3.24}
05/30/2024 19:53:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.5414, 'learning_rate': 1.3578e-05, 'epoch': 3.25}
05/30/2024 19:55:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6405, 'learning_rate': 1.3447e-05, 'epoch': 3.26}
05/30/2024 19:56:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5695, 'learning_rate': 1.3317e-05, 'epoch': 3.27}
05/30/2024 19:57:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5523, 'learning_rate': 1.3187e-05, 'epoch': 3.28}
05/30/2024 19:58:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5751, 'learning_rate': 1.3057e-05, 'epoch': 3.29}
05/30/2024 19:59:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5323, 'learning_rate': 1.2928e-05, 'epoch': 3.30}
05/30/2024 20:00:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5397, 'learning_rate': 1.2799e-05, 'epoch': 3.31}
05/30/2024 20:01:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5592, 'learning_rate': 1.2671e-05, 'epoch': 3.32}
05/30/2024 20:02:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5537, 'learning_rate': 1.2543e-05, 'epoch': 3.33}
05/30/2024 20:03:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6159, 'learning_rate': 1.2415e-05, 'epoch': 3.34}
05/30/2024 20:04:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5576, 'learning_rate': 1.2288e-05, 'epoch': 3.35}
05/30/2024 20:05:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5914, 'learning_rate': 1.2161e-05, 'epoch': 3.36}
05/30/2024 20:06:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6207, 'learning_rate': 1.2035e-05, 'epoch': 3.37}
05/30/2024 20:07:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5759, 'learning_rate': 1.1909e-05, 'epoch': 3.38}
05/30/2024 20:07:34 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1800
05/30/2024 20:07:34 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1800/tokenizer_config.json
05/30/2024 20:07:34 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1800/special_tokens_map.json
05/30/2024 20:08:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5638, 'learning_rate': 1.1784e-05, 'epoch': 3.38}
05/30/2024 20:09:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5905, 'learning_rate': 1.1659e-05, 'epoch': 3.39}
05/30/2024 20:10:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5689, 'learning_rate': 1.1535e-05, 'epoch': 3.40}
05/30/2024 20:11:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6289, 'learning_rate': 1.1411e-05, 'epoch': 3.41}
05/30/2024 20:12:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5722, 'learning_rate': 1.1287e-05, 'epoch': 3.42}
05/30/2024 20:13:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5545, 'learning_rate': 1.1164e-05, 'epoch': 3.43}
05/30/2024 20:14:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6004, 'learning_rate': 1.1042e-05, 'epoch': 3.44}
05/30/2024 20:16:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5117, 'learning_rate': 1.0920e-05, 'epoch': 3.45}
05/30/2024 20:17:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5550, 'learning_rate': 1.0798e-05, 'epoch': 3.46}
05/30/2024 20:18:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5878, 'learning_rate': 1.0677e-05, 'epoch': 3.47}
05/30/2024 20:19:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5508, 'learning_rate': 1.0557e-05, 'epoch': 3.48}
05/30/2024 20:20:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5604, 'learning_rate': 1.0437e-05, 'epoch': 3.49}
05/30/2024 20:21:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5350, 'learning_rate': 1.0317e-05, 'epoch': 3.50}
05/30/2024 20:22:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5643, 'learning_rate': 1.0198e-05, 'epoch': 3.51}
05/30/2024 20:23:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5696, 'learning_rate': 1.0080e-05, 'epoch': 3.52}
05/30/2024 20:24:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5581, 'learning_rate': 9.9618e-06, 'epoch': 3.53}
05/30/2024 20:25:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6107, 'learning_rate': 9.8444e-06, 'epoch': 3.53}
05/30/2024 20:26:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5505, 'learning_rate': 9.7274e-06, 'epoch': 3.54}
05/30/2024 20:27:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5985, 'learning_rate': 9.6110e-06, 'epoch': 3.55}
05/30/2024 20:28:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5681, 'learning_rate': 9.4952e-06, 'epoch': 3.56}
05/30/2024 20:28:59 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1900
05/30/2024 20:28:59 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1900/tokenizer_config.json
05/30/2024 20:28:59 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-1900/special_tokens_map.json
05/30/2024 20:30:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5991, 'learning_rate': 9.3799e-06, 'epoch': 3.57}
05/30/2024 20:31:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5998, 'learning_rate': 9.2651e-06, 'epoch': 3.58}
05/30/2024 20:32:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5713, 'learning_rate': 9.1508e-06, 'epoch': 3.59}
05/30/2024 20:33:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5707, 'learning_rate': 9.0372e-06, 'epoch': 3.60}
05/30/2024 20:34:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5470, 'learning_rate': 8.9240e-06, 'epoch': 3.61}
05/30/2024 20:35:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6162, 'learning_rate': 8.8115e-06, 'epoch': 3.62}
05/30/2024 20:36:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6402, 'learning_rate': 8.6995e-06, 'epoch': 3.63}
05/30/2024 20:37:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5975, 'learning_rate': 8.5880e-06, 'epoch': 3.64}
05/30/2024 20:38:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6069, 'learning_rate': 8.4772e-06, 'epoch': 3.65}
05/30/2024 20:39:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5906, 'learning_rate': 8.3669e-06, 'epoch': 3.66}
05/30/2024 20:40:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5676, 'learning_rate': 8.2571e-06, 'epoch': 3.67}
05/30/2024 20:41:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6021, 'learning_rate': 8.1480e-06, 'epoch': 3.68}
05/30/2024 20:42:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5730, 'learning_rate': 8.0395e-06, 'epoch': 3.68}
05/30/2024 20:43:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6690, 'learning_rate': 7.9315e-06, 'epoch': 3.69}
05/30/2024 20:44:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5580, 'learning_rate': 7.8241e-06, 'epoch': 3.70}
05/30/2024 20:46:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5650, 'learning_rate': 7.7173e-06, 'epoch': 3.71}
05/30/2024 20:47:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5706, 'learning_rate': 7.6112e-06, 'epoch': 3.72}
05/30/2024 20:48:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5693, 'learning_rate': 7.5056e-06, 'epoch': 3.73}
05/30/2024 20:49:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6010, 'learning_rate': 7.4006e-06, 'epoch': 3.74}
05/30/2024 20:50:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6066, 'learning_rate': 7.2963e-06, 'epoch': 3.75}
05/30/2024 20:50:33 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2000
05/30/2024 20:50:33 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2000/tokenizer_config.json
05/30/2024 20:50:33 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2000/special_tokens_map.json
05/30/2024 20:51:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6352, 'learning_rate': 7.1926e-06, 'epoch': 3.76}
05/30/2024 20:52:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5233, 'learning_rate': 7.0895e-06, 'epoch': 3.77}
05/30/2024 20:53:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5370, 'learning_rate': 6.9870e-06, 'epoch': 3.78}
05/30/2024 20:54:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5674, 'learning_rate': 6.8851e-06, 'epoch': 3.79}
05/30/2024 20:55:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5750, 'learning_rate': 6.7839e-06, 'epoch': 3.80}
05/30/2024 20:57:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5833, 'learning_rate': 6.6833e-06, 'epoch': 3.81}
05/30/2024 20:58:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5944, 'learning_rate': 6.5833e-06, 'epoch': 3.82}
05/30/2024 20:59:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5608, 'learning_rate': 6.4840e-06, 'epoch': 3.83}
05/30/2024 21:00:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5966, 'learning_rate': 6.3853e-06, 'epoch': 3.83}
05/30/2024 21:01:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5351, 'learning_rate': 6.2872e-06, 'epoch': 3.84}
05/30/2024 21:02:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6111, 'learning_rate': 6.1898e-06, 'epoch': 3.85}
05/30/2024 21:03:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5730, 'learning_rate': 6.0931e-06, 'epoch': 3.86}
05/30/2024 21:04:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5565, 'learning_rate': 5.9970e-06, 'epoch': 3.87}
05/30/2024 21:05:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5809, 'learning_rate': 5.9016e-06, 'epoch': 3.88}
05/30/2024 21:06:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6388, 'learning_rate': 5.8069e-06, 'epoch': 3.89}
05/30/2024 21:07:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5764, 'learning_rate': 5.7128e-06, 'epoch': 3.90}
05/30/2024 21:08:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5478, 'learning_rate': 5.6194e-06, 'epoch': 3.91}
05/30/2024 21:09:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5565, 'learning_rate': 5.5266e-06, 'epoch': 3.92}
05/30/2024 21:10:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6147, 'learning_rate': 5.4345e-06, 'epoch': 3.93}
05/30/2024 21:11:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6056, 'learning_rate': 5.3432e-06, 'epoch': 3.94}
05/30/2024 21:11:39 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2100
05/30/2024 21:11:40 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2100/tokenizer_config.json
05/30/2024 21:11:40 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2100/special_tokens_map.json
05/30/2024 21:12:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5551, 'learning_rate': 5.2524e-06, 'epoch': 3.95}
05/30/2024 21:13:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5860, 'learning_rate': 5.1624e-06, 'epoch': 3.96}
05/30/2024 21:14:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.5612, 'learning_rate': 5.0731e-06, 'epoch': 3.97}
05/30/2024 21:16:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6115, 'learning_rate': 4.9845e-06, 'epoch': 3.98}
05/30/2024 21:17:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5648, 'learning_rate': 4.8965e-06, 'epoch': 3.98}
05/30/2024 21:18:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5778, 'learning_rate': 4.8093e-06, 'epoch': 3.99}
05/30/2024 21:19:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5784, 'learning_rate': 4.7227e-06, 'epoch': 4.00}
05/30/2024 21:20:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6259, 'learning_rate': 4.6369e-06, 'epoch': 4.01}
05/30/2024 21:21:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5609, 'learning_rate': 4.5518e-06, 'epoch': 4.02}
05/30/2024 21:22:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5915, 'learning_rate': 4.4673e-06, 'epoch': 4.03}
05/30/2024 21:23:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5751, 'learning_rate': 4.3836e-06, 'epoch': 4.04}
05/30/2024 21:24:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5644, 'learning_rate': 4.3006e-06, 'epoch': 4.05}
05/30/2024 21:25:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5862, 'learning_rate': 4.2184e-06, 'epoch': 4.06}
05/30/2024 21:26:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5945, 'learning_rate': 4.1368e-06, 'epoch': 4.07}
05/30/2024 21:27:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5769, 'learning_rate': 4.0560e-06, 'epoch': 4.08}
05/30/2024 21:28:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5359, 'learning_rate': 3.9759e-06, 'epoch': 4.09}
05/30/2024 21:29:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5494, 'learning_rate': 3.8965e-06, 'epoch': 4.10}
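[Editor's note: each checkpoint-NNNN directory saved above contains the trained LoRA adapter weights alongside the tokenizer files, so an intermediate checkpoint can be evaluated without retraining. A minimal loading sketch, assuming the checkpoint follows the standard PEFT adapter layout that LLaMA-Factory writes for LoRA runs.]

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "/datas/huggingface/Yi-1.5-6B-Chat"
CKPT = "/datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2100"

tokenizer = AutoTokenizer.from_pretrained(CKPT)  # tokenizer files are saved with each checkpoint
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, CKPT)   # attaches the LoRA adapter to the frozen base
model.eval()
```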
05/30/2024 21:30:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5906, 'learning_rate': 3.8179e-06, 'epoch': 4.11}
05/30/2024 21:31:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5814, 'learning_rate': 3.7400e-06, 'epoch': 4.12}
05/30/2024 21:32:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5602, 'learning_rate': 3.6629e-06, 'epoch': 4.13}
05/30/2024 21:32:47 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2200
05/30/2024 21:32:47 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2200/tokenizer_config.json
05/30/2024 21:32:47 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2200/special_tokens_map.json
05/30/2024 21:33:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5594, 'learning_rate': 3.5864e-06, 'epoch': 4.14}
05/30/2024 21:34:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5381, 'learning_rate': 3.5108e-06, 'epoch': 4.14}
05/30/2024 21:35:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5692, 'learning_rate': 3.4358e-06, 'epoch': 4.15}
05/30/2024 21:36:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5712, 'learning_rate': 3.3617e-06, 'epoch': 4.16}
05/30/2024 21:37:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5560, 'learning_rate': 3.2882e-06, 'epoch': 4.17}
05/30/2024 21:38:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6062, 'learning_rate': 3.2156e-06, 'epoch': 4.18}
05/30/2024 21:40:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5812, 'learning_rate': 3.1436e-06, 'epoch': 4.19}
05/30/2024 21:41:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6006, 'learning_rate': 3.0725e-06, 'epoch': 4.20}
05/30/2024 21:42:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5371, 'learning_rate': 3.0021e-06, 'epoch': 4.21}
05/30/2024 21:43:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6149, 'learning_rate': 2.9325e-06, 'epoch': 4.22}
05/30/2024 21:44:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5559, 'learning_rate': 2.8636e-06, 'epoch': 4.23}
05/30/2024 21:45:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5532, 'learning_rate': 2.7955e-06, 'epoch': 4.24}
05/30/2024 21:46:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5868, 'learning_rate': 2.7282e-06, 'epoch': 4.25}
05/30/2024 21:47:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6365, 'learning_rate': 2.6616e-06, 'epoch': 4.26}
05/30/2024 21:48:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5630, 'learning_rate': 2.5959e-06, 'epoch': 4.27}
05/30/2024 21:49:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5658, 'learning_rate': 2.5309e-06, 'epoch': 4.28}
05/30/2024 21:50:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5686, 'learning_rate': 2.4667e-06, 'epoch': 4.29}
05/30/2024 21:51:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5953, 'learning_rate': 2.4032e-06, 'epoch': 4.29}
05/30/2024 21:52:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5366, 'learning_rate': 2.3406e-06, 'epoch': 4.30}
05/30/2024 21:53:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5635, 'learning_rate': 2.2787e-06, 'epoch': 4.31}
05/30/2024 21:53:48 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2300
05/30/2024 21:53:48 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2300/tokenizer_config.json
05/30/2024 21:53:48 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2300/special_tokens_map.json
05/30/2024 21:54:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5339, 'learning_rate': 2.2176e-06, 'epoch': 4.32}
05/30/2024 21:55:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5724, 'learning_rate': 2.1574e-06, 'epoch': 4.33}
05/30/2024 21:56:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5771, 'learning_rate': 2.0979e-06, 'epoch': 4.34}
05/30/2024 21:57:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5428, 'learning_rate': 2.0392e-06, 'epoch': 4.35}
05/30/2024 21:59:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5513, 'learning_rate': 1.9813e-06, 'epoch': 4.36}
05/30/2024 22:00:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5488, 'learning_rate': 1.9242e-06, 'epoch': 4.37}
05/30/2024 22:01:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5583, 'learning_rate': 1.8679e-06, 'epoch': 4.38}
05/30/2024 22:02:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5442, 'learning_rate': 1.8124e-06, 'epoch': 4.39}
05/30/2024 22:03:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5822, 'learning_rate': 1.7578e-06, 'epoch': 4.40}
05/30/2024 22:04:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5623, 'learning_rate': 1.7039e-06, 'epoch': 4.41}
05/30/2024 22:05:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6180, 'learning_rate': 1.6508e-06, 'epoch': 4.42}
05/30/2024 22:06:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5660, 'learning_rate': 1.5986e-06, 'epoch': 4.43}
05/30/2024 22:07:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6019, 'learning_rate': 1.5471e-06, 'epoch': 4.44}
05/30/2024 22:08:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5318, 'learning_rate': 1.4965e-06, 'epoch': 4.44}
05/30/2024 22:09:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5677, 'learning_rate': 1.4467e-06, 'epoch': 4.45}
05/30/2024 22:10:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5724, 'learning_rate': 1.3977e-06, 'epoch': 4.46}
05/30/2024 22:11:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5622, 'learning_rate': 1.3495e-06, 'epoch': 4.47}
05/30/2024 22:12:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5610, 'learning_rate': 1.3022e-06, 'epoch': 4.48}
05/30/2024 22:14:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6054, 'learning_rate': 1.2557e-06, 'epoch': 4.49}
05/30/2024 22:15:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5554, 'learning_rate': 1.2100e-06, 'epoch': 4.50}
05/30/2024 22:15:01 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2400
05/30/2024 22:15:01 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2400/tokenizer_config.json
05/30/2024 22:15:01 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2400/special_tokens_map.json
05/30/2024 22:16:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5349, 'learning_rate': 1.1651e-06, 'epoch': 4.51}
05/30/2024 22:17:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6766, 'learning_rate': 1.1210e-06, 'epoch': 4.52}
05/30/2024 22:18:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5531, 'learning_rate': 1.0778e-06, 'epoch': 4.53}
05/30/2024 22:19:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5525, 'learning_rate': 1.0354e-06, 'epoch': 4.54}
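[Editor's note: the learning-rate column is consistent with a cosine decay from the initial 5e-5 down to zero over the run's 2,665 optimization steps, with no apparent warmup; this is the closed form the transformers cosine scheduler computes. A sanity-check sketch, with the constants taken from the startup log.]

```python
import math

BASE_LR = 5e-5      # initial learning rate from the startup log
TOTAL_STEPS = 2665  # total optimization steps from the startup log

def cosine_lr(step: int) -> float:
    """Cosine decay to zero with no warmup."""
    progress = step / TOTAL_STEPS
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * progress))

print(f"{cosine_lr(2400):.4e}")  # ~1.2101e-06, matching the 1.2100e-06 logged at checkpoint-2400
```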
05/30/2024 22:20:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5768, 'learning_rate': 9.9389e-07, 'epoch': 4.55}
05/30/2024 22:21:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5758, 'learning_rate': 9.5317e-07, 'epoch': 4.56}
05/30/2024 22:22:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6414, 'learning_rate': 9.1329e-07, 'epoch': 4.57}
05/30/2024 22:23:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5698, 'learning_rate': 8.7424e-07, 'epoch': 4.58}
05/30/2024 22:24:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5753, 'learning_rate': 8.3604e-07, 'epoch': 4.59}
05/30/2024 22:25:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6198, 'learning_rate': 7.9867e-07, 'epoch': 4.59}
05/30/2024 22:26:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6005, 'learning_rate': 7.6214e-07, 'epoch': 4.60}
05/30/2024 22:27:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5518, 'learning_rate': 7.2645e-07, 'epoch': 4.61}
05/30/2024 22:28:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5640, 'learning_rate': 6.9161e-07, 'epoch': 4.62}
05/30/2024 22:29:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6191, 'learning_rate': 6.5761e-07, 'epoch': 4.63}
05/30/2024 22:30:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5623, 'learning_rate': 6.2446e-07, 'epoch': 4.64}
05/30/2024 22:31:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5714, 'learning_rate': 5.9216e-07, 'epoch': 4.65}
05/30/2024 22:33:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5719, 'learning_rate': 5.6070e-07, 'epoch': 4.66}
05/30/2024 22:34:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5760, 'learning_rate': 5.3009e-07, 'epoch': 4.67}
05/30/2024 22:35:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5652, 'learning_rate': 5.0033e-07, 'epoch': 4.68}
05/30/2024 22:36:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6043, 'learning_rate': 4.7143e-07, 'epoch': 4.69}
05/30/2024 22:36:10 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2500
05/30/2024 22:36:10 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2500/tokenizer_config.json
05/30/2024 22:36:10 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2500/special_tokens_map.json
05/30/2024 22:37:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5770, 'learning_rate': 4.4337e-07, 'epoch': 4.70}
05/30/2024 22:38:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6257, 'learning_rate': 4.1617e-07, 'epoch': 4.71}
05/30/2024 22:39:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5441, 'learning_rate': 3.8982e-07, 'epoch': 4.72}
05/30/2024 22:40:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6071, 'learning_rate': 3.6433e-07, 'epoch': 4.73}
05/30/2024 22:41:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5409, 'learning_rate': 3.3969e-07, 'epoch': 4.74}
05/30/2024 22:42:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5743, 'learning_rate': 3.1591e-07, 'epoch': 4.74}
05/30/2024 22:43:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5713, 'learning_rate': 2.9299e-07, 'epoch': 4.75}
05/30/2024 22:44:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5428, 'learning_rate': 2.7093e-07, 'epoch': 4.76}
05/30/2024 22:45:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6179, 'learning_rate': 2.4972e-07, 'epoch': 4.77}
05/30/2024 22:46:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5665, 'learning_rate': 2.2937e-07, 'epoch': 4.78}
05/30/2024 22:47:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6034, 'learning_rate': 2.0989e-07, 'epoch': 4.79}
05/30/2024 22:48:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6021, 'learning_rate': 1.9127e-07, 'epoch': 4.80}
05/30/2024 22:49:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5527, 'learning_rate': 1.7351e-07, 'epoch': 4.81}
05/30/2024 22:51:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5524, 'learning_rate': 1.5661e-07, 'epoch': 4.82}
05/30/2024 22:52:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5985, 'learning_rate': 1.4057e-07, 'epoch': 4.83}
05/30/2024 22:53:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5791, 'learning_rate': 1.2540e-07, 'epoch': 4.84}
05/30/2024 22:54:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5136, 'learning_rate': 1.1109e-07, 'epoch': 4.85}
05/30/2024 22:55:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5361, 'learning_rate': 9.7646e-08, 'epoch': 4.86}
05/30/2024 22:56:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5601, 'learning_rate': 8.5068e-08, 'epoch': 4.87}
05/30/2024 22:57:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5733, 'learning_rate': 7.3355e-08, 'epoch': 4.88}
05/30/2024 22:57:27 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2600
05/30/2024 22:57:27 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2600/tokenizer_config.json
05/30/2024 22:57:27 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/checkpoint-2600/special_tokens_map.json
05/30/2024 22:58:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6350, 'learning_rate': 6.2508e-08, 'epoch': 4.89}
05/30/2024 22:59:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5657, 'learning_rate': 5.2528e-08, 'epoch': 4.89}
05/30/2024 23:00:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5814, 'learning_rate': 4.3414e-08, 'epoch': 4.90}
05/30/2024 23:01:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5382, 'learning_rate': 3.5167e-08, 'epoch': 4.91}
05/30/2024 23:02:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5380, 'learning_rate': 2.7788e-08, 'epoch': 4.92}
05/30/2024 23:03:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5875, 'learning_rate': 2.1276e-08, 'epoch': 4.93}
05/30/2024 23:05:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5799, 'learning_rate': 1.5632e-08, 'epoch': 4.94}
05/30/2024 23:06:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5630, 'learning_rate': 1.0856e-08, 'epoch': 4.95}
05/30/2024 23:07:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5441, 'learning_rate': 6.9479e-09, 'epoch': 4.96}
05/30/2024 23:08:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5504, 'learning_rate': 3.9083e-09, 'epoch': 4.97}
05/30/2024 23:09:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6207, 'learning_rate': 1.7370e-09, 'epoch': 4.98}
05/30/2024 23:10:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5837, 'learning_rate': 4.3426e-10, 'epoch': 4.99}
05/30/2024 23:11:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5759, 'learning_rate': 0.0000e+00, 'epoch': 5.00}
05/30/2024 23:11:30 - INFO - transformers.trainer - Training completed. Do not forget to share your model on huggingface.co/models =)
05/30/2024 23:11:30 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat
05/30/2024 23:11:30 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/tokenizer_config.json
05/30/2024 23:11:30 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat/special_tokens_map.json
05/30/2024 23:11:30 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
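[Editor's note: training ends with the adapter and tokenizer written to the output directory. The final modelcard message is benign: the auto-generated model card is skipped because the result entry lacks dataset/metric fields, which does not affect the saved weights. To export a standalone model, the LoRA adapter can be folded into the base weights with PEFT; a minimal sketch, where the export directory name is hypothetical.]

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "/datas/huggingface/Yi-1.5-6B-Chat"
ADAPTER = "/datas/wangm/LLM4LangGPT/output/Yi-1.5-6B-Chat"  # final save directory from the log
OUT = "Yi-1.5-6B-Chat-langgpt-merged"                       # hypothetical export directory

model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, ADAPTER)
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained(OUT)
AutoTokenizer.from_pretrained(ADAPTER).save_pretrained(OUT)
```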