|
05/29/2024 16:23:39 - INFO - transformers.tokenization_utils_base - loading file tokenizer.model |
|
|
|
05/29/2024 16:23:39 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json |
|
|
|
05/29/2024 16:23:39 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json |
|
|
|
05/29/2024 16:23:39 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json |
|
|
|
05/29/2024 16:23:39 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json |
|
|
|
05/29/2024 16:23:39 - INFO - llmtuner.data.template - Add pad token: </s> |
|
|
|
05/29/2024 16:23:39 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/LangGPT_community.jsonl... |
|
|
|
05/29/2024 16:23:39 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json. |
|
|
|
05/29/2024 16:23:42 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_alpaca.jsonl... |
|
|
|
05/29/2024 16:23:42 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json. |
|
|
|
05/29/2024 16:23:43 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_seed.jsonl... |
|
|
|
05/29/2024 16:23:43 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json. |
|
|
|
05/29/2024 16:23:59 - INFO - transformers.configuration_utils - loading configuration file /datas/huggingface/Mistral-7B-Instruct-v0.1/config.json |
|
|
|
05/29/2024 16:23:59 - INFO - transformers.configuration_utils - Model config MistralConfig { |
|
"_name_or_path": "/datas/huggingface/Mistral-7B-Instruct-v0.1", |
|
"architectures": [ |
|
"MistralForCausalLM" |
|
], |
|
"attention_dropout": 0.0, |
|
"bos_token_id": 1, |
|
"eos_token_id": 2, |
|
"hidden_act": "silu", |
|
"hidden_size": 4096, |
|
"initializer_range": 0.02, |
|
"intermediate_size": 14336, |
|
"max_position_embeddings": 32768, |
|
"model_type": "mistral", |
|
"num_attention_heads": 32, |
|
"num_hidden_layers": 32, |
|
"num_key_value_heads": 8, |
|
"rms_norm_eps": 1e-05, |
|
"rope_theta": 10000.0, |
|
"sliding_window": 4096, |
|
"tie_word_embeddings": false, |
|
"torch_dtype": "bfloat16", |
|
"transformers_version": "4.40.2", |
|
"use_cache": true, |
|
"vocab_size": 32000 |
|
} |
|
|
|
|
|
05/29/2024 16:23:59 - INFO - transformers.modeling_utils - loading weights file /datas/huggingface/Mistral-7B-Instruct-v0.1/pytorch_model.bin.index.json |
|
|
|
05/29/2024 16:23:59 - INFO - transformers.modeling_utils - Instantiating MistralForCausalLM model under default dtype torch.float16. |
|
|
|
05/29/2024 16:23:59 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig { |
|
"bos_token_id": 1, |
|
"eos_token_id": 2 |
|
} |
|
|
|
|
|
05/29/2024 16:26:32 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing MistralForCausalLM. |
|
|
|
|
|
05/29/2024 16:26:32 - INFO - transformers.modeling_utils - All the weights of MistralForCausalLM were initialized from the model checkpoint at /datas/huggingface/Mistral-7B-Instruct-v0.1. |
|
If your task is similar to the task the model of the checkpoint was trained on, you can already use MistralForCausalLM for predictions without further training. |
|
|
|
05/29/2024 16:26:32 - INFO - transformers.generation.configuration_utils - loading configuration file /datas/huggingface/Mistral-7B-Instruct-v0.1/generation_config.json |
|
|
|
05/29/2024 16:26:32 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig { |
|
"bos_token_id": 1, |
|
"eos_token_id": 2 |
|
} |
|
|
|
|
|
05/29/2024 16:26:32 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled. |
|
|
|
05/29/2024 16:26:32 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference. |
|
|
|
05/29/2024 16:26:32 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA |
|
|
|
05/29/2024 16:26:33 - INFO - llmtuner.model.loader - trainable params: 3407872 || all params: 7245139968 || trainable%: 0.0470 |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - Using auto half precision backend |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - ***** Running training ***** |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - Num examples = 8,531 |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - Num Epochs = 10 |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - Instantaneous batch size per device = 2 |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16 |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - Gradient Accumulation steps = 8 |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - Total optimization steps = 5,330 |
|
|
|
05/29/2024 16:26:33 - INFO - transformers.trainer - Number of trainable parameters = 3,407,872 |
|
|
|
05/29/2024 16:28:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.8677, 'learning_rate': 5.0000e-05, 'epoch': 0.01} |
|
|
|
05/29/2024 16:30:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.8147, 'learning_rate': 5.0000e-05, 'epoch': 0.02} |
|
|
|
05/29/2024 16:32:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.7416, 'learning_rate': 4.9999e-05, 'epoch': 0.03} |
|
|
|
05/29/2024 16:33:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7843, 'learning_rate': 4.9998e-05, 'epoch': 0.04} |
|
|
|
05/29/2024 16:35:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.7553, 'learning_rate': 4.9997e-05, 'epoch': 0.05} |
|
|
|
05/29/2024 16:37:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.7018, 'learning_rate': 4.9996e-05, 'epoch': 0.06} |
|
|
|
05/29/2024 16:39:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.7141, 'learning_rate': 4.9995e-05, 'epoch': 0.07} |
|
|
|
05/29/2024 16:41:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7257, 'learning_rate': 4.9993e-05, 'epoch': 0.08} |
|
|
|
05/29/2024 16:43:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.7492, 'learning_rate': 4.9991e-05, 'epoch': 0.08} |
|
|
|
05/29/2024 16:44:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6998, 'learning_rate': 4.9989e-05, 'epoch': 0.09} |
|
|
|
05/29/2024 16:46:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.7014, 'learning_rate': 4.9987e-05, 'epoch': 0.10} |
|
|
|
05/29/2024 16:48:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6824, 'learning_rate': 4.9984e-05, 'epoch': 0.11} |
|
|
|
05/29/2024 16:50:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6789, 'learning_rate': 4.9982e-05, 'epoch': 0.12} |
|
|
|
05/29/2024 16:52:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6439, 'learning_rate': 4.9979e-05, 'epoch': 0.13} |
|
|
|
05/29/2024 16:54:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6833, 'learning_rate': 4.9976e-05, 'epoch': 0.14} |
|
|
|
05/29/2024 16:55:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6264, 'learning_rate': 4.9972e-05, 'epoch': 0.15} |
|
|
|
05/29/2024 16:57:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6510, 'learning_rate': 4.9969e-05, 'epoch': 0.16} |
|
|
|
05/29/2024 16:59:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6575, 'learning_rate': 4.9965e-05, 'epoch': 0.17} |
|
|
|
05/29/2024 17:01:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6336, 'learning_rate': 4.9961e-05, 'epoch': 0.18} |
|
|
|
05/29/2024 17:02:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6131, 'learning_rate': 4.9957e-05, 'epoch': 0.19} |
|
|
|
05/29/2024 17:02:59 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-100 |
|
|
|
05/29/2024 17:02:59 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-100/tokenizer_config.json |
|
|
|
05/29/2024 17:02:59 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-100/special_tokens_map.json |
|
|
|
05/29/2024 17:04:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6620, 'learning_rate': 4.9952e-05, 'epoch': 0.20} |
|
|
|
05/29/2024 17:06:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6428, 'learning_rate': 4.9947e-05, 'epoch': 0.21} |
|
|
|
05/29/2024 17:08:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6605, 'learning_rate': 4.9943e-05, 'epoch': 0.22} |
|
|
|
05/29/2024 17:10:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6526, 'learning_rate': 4.9937e-05, 'epoch': 0.23} |
|
|
|
05/29/2024 17:12:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6534, 'learning_rate': 4.9932e-05, 'epoch': 0.23} |
|
|
|
05/29/2024 17:14:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6343, 'learning_rate': 4.9927e-05, 'epoch': 0.24} |
|
|
|
05/29/2024 17:15:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6664, 'learning_rate': 4.9921e-05, 'epoch': 0.25} |
|
|
|
05/29/2024 17:17:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6110, 'learning_rate': 4.9915e-05, 'epoch': 0.26} |
|
|
|
05/29/2024 17:19:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6259, 'learning_rate': 4.9909e-05, 'epoch': 0.27} |
|
|
|
05/29/2024 17:21:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6398, 'learning_rate': 4.9902e-05, 'epoch': 0.28} |
|
|
|
05/29/2024 17:23:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6155, 'learning_rate': 4.9896e-05, 'epoch': 0.29} |
|
|
|
05/29/2024 17:25:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6121, 'learning_rate': 4.9889e-05, 'epoch': 0.30} |
|
|
|
05/29/2024 17:26:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6582, 'learning_rate': 4.9882e-05, 'epoch': 0.31} |
|
|
|
05/29/2024 17:28:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6403, 'learning_rate': 4.9875e-05, 'epoch': 0.32} |
|
|
|
05/29/2024 17:30:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6339, 'learning_rate': 4.9867e-05, 'epoch': 0.33} |
|
|
|
05/29/2024 17:32:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6206, 'learning_rate': 4.9859e-05, 'epoch': 0.34} |
|
|
|
05/29/2024 17:34:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6357, 'learning_rate': 4.9852e-05, 'epoch': 0.35} |
|
|
|
05/29/2024 17:36:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5994, 'learning_rate': 4.9843e-05, 'epoch': 0.36} |
|
|
|
05/29/2024 17:38:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6596, 'learning_rate': 4.9835e-05, 'epoch': 0.37} |
|
|
|
05/29/2024 17:40:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6399, 'learning_rate': 4.9826e-05, 'epoch': 0.38} |
|
|
|
05/29/2024 17:40:01 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-200 |
|
|
|
05/29/2024 17:40:01 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-200/tokenizer_config.json |
|
|
|
05/29/2024 17:40:01 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-200/special_tokens_map.json |
|
|
|
05/29/2024 17:42:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6146, 'learning_rate': 4.9818e-05, 'epoch': 0.38} |
|
|
|
05/29/2024 17:43:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6333, 'learning_rate': 4.9809e-05, 'epoch': 0.39} |
|
|
|
05/29/2024 17:45:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6117, 'learning_rate': 4.9800e-05, 'epoch': 0.40} |
|
|
|
05/29/2024 17:47:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5990, 'learning_rate': 4.9790e-05, 'epoch': 0.41} |
|
|
|
05/29/2024 17:49:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5964, 'learning_rate': 4.9780e-05, 'epoch': 0.42} |
|
|
|
05/29/2024 17:51:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5823, 'learning_rate': 4.9771e-05, 'epoch': 0.43} |
|
|
|
05/29/2024 17:53:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6277, 'learning_rate': 4.9761e-05, 'epoch': 0.44} |
|
|
|
05/29/2024 17:54:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6065, 'learning_rate': 4.9750e-05, 'epoch': 0.45} |
|
|
|
05/29/2024 17:56:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6246, 'learning_rate': 4.9740e-05, 'epoch': 0.46} |
|
|
|
05/29/2024 17:58:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5986, 'learning_rate': 4.9729e-05, 'epoch': 0.47} |
|
|
|
05/29/2024 18:00:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6198, 'learning_rate': 4.9718e-05, 'epoch': 0.48} |
|
|
|
05/29/2024 18:02:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5762, 'learning_rate': 4.9707e-05, 'epoch': 0.49} |
|
|
|
05/29/2024 18:04:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5778, 'learning_rate': 4.9696e-05, 'epoch': 0.50} |
|
|
|
05/29/2024 18:05:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6195, 'learning_rate': 4.9684e-05, 'epoch': 0.51} |
|
|
|
05/29/2024 18:07:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5750, 'learning_rate': 4.9672e-05, 'epoch': 0.52} |
|
|
|
05/29/2024 18:09:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6162, 'learning_rate': 4.9660e-05, 'epoch': 0.53} |
|
|
|
05/29/2024 18:11:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5918, 'learning_rate': 4.9648e-05, 'epoch': 0.53} |
|
|
|
05/29/2024 18:13:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5740, 'learning_rate': 4.9636e-05, 'epoch': 0.54} |
|
|
|
05/29/2024 18:14:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6111, 'learning_rate': 4.9623e-05, 'epoch': 0.55} |
|
|
|
05/29/2024 18:16:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6199, 'learning_rate': 4.9610e-05, 'epoch': 0.56} |
|
|
|
05/29/2024 18:16:46 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-300 |
|
|
|
05/29/2024 18:16:46 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-300/tokenizer_config.json |
|
|
|
05/29/2024 18:16:46 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-300/special_tokens_map.json |
|
|
|
05/29/2024 18:18:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5866, 'learning_rate': 4.9597e-05, 'epoch': 0.57} |
|
|
|
05/29/2024 18:20:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5915, 'learning_rate': 4.9584e-05, 'epoch': 0.58} |
|
|
|
05/29/2024 18:22:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5865, 'learning_rate': 4.9570e-05, 'epoch': 0.59} |
|
|
|
05/29/2024 18:24:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6528, 'learning_rate': 4.9557e-05, 'epoch': 0.60} |
|
|
|
05/29/2024 18:25:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5969, 'learning_rate': 4.9543e-05, 'epoch': 0.61} |
|
|
|
05/29/2024 18:27:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5814, 'learning_rate': 4.9529e-05, 'epoch': 0.62} |
|
|
|
05/29/2024 18:29:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6264, 'learning_rate': 4.9514e-05, 'epoch': 0.63} |
|
|
|
05/29/2024 18:31:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5847, 'learning_rate': 4.9500e-05, 'epoch': 0.64} |
|
|
|
05/29/2024 18:33:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6255, 'learning_rate': 4.9485e-05, 'epoch': 0.65} |
|
|
|
05/29/2024 18:35:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6465, 'learning_rate': 4.9470e-05, 'epoch': 0.66} |
|
|
|
05/29/2024 18:36:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5590, 'learning_rate': 4.9455e-05, 'epoch': 0.67} |
|
|
|
05/29/2024 18:38:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6202, 'learning_rate': 4.9439e-05, 'epoch': 0.68} |
|
|
|
05/29/2024 18:40:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5514, 'learning_rate': 4.9424e-05, 'epoch': 0.68} |
|
|
|
05/29/2024 18:42:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5831, 'learning_rate': 4.9408e-05, 'epoch': 0.69} |
|
|
|
05/29/2024 18:44:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6020, 'learning_rate': 4.9392e-05, 'epoch': 0.70} |
|
|
|
05/29/2024 18:46:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5955, 'learning_rate': 4.9376e-05, 'epoch': 0.71} |
|
|
|
05/29/2024 18:47:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5610, 'learning_rate': 4.9359e-05, 'epoch': 0.72} |
|
|
|
05/29/2024 18:49:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5894, 'learning_rate': 4.9342e-05, 'epoch': 0.73} |
|
|
|
05/29/2024 18:51:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5954, 'learning_rate': 4.9325e-05, 'epoch': 0.74} |
|
|
|
05/29/2024 18:53:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6117, 'learning_rate': 4.9308e-05, 'epoch': 0.75} |
|
|
|
05/29/2024 18:53:15 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-400 |
|
|
|
05/29/2024 18:53:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-400/tokenizer_config.json |
|
|
|
05/29/2024 18:53:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-400/special_tokens_map.json |
|
|
|
05/29/2024 18:55:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5808, 'learning_rate': 4.9291e-05, 'epoch': 0.76} |
|
|
|
05/29/2024 18:56:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5858, 'learning_rate': 4.9274e-05, 'epoch': 0.77} |
|
|
|
05/29/2024 18:58:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5747, 'learning_rate': 4.9256e-05, 'epoch': 0.78} |
|
|
|
05/29/2024 19:00:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6057, 'learning_rate': 4.9238e-05, 'epoch': 0.79} |
|
|
|
05/29/2024 19:02:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5915, 'learning_rate': 4.9220e-05, 'epoch': 0.80} |
|
|
|
05/29/2024 19:04:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5727, 'learning_rate': 4.9201e-05, 'epoch': 0.81} |
|
|
|
05/29/2024 19:06:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5274, 'learning_rate': 4.9183e-05, 'epoch': 0.82} |
|
|
|
05/29/2024 19:08:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5488, 'learning_rate': 4.9164e-05, 'epoch': 0.83} |
|
|
|
05/29/2024 19:09:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5947, 'learning_rate': 4.9145e-05, 'epoch': 0.83} |
|
|
|
05/29/2024 19:11:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5809, 'learning_rate': 4.9126e-05, 'epoch': 0.84} |
|
|
|
05/29/2024 19:13:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5694, 'learning_rate': 4.9106e-05, 'epoch': 0.85} |
|
|
|
05/29/2024 19:15:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6110, 'learning_rate': 4.9087e-05, 'epoch': 0.86} |
|
|
|
05/29/2024 19:17:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5837, 'learning_rate': 4.9067e-05, 'epoch': 0.87} |
|
|
|
05/29/2024 19:18:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6068, 'learning_rate': 4.9047e-05, 'epoch': 0.88} |
|
|
|
05/29/2024 19:20:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6247, 'learning_rate': 4.9027e-05, 'epoch': 0.89} |
|
|
|
05/29/2024 19:22:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5827, 'learning_rate': 4.9006e-05, 'epoch': 0.90} |
|
|
|
05/29/2024 19:24:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5582, 'learning_rate': 4.8985e-05, 'epoch': 0.91} |
|
|
|
05/29/2024 19:26:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5915, 'learning_rate': 4.8965e-05, 'epoch': 0.92} |
|
|
|
05/29/2024 19:28:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6365, 'learning_rate': 4.8943e-05, 'epoch': 0.93} |
|
|
|
05/29/2024 19:29:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5784, 'learning_rate': 4.8922e-05, 'epoch': 0.94} |
|
|
|
05/29/2024 19:29:49 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-500 |
|
|
|
05/29/2024 19:29:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-500/tokenizer_config.json |
|
|
|
05/29/2024 19:29:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-500/special_tokens_map.json |
|
|
|
05/29/2024 19:31:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5929, 'learning_rate': 4.8901e-05, 'epoch': 0.95} |
|
|
|
05/29/2024 19:33:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6260, 'learning_rate': 4.8879e-05, 'epoch': 0.96} |
|
|
|
05/29/2024 19:35:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5658, 'learning_rate': 4.8857e-05, 'epoch': 0.97} |
|
|
|
05/29/2024 19:37:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5631, 'learning_rate': 4.8835e-05, 'epoch': 0.98} |
|
|
|
05/29/2024 19:39:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5816, 'learning_rate': 4.8813e-05, 'epoch': 0.98} |
|
|
|
05/29/2024 19:40:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5810, 'learning_rate': 4.8790e-05, 'epoch': 0.99} |
|
|
|
05/29/2024 19:42:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5467, 'learning_rate': 4.8767e-05, 'epoch': 1.00} |
|
|
|
05/29/2024 19:44:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5380, 'learning_rate': 4.8744e-05, 'epoch': 1.01} |
|
|
|
05/29/2024 19:46:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5274, 'learning_rate': 4.8721e-05, 'epoch': 1.02} |
|
|
|
05/29/2024 19:48:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5977, 'learning_rate': 4.8698e-05, 'epoch': 1.03} |
|
|
|
05/29/2024 19:50:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5568, 'learning_rate': 4.8674e-05, 'epoch': 1.04} |
|
|
|
05/29/2024 19:52:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5378, 'learning_rate': 4.8650e-05, 'epoch': 1.05} |
|
|
|
05/29/2024 19:54:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5836, 'learning_rate': 4.8626e-05, 'epoch': 1.06} |
|
|
|
05/29/2024 19:55:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5818, 'learning_rate': 4.8602e-05, 'epoch': 1.07} |
|
|
|
05/29/2024 19:57:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5647, 'learning_rate': 4.8578e-05, 'epoch': 1.08} |
|
|
|
05/29/2024 19:59:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5702, 'learning_rate': 4.8553e-05, 'epoch': 1.09} |
|
|
|
05/29/2024 20:01:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5631, 'learning_rate': 4.8529e-05, 'epoch': 1.10} |
|
|
|
05/29/2024 20:02:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5508, 'learning_rate': 4.8503e-05, 'epoch': 1.11} |
|
|
|
05/29/2024 20:04:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5659, 'learning_rate': 4.8478e-05, 'epoch': 1.12} |
|
|
|
05/29/2024 20:06:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5501, 'learning_rate': 4.8453e-05, 'epoch': 1.13} |
|
|
|
05/29/2024 20:06:44 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-600 |
|
|
|
05/29/2024 20:06:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-600/tokenizer_config.json |
|
|
|
05/29/2024 20:06:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-600/special_tokens_map.json |
|
|
|
05/29/2024 20:08:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5629, 'learning_rate': 4.8427e-05, 'epoch': 1.13} |
|
|
|
05/29/2024 20:10:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5359, 'learning_rate': 4.8401e-05, 'epoch': 1.14} |
|
|
|
05/29/2024 20:12:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5294, 'learning_rate': 4.8375e-05, 'epoch': 1.15} |
|
|
|
05/29/2024 20:14:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5286, 'learning_rate': 4.8349e-05, 'epoch': 1.16} |
|
|
|
05/29/2024 20:15:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5511, 'learning_rate': 4.8323e-05, 'epoch': 1.17} |
|
|
|
05/29/2024 20:17:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5687, 'learning_rate': 4.8296e-05, 'epoch': 1.18} |
|
|
|
05/29/2024 20:19:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5589, 'learning_rate': 4.8269e-05, 'epoch': 1.19} |
|
|
|
05/29/2024 20:21:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5294, 'learning_rate': 4.8242e-05, 'epoch': 1.20} |
|
|
|
05/29/2024 20:23:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5316, 'learning_rate': 4.8215e-05, 'epoch': 1.21} |
|
|
|
05/29/2024 20:24:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5839, 'learning_rate': 4.8188e-05, 'epoch': 1.22} |
|
|
|
05/29/2024 20:26:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5622, 'learning_rate': 4.8160e-05, 'epoch': 1.23} |
|
|
|
05/29/2024 20:28:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5794, 'learning_rate': 4.8132e-05, 'epoch': 1.24} |
|
|
|
05/29/2024 20:30:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5670, 'learning_rate': 4.8104e-05, 'epoch': 1.25} |
|
|
|
05/29/2024 20:32:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6111, 'learning_rate': 4.8076e-05, 'epoch': 1.26} |
|
|
|
05/29/2024 20:34:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6040, 'learning_rate': 4.8047e-05, 'epoch': 1.27} |
|
|
|
05/29/2024 20:35:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5297, 'learning_rate': 4.8019e-05, 'epoch': 1.28} |
|
|
|
05/29/2024 20:37:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5701, 'learning_rate': 4.7990e-05, 'epoch': 1.28} |
|
|
|
05/29/2024 20:39:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5849, 'learning_rate': 4.7961e-05, 'epoch': 1.29} |
|
|
|
05/29/2024 20:41:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5795, 'learning_rate': 4.7932e-05, 'epoch': 1.30} |
|
|
|
05/29/2024 20:43:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5386, 'learning_rate': 4.7902e-05, 'epoch': 1.31} |
|
|
|
05/29/2024 20:43:12 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-700 |
|
|
|
05/29/2024 20:43:12 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-700/tokenizer_config.json |
|
|
|
05/29/2024 20:43:12 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-700/special_tokens_map.json |
|
|
|
05/29/2024 20:45:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5619, 'learning_rate': 4.7872e-05, 'epoch': 1.32} |
|
|
|
05/29/2024 20:46:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5533, 'learning_rate': 4.7843e-05, 'epoch': 1.33} |
|
|
|
05/29/2024 20:48:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5125, 'learning_rate': 4.7813e-05, 'epoch': 1.34} |
|
|
|
05/29/2024 20:50:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5577, 'learning_rate': 4.7782e-05, 'epoch': 1.35} |
|
|
|
05/29/2024 20:52:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5614, 'learning_rate': 4.7752e-05, 'epoch': 1.36} |
|
|
|
05/29/2024 20:54:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5648, 'learning_rate': 4.7721e-05, 'epoch': 1.37} |
|
|
|
05/29/2024 20:56:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5572, 'learning_rate': 4.7690e-05, 'epoch': 1.38} |
|
|
|
05/29/2024 20:57:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5295, 'learning_rate': 4.7659e-05, 'epoch': 1.39} |
|
|
|
05/29/2024 20:59:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5770, 'learning_rate': 4.7628e-05, 'epoch': 1.40} |
|
|
|
05/29/2024 21:01:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5495, 'learning_rate': 4.7597e-05, 'epoch': 1.41} |
|
|
|
05/29/2024 21:03:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5662, 'learning_rate': 4.7565e-05, 'epoch': 1.42} |
|
|
|
05/29/2024 21:05:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5910, 'learning_rate': 4.7533e-05, 'epoch': 1.43} |
|
|
|
05/29/2024 21:06:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.5720, 'learning_rate': 4.7501e-05, 'epoch': 1.43} |
|
|
|
05/29/2024 21:08:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5458, 'learning_rate': 4.7469e-05, 'epoch': 1.44} |
|
|
|
05/29/2024 21:10:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5458, 'learning_rate': 4.7437e-05, 'epoch': 1.45} |
|
|
|
05/29/2024 21:12:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5633, 'learning_rate': 4.7404e-05, 'epoch': 1.46} |
|
|
|
05/29/2024 21:14:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5552, 'learning_rate': 4.7371e-05, 'epoch': 1.47} |
|
|
|
05/29/2024 21:16:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6061, 'learning_rate': 4.7338e-05, 'epoch': 1.48} |
|
|
|
05/29/2024 21:17:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.5525, 'learning_rate': 4.7305e-05, 'epoch': 1.49} |
|
|
|
05/29/2024 21:19:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6171, 'learning_rate': 4.7272e-05, 'epoch': 1.50} |
|
|
|
05/29/2024 21:19:44 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-800 |
|
|
|
05/29/2024 21:19:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-800/tokenizer_config.json |
|
|
|
05/29/2024 21:19:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-800/special_tokens_map.json |
|
|
|
05/29/2024 21:21:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5442, 'learning_rate': 4.7238e-05, 'epoch': 1.51} |
|
|
|
05/29/2024 21:23:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5293, 'learning_rate': 4.7204e-05, 'epoch': 1.52} |
|
|
|
05/29/2024 21:24:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5362, 'learning_rate': 4.7171e-05, 'epoch': 1.53} |
|
|
|
05/29/2024 21:26:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5805, 'learning_rate': 4.7136e-05, 'epoch': 1.54} |
|
|
|
05/29/2024 21:28:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5540, 'learning_rate': 4.7102e-05, 'epoch': 1.55} |
|
|
|
05/29/2024 21:30:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5680, 'learning_rate': 4.7068e-05, 'epoch': 1.56} |
|
|
|
05/29/2024 21:32:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5074, 'learning_rate': 4.7033e-05, 'epoch': 1.57} |
|
|
|
05/29/2024 21:34:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5672, 'learning_rate': 4.6998e-05, 'epoch': 1.58} |
|
|
|
05/29/2024 21:35:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5534, 'learning_rate': 4.6963e-05, 'epoch': 1.58} |
|
|
|
05/29/2024 21:37:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5753, 'learning_rate': 4.6928e-05, 'epoch': 1.59} |
|
|
|
05/29/2024 21:39:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5789, 'learning_rate': 4.6892e-05, 'epoch': 1.60} |
|
|
|
05/29/2024 21:41:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5512, 'learning_rate': 4.6856e-05, 'epoch': 1.61} |
|
|
|
05/29/2024 21:43:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5319, 'learning_rate': 4.6820e-05, 'epoch': 1.62} |
|
|
|
05/29/2024 21:45:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5730, 'learning_rate': 4.6784e-05, 'epoch': 1.63} |
|
|
|
05/29/2024 21:46:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5372, 'learning_rate': 4.6748e-05, 'epoch': 1.64} |
|
|
|
05/29/2024 21:48:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5389, 'learning_rate': 4.6712e-05, 'epoch': 1.65} |
|
|
|
05/29/2024 21:50:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5828, 'learning_rate': 4.6675e-05, 'epoch': 1.66} |
|
|
|
05/29/2024 21:52:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5193, 'learning_rate': 4.6638e-05, 'epoch': 1.67} |
|
|
|
05/29/2024 21:54:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5626, 'learning_rate': 4.6601e-05, 'epoch': 1.68} |
|
|
|
05/29/2024 21:56:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5980, 'learning_rate': 4.6564e-05, 'epoch': 1.69} |
|
|
|
05/29/2024 21:56:17 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-900 |
|
|
|
05/29/2024 21:56:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-900/tokenizer_config.json |
|
|
|
05/29/2024 21:56:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-900/special_tokens_map.json |
|
|
|
05/29/2024 21:58:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5661, 'learning_rate': 4.6527e-05, 'epoch': 1.70} |
|
|
|
05/29/2024 22:00:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5996, 'learning_rate': 4.6489e-05, 'epoch': 1.71} |
|
|
|
05/29/2024 22:02:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5589, 'learning_rate': 4.6451e-05, 'epoch': 1.72} |
|
|
|
05/29/2024 22:04:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6435, 'learning_rate': 4.6414e-05, 'epoch': 1.73} |
|
|
|
05/29/2024 22:05:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5335, 'learning_rate': 4.6375e-05, 'epoch': 1.73} |
|
|
|
05/29/2024 22:07:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5713, 'learning_rate': 4.6337e-05, 'epoch': 1.74} |
|
|
|
05/29/2024 22:09:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5544, 'learning_rate': 4.6299e-05, 'epoch': 1.75} |
|
|
|
05/29/2024 22:11:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6001, 'learning_rate': 4.6260e-05, 'epoch': 1.76} |
|
|
|
05/29/2024 22:13:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5393, 'learning_rate': 4.6221e-05, 'epoch': 1.77} |
|
|
|
05/29/2024 22:15:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5749, 'learning_rate': 4.6182e-05, 'epoch': 1.78} |
|
|
|
05/29/2024 22:16:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5495, 'learning_rate': 4.6143e-05, 'epoch': 1.79} |
|
|
|
05/29/2024 22:18:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6157, 'learning_rate': 4.6103e-05, 'epoch': 1.80} |
|
|
|
05/29/2024 22:20:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5457, 'learning_rate': 4.6064e-05, 'epoch': 1.81} |
|
|
|
05/29/2024 22:22:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5589, 'learning_rate': 4.6024e-05, 'epoch': 1.82} |
|
|
|
05/29/2024 22:24:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5396, 'learning_rate': 4.5984e-05, 'epoch': 1.83} |
|
|
|
05/29/2024 22:26:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5272, 'learning_rate': 4.5944e-05, 'epoch': 1.84} |
|
|
|
05/29/2024 22:28:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5361, 'learning_rate': 4.5904e-05, 'epoch': 1.85} |
|
|
|
05/29/2024 22:29:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5275, 'learning_rate': 4.5863e-05, 'epoch': 1.86} |
|
|
|
05/29/2024 22:31:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5352, 'learning_rate': 4.5822e-05, 'epoch': 1.87} |
|
|
|
05/29/2024 22:33:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5327, 'learning_rate': 4.5782e-05, 'epoch': 1.88} |
|
|
|
05/29/2024 22:33:28 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1000 |
|
|
|
05/29/2024 22:33:28 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1000/tokenizer_config.json |
|
|
|
05/29/2024 22:33:28 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1000/special_tokens_map.json |
|
|
|
05/29/2024 22:35:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5183, 'learning_rate': 4.5741e-05, 'epoch': 1.88} |
|
|
|
05/29/2024 22:37:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5301, 'learning_rate': 4.5699e-05, 'epoch': 1.89} |
|
|
|
05/29/2024 22:38:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5641, 'learning_rate': 4.5658e-05, 'epoch': 1.90} |
|
|
|
05/29/2024 22:40:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5391, 'learning_rate': 4.5616e-05, 'epoch': 1.91} |
|
|
|
05/29/2024 22:42:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5367, 'learning_rate': 4.5575e-05, 'epoch': 1.92} |
|
|
|
05/29/2024 22:44:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5418, 'learning_rate': 4.5533e-05, 'epoch': 1.93} |
|
|
|
05/29/2024 22:46:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5466, 'learning_rate': 4.5491e-05, 'epoch': 1.94} |
|
|
|
05/29/2024 22:47:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5408, 'learning_rate': 4.5448e-05, 'epoch': 1.95} |
|
|
|
05/29/2024 22:49:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5790, 'learning_rate': 4.5406e-05, 'epoch': 1.96} |
|
|
|
05/29/2024 22:51:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5185, 'learning_rate': 4.5363e-05, 'epoch': 1.97} |
|
|
|
05/29/2024 22:53:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5230, 'learning_rate': 4.5320e-05, 'epoch': 1.98} |
|
|
|
05/29/2024 22:55:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5206, 'learning_rate': 4.5277e-05, 'epoch': 1.99} |
|
|
|
05/29/2024 22:57:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5457, 'learning_rate': 4.5234e-05, 'epoch': 2.00} |
|
|
|
05/29/2024 22:58:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4943, 'learning_rate': 4.5191e-05, 'epoch': 2.01} |
|
|
|
05/29/2024 23:00:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5032, 'learning_rate': 4.5147e-05, 'epoch': 2.02} |
|
|
|
05/29/2024 23:02:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5602, 'learning_rate': 4.5103e-05, 'epoch': 2.03} |
|
|
|
05/29/2024 23:04:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5449, 'learning_rate': 4.5060e-05, 'epoch': 2.03} |
|
|
|
05/29/2024 23:06:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5494, 'learning_rate': 4.5016e-05, 'epoch': 2.04} |
|
|
|
05/29/2024 23:08:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5545, 'learning_rate': 4.4971e-05, 'epoch': 2.05} |
|
|
|
05/29/2024 23:10:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5208, 'learning_rate': 4.4927e-05, 'epoch': 2.06} |
|
|
|
05/29/2024 23:10:10 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1100 |
|
|
|
05/29/2024 23:10:10 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1100/tokenizer_config.json |
|
|
|
05/29/2024 23:10:10 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1100/special_tokens_map.json |
|
|
|
05/29/2024 23:12:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4887, 'learning_rate': 4.4882e-05, 'epoch': 2.07} |
|
|
|
05/29/2024 23:13:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5130, 'learning_rate': 4.4838e-05, 'epoch': 2.08} |
|
|
|
05/29/2024 23:15:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5304, 'learning_rate': 4.4793e-05, 'epoch': 2.09} |
|
|
|
05/29/2024 23:17:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5380, 'learning_rate': 4.4748e-05, 'epoch': 2.10} |
|
|
|
05/29/2024 23:19:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5371, 'learning_rate': 4.4702e-05, 'epoch': 2.11} |
|
|
|
05/29/2024 23:21:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5467, 'learning_rate': 4.4657e-05, 'epoch': 2.12} |
|
|
|
05/29/2024 23:22:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5449, 'learning_rate': 4.4611e-05, 'epoch': 2.13} |
|
|
|
05/29/2024 23:24:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6024, 'learning_rate': 4.4565e-05, 'epoch': 2.14} |
|
|
|
05/29/2024 23:26:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5108, 'learning_rate': 4.4520e-05, 'epoch': 2.15} |
|
|
|
05/29/2024 23:28:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5024, 'learning_rate': 4.4473e-05, 'epoch': 2.16} |
|
|
|
05/29/2024 23:30:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5101, 'learning_rate': 4.4427e-05, 'epoch': 2.17} |
|
|
|
05/29/2024 23:32:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5270, 'learning_rate': 4.4381e-05, 'epoch': 2.18} |
|
|
|
05/29/2024 23:33:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5446, 'learning_rate': 4.4334e-05, 'epoch': 2.18} |
|
|
|
05/29/2024 23:35:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5749, 'learning_rate': 4.4287e-05, 'epoch': 2.19} |
|
|
|
05/29/2024 23:37:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5384, 'learning_rate': 4.4240e-05, 'epoch': 2.20} |
|
|
|
05/29/2024 23:39:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5335, 'learning_rate': 4.4193e-05, 'epoch': 2.21} |
|
|
|
05/29/2024 23:41:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5318, 'learning_rate': 4.4146e-05, 'epoch': 2.22} |
|
|
|
05/29/2024 23:43:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5133, 'learning_rate': 4.4098e-05, 'epoch': 2.23} |
|
|
|
05/29/2024 23:45:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5075, 'learning_rate': 4.4051e-05, 'epoch': 2.24} |
|
|
|
05/29/2024 23:46:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5445, 'learning_rate': 4.4003e-05, 'epoch': 2.25} |
|
|
|
05/29/2024 23:46:52 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1200 |
|
|
|
05/29/2024 23:46:52 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1200/tokenizer_config.json |
|
|
|
05/29/2024 23:46:52 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1200/special_tokens_map.json |
|
|
|
05/29/2024 23:48:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5349, 'learning_rate': 4.3955e-05, 'epoch': 2.26} |
|
|
|
05/29/2024 23:50:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5690, 'learning_rate': 4.3907e-05, 'epoch': 2.27} |
|
|
|
05/29/2024 23:52:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5112, 'learning_rate': 4.3859e-05, 'epoch': 2.28} |
|
|
|
05/29/2024 23:54:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5034, 'learning_rate': 4.3810e-05, 'epoch': 2.29} |
|
|
|
05/29/2024 23:56:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5288, 'learning_rate': 4.3762e-05, 'epoch': 2.30} |
|
|
|
05/29/2024 23:57:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5044, 'learning_rate': 4.3713e-05, 'epoch': 2.31} |
|
|
|
05/29/2024 23:59:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5372, 'learning_rate': 4.3664e-05, 'epoch': 2.32} |
|
|
|
05/30/2024 00:01:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5404, 'learning_rate': 4.3615e-05, 'epoch': 2.33} |
|
|
|
05/30/2024 00:03:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5335, 'learning_rate': 4.3565e-05, 'epoch': 2.33} |
|
|
|
05/30/2024 00:05:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5499, 'learning_rate': 4.3516e-05, 'epoch': 2.34} |
|
|
|
05/30/2024 00:07:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5526, 'learning_rate': 4.3466e-05, 'epoch': 2.35} |
|
|
|
05/30/2024 00:08:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5169, 'learning_rate': 4.3417e-05, 'epoch': 2.36} |
|
|
|
05/30/2024 00:10:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5258, 'learning_rate': 4.3367e-05, 'epoch': 2.37} |
|
|
|
05/30/2024 00:12:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5842, 'learning_rate': 4.3317e-05, 'epoch': 2.38} |
|
|
|
05/30/2024 00:14:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5026, 'learning_rate': 4.3267e-05, 'epoch': 2.39} |
|
|
|
05/30/2024 00:16:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5736, 'learning_rate': 4.3216e-05, 'epoch': 2.40} |
|
|
|
05/30/2024 00:18:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5505, 'learning_rate': 4.3166e-05, 'epoch': 2.41} |
|
|
|
05/30/2024 00:19:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5362, 'learning_rate': 4.3115e-05, 'epoch': 2.42} |
|
|
|
05/30/2024 00:21:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4937, 'learning_rate': 4.3064e-05, 'epoch': 2.43} |
|
|
|
05/30/2024 00:23:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5365, 'learning_rate': 4.3013e-05, 'epoch': 2.44} |
|
|
|
05/30/2024 00:23:34 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1300 |
|
|
|
05/30/2024 00:23:34 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1300/tokenizer_config.json |
|
|
|
05/30/2024 00:23:34 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1300/special_tokens_map.json |
|
|
|
05/30/2024 00:25:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5906, 'learning_rate': 4.2962e-05, 'epoch': 2.45} |
|
|
|
05/30/2024 00:27:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5520, 'learning_rate': 4.2911e-05, 'epoch': 2.46} |
|
|
|
05/30/2024 00:29:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5239, 'learning_rate': 4.2859e-05, 'epoch': 2.47} |
|
|
|
05/30/2024 00:31:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5117, 'learning_rate': 4.2807e-05, 'epoch': 2.48} |
|
|
|
05/30/2024 00:32:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5448, 'learning_rate': 4.2756e-05, 'epoch': 2.48} |
|
|
|
05/30/2024 00:34:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5245, 'learning_rate': 4.2704e-05, 'epoch': 2.49} |
|
|
|
05/30/2024 00:36:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5185, 'learning_rate': 4.2652e-05, 'epoch': 2.50} |
|
|
|
05/30/2024 00:38:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5493, 'learning_rate': 4.2599e-05, 'epoch': 2.51} |
|
|
|
05/30/2024 00:40:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5021, 'learning_rate': 4.2547e-05, 'epoch': 2.52} |
|
|
|
05/30/2024 00:41:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5017, 'learning_rate': 4.2494e-05, 'epoch': 2.53} |
|
|
|
05/30/2024 00:43:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5441, 'learning_rate': 4.2442e-05, 'epoch': 2.54} |
|
|
|
05/30/2024 00:45:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5431, 'learning_rate': 4.2389e-05, 'epoch': 2.55} |
|
|
|
05/30/2024 00:47:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5010, 'learning_rate': 4.2336e-05, 'epoch': 2.56} |
|
|
|
05/30/2024 00:49:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5005, 'learning_rate': 4.2283e-05, 'epoch': 2.57} |
|
|
|
05/30/2024 00:51:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5459, 'learning_rate': 4.2229e-05, 'epoch': 2.58} |
|
|
|
05/30/2024 00:53:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5321, 'learning_rate': 4.2176e-05, 'epoch': 2.59} |
|
|
|
05/30/2024 00:54:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5123, 'learning_rate': 4.2122e-05, 'epoch': 2.60} |
|
|
|
05/30/2024 00:56:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4994, 'learning_rate': 4.2069e-05, 'epoch': 2.61} |
|
|
|
05/30/2024 00:58:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5599, 'learning_rate': 4.2015e-05, 'epoch': 2.62} |
|
|
|
05/30/2024 01:00:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5245, 'learning_rate': 4.1961e-05, 'epoch': 2.63} |
|
|
|
05/30/2024 01:00:26 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1400 |
|
|
|
05/30/2024 01:00:26 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1400/tokenizer_config.json |
|
|
|
05/30/2024 01:00:26 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1400/special_tokens_map.json |
|
|
|
05/30/2024 01:02:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5195, 'learning_rate': 4.1906e-05, 'epoch': 2.63} |
|
|
|
05/30/2024 01:04:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5026, 'learning_rate': 4.1852e-05, 'epoch': 2.64} |
|
|
|
05/30/2024 01:05:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5048, 'learning_rate': 4.1798e-05, 'epoch': 2.65} |
|
|
|
05/30/2024 01:07:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5008, 'learning_rate': 4.1743e-05, 'epoch': 2.66} |
|
|
|
05/30/2024 01:09:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5291, 'learning_rate': 4.1688e-05, 'epoch': 2.67} |
|
|
|
05/30/2024 01:11:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5285, 'learning_rate': 4.1633e-05, 'epoch': 2.68} |
|
|
|
05/30/2024 01:12:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5208, 'learning_rate': 4.1578e-05, 'epoch': 2.69} |
|
|
|
05/30/2024 01:14:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5200, 'learning_rate': 4.1523e-05, 'epoch': 2.70} |
|
|
|
05/30/2024 01:16:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5586, 'learning_rate': 4.1467e-05, 'epoch': 2.71} |
|
|
|
05/30/2024 01:18:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5330, 'learning_rate': 4.1412e-05, 'epoch': 2.72} |
|
|
|
05/30/2024 01:20:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5146, 'learning_rate': 4.1356e-05, 'epoch': 2.73} |
|
|
|
05/30/2024 01:21:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5406, 'learning_rate': 4.1301e-05, 'epoch': 2.74} |
|
|
|
05/30/2024 01:23:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5477, 'learning_rate': 4.1245e-05, 'epoch': 2.75} |
|
|
|
05/30/2024 01:25:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5082, 'learning_rate': 4.1189e-05, 'epoch': 2.76} |
|
|
|
05/30/2024 01:27:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5116, 'learning_rate': 4.1132e-05, 'epoch': 2.77} |
|
|
|
05/30/2024 01:29:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5229, 'learning_rate': 4.1076e-05, 'epoch': 2.78} |
|
|
|
05/30/2024 01:31:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5392, 'learning_rate': 4.1019e-05, 'epoch': 2.78} |
|
|
|
05/30/2024 01:32:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5350, 'learning_rate': 4.0963e-05, 'epoch': 2.79} |
|
|
|
05/30/2024 01:34:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5483, 'learning_rate': 4.0906e-05, 'epoch': 2.80} |
|
|
|
05/30/2024 01:36:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5065, 'learning_rate': 4.0849e-05, 'epoch': 2.81} |
|
|
|
05/30/2024 01:36:37 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1500 |
|
|
|
05/30/2024 01:36:37 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1500/tokenizer_config.json |
|
|
|
05/30/2024 01:36:37 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1500/special_tokens_map.json |
|
|
|
05/30/2024 01:38:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4962, 'learning_rate': 4.0792e-05, 'epoch': 2.82} |
|
|
|
05/30/2024 01:40:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5153, 'learning_rate': 4.0735e-05, 'epoch': 2.83} |
|
|
|
05/30/2024 01:42:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5478, 'learning_rate': 4.0678e-05, 'epoch': 2.84} |
|
|
|
05/30/2024 01:43:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5172, 'learning_rate': 4.0620e-05, 'epoch': 2.85} |
|
|
|
05/30/2024 01:45:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5502, 'learning_rate': 4.0563e-05, 'epoch': 2.86} |
|
|
|
05/30/2024 01:47:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5052, 'learning_rate': 4.0505e-05, 'epoch': 2.87} |
|
|
|
05/30/2024 01:49:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5309, 'learning_rate': 4.0447e-05, 'epoch': 2.88} |
|
|
|
05/30/2024 01:51:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4996, 'learning_rate': 4.0389e-05, 'epoch': 2.89} |
|
|
|
05/30/2024 01:53:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5690, 'learning_rate': 4.0331e-05, 'epoch': 2.90} |
|
|
|
05/30/2024 01:54:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5168, 'learning_rate': 4.0273e-05, 'epoch': 2.91} |
|
|
|
05/30/2024 01:56:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5172, 'learning_rate': 4.0214e-05, 'epoch': 2.92} |
|
|
|
05/30/2024 01:58:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5251, 'learning_rate': 4.0156e-05, 'epoch': 2.93} |
|
|
|
05/30/2024 02:00:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5499, 'learning_rate': 4.0097e-05, 'epoch': 2.93} |
|
|
|
05/30/2024 02:02:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5518, 'learning_rate': 4.0038e-05, 'epoch': 2.94} |
|
|
|
05/30/2024 02:04:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5367, 'learning_rate': 3.9979e-05, 'epoch': 2.95} |
|
|
|
05/30/2024 02:05:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5245, 'learning_rate': 3.9920e-05, 'epoch': 2.96} |
|
|
|
05/30/2024 02:07:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4939, 'learning_rate': 3.9861e-05, 'epoch': 2.97} |
|
|
|
05/30/2024 02:09:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5723, 'learning_rate': 3.9802e-05, 'epoch': 2.98} |
|
|
|
05/30/2024 02:11:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5510, 'learning_rate': 3.9742e-05, 'epoch': 2.99} |
|
|
|
05/30/2024 02:13:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5302, 'learning_rate': 3.9683e-05, 'epoch': 3.00} |
|
|
|
05/30/2024 02:13:02 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1600 |
|
|
|
05/30/2024 02:13:02 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1600/tokenizer_config.json |
|
|
|
05/30/2024 02:13:02 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1600/special_tokens_map.json |
|
|
|
05/30/2024 02:14:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5514, 'learning_rate': 3.9623e-05, 'epoch': 3.01} |
|
|
|
05/30/2024 02:16:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5004, 'learning_rate': 3.9563e-05, 'epoch': 3.02} |
|
|
|
05/30/2024 02:18:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5292, 'learning_rate': 3.9503e-05, 'epoch': 3.03} |
|
|
|
05/30/2024 02:20:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5275, 'learning_rate': 3.9443e-05, 'epoch': 3.04} |
|
|
|
05/30/2024 02:22:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5223, 'learning_rate': 3.9383e-05, 'epoch': 3.05} |
|
|
|
05/30/2024 02:24:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4827, 'learning_rate': 3.9323e-05, 'epoch': 3.06} |
|
|
|
05/30/2024 02:25:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4863, 'learning_rate': 3.9262e-05, 'epoch': 3.07} |
|
|
|
05/30/2024 02:27:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5363, 'learning_rate': 3.9202e-05, 'epoch': 3.08} |
|
|
|
05/30/2024 02:29:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4711, 'learning_rate': 3.9141e-05, 'epoch': 3.08} |
|
|
|
05/30/2024 02:31:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5237, 'learning_rate': 3.9080e-05, 'epoch': 3.09} |
|
|
|
05/30/2024 02:33:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5261, 'learning_rate': 3.9019e-05, 'epoch': 3.10} |
|
|
|
05/30/2024 02:34:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4975, 'learning_rate': 3.8958e-05, 'epoch': 3.11} |
|
|
|
05/30/2024 02:36:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5053, 'learning_rate': 3.8897e-05, 'epoch': 3.12} |
|
|
|
05/30/2024 02:38:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4911, 'learning_rate': 3.8836e-05, 'epoch': 3.13} |
|
|
|
05/30/2024 02:40:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4567, 'learning_rate': 3.8774e-05, 'epoch': 3.14} |
|
|
|
05/30/2024 02:42:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5374, 'learning_rate': 3.8713e-05, 'epoch': 3.15} |
|
|
|
05/30/2024 02:43:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4974, 'learning_rate': 3.8651e-05, 'epoch': 3.16} |
|
|
|
05/30/2024 02:45:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5184, 'learning_rate': 3.8589e-05, 'epoch': 3.17} |
|
|
|
05/30/2024 02:47:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4811, 'learning_rate': 3.8527e-05, 'epoch': 3.18} |
|
|
|
05/30/2024 02:49:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5137, 'learning_rate': 3.8465e-05, 'epoch': 3.19} |
|
|
|
05/30/2024 02:49:25 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1700 |
|
|
|
05/30/2024 02:49:25 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1700/tokenizer_config.json |
|
|
|
05/30/2024 02:49:25 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1700/special_tokens_map.json |
|
|
|
05/30/2024 02:51:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4630, 'learning_rate': 3.8403e-05, 'epoch': 3.20} |
|
|
|
05/30/2024 02:52:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5014, 'learning_rate': 3.8341e-05, 'epoch': 3.21} |
|
|
|
05/30/2024 02:54:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5360, 'learning_rate': 3.8279e-05, 'epoch': 3.22} |
|
|
|
05/30/2024 02:56:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5505, 'learning_rate': 3.8216e-05, 'epoch': 3.23} |
|
|
|
05/30/2024 02:58:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5114, 'learning_rate': 3.8153e-05, 'epoch': 3.23} |
|
|
|
05/30/2024 03:00:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4733, 'learning_rate': 3.8091e-05, 'epoch': 3.24} |
|
|
|
05/30/2024 03:02:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4584, 'learning_rate': 3.8028e-05, 'epoch': 3.25} |
|
|
|
05/30/2024 03:04:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5423, 'learning_rate': 3.7965e-05, 'epoch': 3.26} |
|
|
|
05/30/2024 03:05:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5127, 'learning_rate': 3.7902e-05, 'epoch': 3.27} |
|
|
|
05/30/2024 03:07:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4913, 'learning_rate': 3.7839e-05, 'epoch': 3.28} |
|
|
|
05/30/2024 03:09:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5296, 'learning_rate': 3.7775e-05, 'epoch': 3.29} |
|
|
|
05/30/2024 03:11:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4661, 'learning_rate': 3.7712e-05, 'epoch': 3.30} |
|
|
|
05/30/2024 03:13:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4864, 'learning_rate': 3.7649e-05, 'epoch': 3.31} |
|
|
|
05/30/2024 03:14:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5037, 'learning_rate': 3.7585e-05, 'epoch': 3.32} |
|
|
|
05/30/2024 03:16:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5081, 'learning_rate': 3.7521e-05, 'epoch': 3.33} |
|
|
|
05/30/2024 03:18:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5379, 'learning_rate': 3.7457e-05, 'epoch': 3.34} |
|
|
|
05/30/2024 03:20:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4975, 'learning_rate': 3.7394e-05, 'epoch': 3.35} |
|
|
|
05/30/2024 03:22:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5219, 'learning_rate': 3.7329e-05, 'epoch': 3.36} |
|
|
|
05/30/2024 03:24:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5423, 'learning_rate': 3.7265e-05, 'epoch': 3.37} |
|
|
|
05/30/2024 03:25:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5189, 'learning_rate': 3.7201e-05, 'epoch': 3.38} |
|
|
|
05/30/2024 03:25:50 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1800 |
|
|
|
05/30/2024 03:25:50 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1800/tokenizer_config.json |
|
|
|
05/30/2024 03:25:50 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1800/special_tokens_map.json |
|
|
|
05/30/2024 03:27:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5173, 'learning_rate': 3.7137e-05, 'epoch': 3.38} |
|
|
|
05/30/2024 03:29:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5167, 'learning_rate': 3.7072e-05, 'epoch': 3.39} |
|
|
|
05/30/2024 03:31:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5309, 'learning_rate': 3.7008e-05, 'epoch': 3.40} |
|
|
|
05/30/2024 03:33:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5469, 'learning_rate': 3.6943e-05, 'epoch': 3.41} |
|
|
|
05/30/2024 03:34:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4952, 'learning_rate': 3.6878e-05, 'epoch': 3.42} |
|
|
|
05/30/2024 03:36:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4927, 'learning_rate': 3.6813e-05, 'epoch': 3.43} |
|
|
|
05/30/2024 03:38:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5248, 'learning_rate': 3.6748e-05, 'epoch': 3.44} |
|
|
|
05/30/2024 03:40:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4653, 'learning_rate': 3.6683e-05, 'epoch': 3.45} |
|
|
|
05/30/2024 03:42:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4843, 'learning_rate': 3.6618e-05, 'epoch': 3.46} |
|
|
|
05/30/2024 03:44:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5148, 'learning_rate': 3.6553e-05, 'epoch': 3.47} |
|
|
|
05/30/2024 03:46:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4963, 'learning_rate': 3.6487e-05, 'epoch': 3.48} |
|
|
|
05/30/2024 03:47:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4976, 'learning_rate': 3.6422e-05, 'epoch': 3.49} |
|
|
|
05/30/2024 03:49:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4649, 'learning_rate': 3.6356e-05, 'epoch': 3.50} |
|
|
|
05/30/2024 03:51:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5120, 'learning_rate': 3.6291e-05, 'epoch': 3.51} |
|
|
|
05/30/2024 03:53:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5213, 'learning_rate': 3.6225e-05, 'epoch': 3.52} |
|
|
|
05/30/2024 03:55:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4847, 'learning_rate': 3.6159e-05, 'epoch': 3.53} |
|
|
|
05/30/2024 03:56:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5436, 'learning_rate': 3.6093e-05, 'epoch': 3.53} |
|
|
|
05/30/2024 03:58:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5019, 'learning_rate': 3.6027e-05, 'epoch': 3.54} |
|
|
|
05/30/2024 04:00:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5178, 'learning_rate': 3.5961e-05, 'epoch': 3.55} |
|
|
|
05/30/2024 04:02:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5126, 'learning_rate': 3.5894e-05, 'epoch': 3.56} |
|
|
|
05/30/2024 04:02:24 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1900 |
|
|
|
05/30/2024 04:02:24 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1900/tokenizer_config.json |
|
|
|
05/30/2024 04:02:24 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-1900/special_tokens_map.json |
|
|
|
05/30/2024 04:04:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5216, 'learning_rate': 3.5828e-05, 'epoch': 3.57} |
|
|
|
05/30/2024 04:06:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5305, 'learning_rate': 3.5762e-05, 'epoch': 3.58} |
|
|
|
05/30/2024 04:07:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5053, 'learning_rate': 3.5695e-05, 'epoch': 3.59} |
|
|
|
05/30/2024 04:09:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5080, 'learning_rate': 3.5628e-05, 'epoch': 3.60} |
|
|
|
05/30/2024 04:11:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4804, 'learning_rate': 3.5562e-05, 'epoch': 3.61} |
|
|
|
05/30/2024 04:13:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5413, 'learning_rate': 3.5495e-05, 'epoch': 3.62} |
|
|
|
05/30/2024 04:15:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5451, 'learning_rate': 3.5428e-05, 'epoch': 3.63} |
|
|
|
05/30/2024 04:17:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5362, 'learning_rate': 3.5361e-05, 'epoch': 3.64} |
|
|
|
05/30/2024 04:19:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5377, 'learning_rate': 3.5294e-05, 'epoch': 3.65} |
|
|
|
05/30/2024 04:20:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5314, 'learning_rate': 3.5227e-05, 'epoch': 3.66} |
|
|
|
05/30/2024 04:22:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5168, 'learning_rate': 3.5159e-05, 'epoch': 3.67} |
|
|
|
05/30/2024 04:24:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5270, 'learning_rate': 3.5092e-05, 'epoch': 3.68} |
|
|
|
05/30/2024 04:26:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4951, 'learning_rate': 3.5025e-05, 'epoch': 3.68} |
|
|
|
05/30/2024 04:28:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5925, 'learning_rate': 3.4957e-05, 'epoch': 3.69} |
|
|
|
05/30/2024 04:30:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5096, 'learning_rate': 3.4889e-05, 'epoch': 3.70} |
|
|
|
05/30/2024 04:31:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.5127, 'learning_rate': 3.4822e-05, 'epoch': 3.71} |
|
|
|
05/30/2024 04:33:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5134, 'learning_rate': 3.4754e-05, 'epoch': 3.72} |
|
|
|
05/30/2024 04:35:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5144, 'learning_rate': 3.4686e-05, 'epoch': 3.73} |
|
|
|
05/30/2024 04:37:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5211, 'learning_rate': 3.4618e-05, 'epoch': 3.74} |
|
|
|
05/30/2024 04:39:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5263, 'learning_rate': 3.4550e-05, 'epoch': 3.75} |
|
|
|
05/30/2024 04:39:44 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2000 |
|
|
|
05/30/2024 04:39:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2000/tokenizer_config.json |
|
|
|
05/30/2024 04:39:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2000/special_tokens_map.json |
|
|
|
05/30/2024 04:41:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5419, 'learning_rate': 3.4482e-05, 'epoch': 3.76} |
|
|
|
05/30/2024 04:43:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4625, 'learning_rate': 3.4414e-05, 'epoch': 3.77} |
|
|
|
05/30/2024 04:45:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4627, 'learning_rate': 3.4345e-05, 'epoch': 3.78} |
|
|
|
05/30/2024 04:47:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5012, 'learning_rate': 3.4277e-05, 'epoch': 3.79} |
|
|
|
05/30/2024 04:49:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5192, 'learning_rate': 3.4209e-05, 'epoch': 3.80} |
|
|
|
05/30/2024 04:50:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5196, 'learning_rate': 3.4140e-05, 'epoch': 3.81} |
|
|
|
05/30/2024 04:52:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5301, 'learning_rate': 3.4071e-05, 'epoch': 3.82} |
|
|
|
05/30/2024 04:54:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4940, 'learning_rate': 3.4003e-05, 'epoch': 3.83} |
|
|
|
05/30/2024 04:56:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5295, 'learning_rate': 3.3934e-05, 'epoch': 3.83} |
|
|
|
05/30/2024 04:58:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4740, 'learning_rate': 3.3865e-05, 'epoch': 3.84} |
|
|
|
05/30/2024 04:59:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5360, 'learning_rate': 3.3796e-05, 'epoch': 3.85} |
|
|
|
05/30/2024 05:01:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5191, 'learning_rate': 3.3727e-05, 'epoch': 3.86} |
|
|
|
05/30/2024 05:03:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4945, 'learning_rate': 3.3658e-05, 'epoch': 3.87} |
|
|
|
05/30/2024 05:05:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5127, 'learning_rate': 3.3589e-05, 'epoch': 3.88} |
|
|
|
05/30/2024 05:07:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5580, 'learning_rate': 3.3520e-05, 'epoch': 3.89} |
|
|
|
05/30/2024 05:09:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5032, 'learning_rate': 3.3450e-05, 'epoch': 3.90} |
|
|
|
05/30/2024 05:10:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5007, 'learning_rate': 3.3381e-05, 'epoch': 3.91} |
|
|
|
05/30/2024 05:12:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4874, 'learning_rate': 3.3312e-05, 'epoch': 3.92} |
|
|
|
05/30/2024 05:14:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5300, 'learning_rate': 3.3242e-05, 'epoch': 3.93} |
|
|
|
05/30/2024 05:16:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5267, 'learning_rate': 3.3172e-05, 'epoch': 3.94} |
|
|
|
05/30/2024 05:16:13 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2100 |
|
|
|
05/30/2024 05:16:13 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2100/tokenizer_config.json |
|
|
|
05/30/2024 05:16:13 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2100/special_tokens_map.json |
|
|
|
05/30/2024 05:18:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4980, 'learning_rate': 3.3103e-05, 'epoch': 3.95} |
|
|
|
05/30/2024 05:19:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5103, 'learning_rate': 3.3033e-05, 'epoch': 3.96} |
|
|
|
05/30/2024 05:21:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4943, 'learning_rate': 3.2963e-05, 'epoch': 3.97} |
|
|
|
05/30/2024 05:23:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5308, 'learning_rate': 3.2893e-05, 'epoch': 3.98} |
|
|
|
05/30/2024 05:25:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5103, 'learning_rate': 3.2823e-05, 'epoch': 3.98} |
|
|
|
05/30/2024 05:27:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5076, 'learning_rate': 3.2753e-05, 'epoch': 3.99} |
|
|
|
05/30/2024 05:29:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4976, 'learning_rate': 3.2683e-05, 'epoch': 4.00} |
|
|
|
05/30/2024 05:30:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5194, 'learning_rate': 3.2613e-05, 'epoch': 4.01} |
|
|
|
05/30/2024 05:32:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5000, 'learning_rate': 3.2543e-05, 'epoch': 4.02} |
|
|
|
05/30/2024 05:34:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5130, 'learning_rate': 3.2473e-05, 'epoch': 4.03} |
|
|
|
05/30/2024 05:36:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4840, 'learning_rate': 3.2402e-05, 'epoch': 4.04} |
|
|
|
05/30/2024 05:38:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4915, 'learning_rate': 3.2332e-05, 'epoch': 4.05} |
|
|
|
05/30/2024 05:39:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5173, 'learning_rate': 3.2262e-05, 'epoch': 4.06} |
|
|
|
05/30/2024 05:41:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5144, 'learning_rate': 3.2191e-05, 'epoch': 4.07} |
|
|
|
05/30/2024 05:43:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4967, 'learning_rate': 3.2120e-05, 'epoch': 4.08} |
|
|
|
05/30/2024 05:45:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4679, 'learning_rate': 3.2050e-05, 'epoch': 4.09} |
|
|
|
05/30/2024 05:47:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4765, 'learning_rate': 3.1979e-05, 'epoch': 4.10} |
|
|
|
05/30/2024 05:48:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5320, 'learning_rate': 3.1908e-05, 'epoch': 4.11} |
|
|
|
05/30/2024 05:50:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4973, 'learning_rate': 3.1837e-05, 'epoch': 4.12} |
|
|
|
05/30/2024 05:52:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4680, 'learning_rate': 3.1767e-05, 'epoch': 4.13} |
|
|
|
05/30/2024 05:52:40 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2200 |
|
|
|
05/30/2024 05:52:40 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2200/tokenizer_config.json |
|
|
|
05/30/2024 05:52:40 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2200/special_tokens_map.json |
|
|
|
05/30/2024 05:54:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4801, 'learning_rate': 3.1696e-05, 'epoch': 4.14} |
|
|
|
05/30/2024 05:56:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4751, 'learning_rate': 3.1625e-05, 'epoch': 4.14} |
|
|
|
05/30/2024 05:58:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4857, 'learning_rate': 3.1553e-05, 'epoch': 4.15} |
|
|
|
05/30/2024 05:59:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4850, 'learning_rate': 3.1482e-05, 'epoch': 4.16} |
|
|
|
05/30/2024 06:01:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4817, 'learning_rate': 3.1411e-05, 'epoch': 4.17} |
|
|
|
05/30/2024 06:03:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.5237, 'learning_rate': 3.1340e-05, 'epoch': 4.18} |
|
|
|
05/30/2024 06:05:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5139, 'learning_rate': 3.1269e-05, 'epoch': 4.19} |
|
|
|
05/30/2024 06:07:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5199, 'learning_rate': 3.1197e-05, 'epoch': 4.20} |
|
|
|
05/30/2024 06:08:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4637, 'learning_rate': 3.1126e-05, 'epoch': 4.21} |
|
|
|
05/30/2024 06:10:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5248, 'learning_rate': 3.1054e-05, 'epoch': 4.22} |
|
|
|
05/30/2024 06:12:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4885, 'learning_rate': 3.0983e-05, 'epoch': 4.23} |
|
|
|
05/30/2024 06:14:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4893, 'learning_rate': 3.0911e-05, 'epoch': 4.24} |
|
|
|
05/30/2024 06:16:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4819, 'learning_rate': 3.0840e-05, 'epoch': 4.25} |
|
|
|
05/30/2024 06:18:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5539, 'learning_rate': 3.0768e-05, 'epoch': 4.26} |
|
|
|
05/30/2024 06:19:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4927, 'learning_rate': 3.0696e-05, 'epoch': 4.27} |
|
|
|
05/30/2024 06:21:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4914, 'learning_rate': 3.0625e-05, 'epoch': 4.28} |
|
|
|
05/30/2024 06:23:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5062, 'learning_rate': 3.0553e-05, 'epoch': 4.29} |
|
|
|
05/30/2024 06:25:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5349, 'learning_rate': 3.0481e-05, 'epoch': 4.29} |
|
|
|
05/30/2024 06:27:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4706, 'learning_rate': 3.0409e-05, 'epoch': 4.30} |
|
|
|
05/30/2024 06:29:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4896, 'learning_rate': 3.0337e-05, 'epoch': 4.31} |
|
|
|
05/30/2024 06:29:06 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2300 |
|
|
|
05/30/2024 06:29:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2300/tokenizer_config.json |
|
|
|
05/30/2024 06:29:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2300/special_tokens_map.json |
|
|
|
05/30/2024 06:30:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4542, 'learning_rate': 3.0265e-05, 'epoch': 4.32} |
|
|
|
05/30/2024 06:32:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4890, 'learning_rate': 3.0193e-05, 'epoch': 4.33} |
|
|
|
05/30/2024 06:34:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5012, 'learning_rate': 3.0121e-05, 'epoch': 4.34} |
|
|
|
05/30/2024 06:36:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4673, 'learning_rate': 3.0049e-05, 'epoch': 4.35} |
|
|
|
05/30/2024 06:38:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4686, 'learning_rate': 2.9977e-05, 'epoch': 4.36} |
|
|
|
05/30/2024 06:40:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4863, 'learning_rate': 2.9904e-05, 'epoch': 4.37} |
|
|
|
05/30/2024 06:41:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4852, 'learning_rate': 2.9832e-05, 'epoch': 4.38} |
|
|
|
05/30/2024 06:43:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4491, 'learning_rate': 2.9760e-05, 'epoch': 4.39} |
|
|
|
05/30/2024 06:45:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4961, 'learning_rate': 2.9687e-05, 'epoch': 4.40} |
|
|
|
05/30/2024 06:47:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4803, 'learning_rate': 2.9615e-05, 'epoch': 4.41} |
|
|
|
05/30/2024 06:49:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5293, 'learning_rate': 2.9543e-05, 'epoch': 4.42} |
|
|
|
05/30/2024 06:51:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4822, 'learning_rate': 2.9470e-05, 'epoch': 4.43} |
|
|
|
05/30/2024 06:52:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5287, 'learning_rate': 2.9398e-05, 'epoch': 4.44} |
|
|
|
05/30/2024 06:54:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4607, 'learning_rate': 2.9325e-05, 'epoch': 4.44} |
|
|
|
05/30/2024 06:56:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4919, 'learning_rate': 2.9252e-05, 'epoch': 4.45} |
|
|
|
05/30/2024 06:58:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4835, 'learning_rate': 2.9180e-05, 'epoch': 4.46} |
|
|
|
05/30/2024 07:00:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4930, 'learning_rate': 2.9107e-05, 'epoch': 4.47} |
|
|
|
05/30/2024 07:01:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4829, 'learning_rate': 2.9035e-05, 'epoch': 4.48} |
|
|
|
05/30/2024 07:03:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5174, 'learning_rate': 2.8962e-05, 'epoch': 4.49} |
|
|
|
05/30/2024 07:05:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4787, 'learning_rate': 2.8889e-05, 'epoch': 4.50} |
|
|
|
05/30/2024 07:05:45 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2400 |
|
|
|
05/30/2024 07:05:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2400/tokenizer_config.json |
|
|
|
05/30/2024 07:05:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2400/special_tokens_map.json |
|
|
|
05/30/2024 07:07:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4817, 'learning_rate': 2.8816e-05, 'epoch': 4.51} |
|
|
|
05/30/2024 07:09:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5553, 'learning_rate': 2.8743e-05, 'epoch': 4.52} |
|
|
|
05/30/2024 07:11:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4798, 'learning_rate': 2.8671e-05, 'epoch': 4.53} |
|
|
|
05/30/2024 07:12:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4799, 'learning_rate': 2.8598e-05, 'epoch': 4.54} |
|
|
|
05/30/2024 07:14:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5110, 'learning_rate': 2.8525e-05, 'epoch': 4.55} |
|
|
|
05/30/2024 07:16:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5036, 'learning_rate': 2.8452e-05, 'epoch': 4.56} |
|
|
|
05/30/2024 07:18:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5445, 'learning_rate': 2.8379e-05, 'epoch': 4.57} |
|
|
|
05/30/2024 07:20:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4820, 'learning_rate': 2.8306e-05, 'epoch': 4.58} |
|
|
|
05/30/2024 07:21:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5115, 'learning_rate': 2.8233e-05, 'epoch': 4.59} |
|
|
|
05/30/2024 07:23:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5376, 'learning_rate': 2.8160e-05, 'epoch': 4.59} |
|
|
|
05/30/2024 07:25:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.4949, 'learning_rate': 2.8087e-05, 'epoch': 4.60} |
|
|
|
05/30/2024 07:27:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4716, 'learning_rate': 2.8013e-05, 'epoch': 4.61} |
|
|
|
05/30/2024 07:29:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4845, 'learning_rate': 2.7940e-05, 'epoch': 4.62} |
|
|
|
05/30/2024 07:31:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5383, 'learning_rate': 2.7867e-05, 'epoch': 4.63} |
|
|
|
05/30/2024 07:33:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4851, 'learning_rate': 2.7794e-05, 'epoch': 4.64} |
|
|
|
05/30/2024 07:34:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4903, 'learning_rate': 2.7721e-05, 'epoch': 4.65} |
|
|
|
05/30/2024 07:36:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5113, 'learning_rate': 2.7647e-05, 'epoch': 4.66} |
|
|
|
05/30/2024 07:38:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5093, 'learning_rate': 2.7574e-05, 'epoch': 4.67} |
|
|
|
05/30/2024 07:40:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4943, 'learning_rate': 2.7501e-05, 'epoch': 4.68} |
|
|
|
05/30/2024 07:42:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5064, 'learning_rate': 2.7428e-05, 'epoch': 4.69} |
|
|
|
05/30/2024 07:42:17 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2500 |
|
|
|
05/30/2024 07:42:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2500/tokenizer_config.json |
|
|
|
05/30/2024 07:42:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2500/special_tokens_map.json |
|
|
|
05/30/2024 07:44:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4965, 'learning_rate': 2.7354e-05, 'epoch': 4.70} |
|
|
|
05/30/2024 07:45:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5535, 'learning_rate': 2.7281e-05, 'epoch': 4.71} |
|
|
|
05/30/2024 07:47:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4779, 'learning_rate': 2.7207e-05, 'epoch': 4.72} |
|
|
|
05/30/2024 07:49:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5130, 'learning_rate': 2.7134e-05, 'epoch': 4.73} |
|
|
|
05/30/2024 07:51:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4993, 'learning_rate': 2.7061e-05, 'epoch': 4.74} |
|
|
|
05/30/2024 07:53:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4977, 'learning_rate': 2.6987e-05, 'epoch': 4.74} |
|
|
|
05/30/2024 07:55:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4844, 'learning_rate': 2.6914e-05, 'epoch': 4.75} |
|
|
|
05/30/2024 07:57:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4829, 'learning_rate': 2.6840e-05, 'epoch': 4.76} |
|
|
|
05/30/2024 07:58:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5146, 'learning_rate': 2.6767e-05, 'epoch': 4.77} |
|
|
|
05/30/2024 08:00:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4873, 'learning_rate': 2.6693e-05, 'epoch': 4.78} |
|
|
|
05/30/2024 08:02:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5248, 'learning_rate': 2.6620e-05, 'epoch': 4.79} |
|
|
|
05/30/2024 08:04:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5243, 'learning_rate': 2.6546e-05, 'epoch': 4.80} |
|
|
|
05/30/2024 08:06:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4940, 'learning_rate': 2.6473e-05, 'epoch': 4.81} |
|
|
|
05/30/2024 08:08:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4730, 'learning_rate': 2.6399e-05, 'epoch': 4.82} |
|
|
|
05/30/2024 08:10:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5097, 'learning_rate': 2.6326e-05, 'epoch': 4.83} |
|
|
|
05/30/2024 08:11:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5153, 'learning_rate': 2.6252e-05, 'epoch': 4.84} |
|
|
|
05/30/2024 08:13:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4543, 'learning_rate': 2.6178e-05, 'epoch': 4.85} |
|
|
|
05/30/2024 08:15:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4527, 'learning_rate': 2.6105e-05, 'epoch': 4.86} |
|
|
|
05/30/2024 08:17:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4778, 'learning_rate': 2.6031e-05, 'epoch': 4.87} |
|
|
|
05/30/2024 08:19:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5012, 'learning_rate': 2.5958e-05, 'epoch': 4.88} |
|
|
|
05/30/2024 08:19:03 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2600 |
|
|
|
05/30/2024 08:19:03 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2600/tokenizer_config.json |
|
|
|
05/30/2024 08:19:03 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2600/special_tokens_map.json |
|
|
|
05/30/2024 08:20:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5295, 'learning_rate': 2.5884e-05, 'epoch': 4.89} |
|
|
|
05/30/2024 08:22:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4894, 'learning_rate': 2.5810e-05, 'epoch': 4.89} |
|
|
|
05/30/2024 08:24:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4943, 'learning_rate': 2.5737e-05, 'epoch': 4.90} |
|
|
|
05/30/2024 08:26:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4552, 'learning_rate': 2.5663e-05, 'epoch': 4.91} |
|
|
|
05/30/2024 08:28:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4625, 'learning_rate': 2.5589e-05, 'epoch': 4.92} |
|
|
|
05/30/2024 08:30:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5020, 'learning_rate': 2.5516e-05, 'epoch': 4.93} |
|
|
|
05/30/2024 08:31:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5138, 'learning_rate': 2.5442e-05, 'epoch': 4.94} |
|
|
|
05/30/2024 08:33:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4996, 'learning_rate': 2.5368e-05, 'epoch': 4.95} |
|
|
|
05/30/2024 08:35:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4681, 'learning_rate': 2.5295e-05, 'epoch': 4.96} |
|
|
|
05/30/2024 08:37:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4663, 'learning_rate': 2.5221e-05, 'epoch': 4.97} |
|
|
|
05/30/2024 08:39:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5197, 'learning_rate': 2.5147e-05, 'epoch': 4.98} |
|
|
|
05/30/2024 08:41:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4955, 'learning_rate': 2.5074e-05, 'epoch': 4.99} |
|
|
|
05/30/2024 08:43:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5063, 'learning_rate': 2.5000e-05, 'epoch': 5.00} |
|
|
|
05/30/2024 08:44:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4801, 'learning_rate': 2.4926e-05, 'epoch': 5.01} |
|
|
|
05/30/2024 08:46:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4690, 'learning_rate': 2.4853e-05, 'epoch': 5.02} |
|
|
|
05/30/2024 08:48:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4697, 'learning_rate': 2.4779e-05, 'epoch': 5.03} |
|
|
|
05/30/2024 08:50:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4674, 'learning_rate': 2.4705e-05, 'epoch': 5.04} |
|
|
|
05/30/2024 08:52:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4627, 'learning_rate': 2.4632e-05, 'epoch': 5.04} |
|
|
|
05/30/2024 08:53:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4592, 'learning_rate': 2.4558e-05, 'epoch': 5.05} |
|
|
|
05/30/2024 08:55:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4903, 'learning_rate': 2.4484e-05, 'epoch': 5.06} |
|
|
|
05/30/2024 08:55:41 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2700 |
|
|
|
05/30/2024 08:55:41 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2700/tokenizer_config.json |
|
|
|
05/30/2024 08:55:41 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2700/special_tokens_map.json |
|
|
|
05/30/2024 08:57:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5137, 'learning_rate': 2.4411e-05, 'epoch': 5.07} |
|
|
|
05/30/2024 08:59:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4808, 'learning_rate': 2.4337e-05, 'epoch': 5.08} |
|
|
|
05/30/2024 09:01:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4783, 'learning_rate': 2.4263e-05, 'epoch': 5.09} |
|
|
|
05/30/2024 09:02:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4733, 'learning_rate': 2.4190e-05, 'epoch': 5.10} |
|
|
|
05/30/2024 09:04:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4680, 'learning_rate': 2.4116e-05, 'epoch': 5.11} |
|
|
|
05/30/2024 09:06:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4530, 'learning_rate': 2.4042e-05, 'epoch': 5.12} |
|
|
|
05/30/2024 09:08:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4812, 'learning_rate': 2.3969e-05, 'epoch': 5.13} |
|
|
|
05/30/2024 09:10:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4931, 'learning_rate': 2.3895e-05, 'epoch': 5.14} |
|
|
|
05/30/2024 09:12:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5565, 'learning_rate': 2.3822e-05, 'epoch': 5.15} |
|
|
|
05/30/2024 09:14:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4738, 'learning_rate': 2.3748e-05, 'epoch': 5.16} |
|
|
|
05/30/2024 09:16:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4735, 'learning_rate': 2.3674e-05, 'epoch': 5.17} |
|
|
|
05/30/2024 09:17:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4980, 'learning_rate': 2.3601e-05, 'epoch': 5.18} |
|
|
|
05/30/2024 09:19:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4580, 'learning_rate': 2.3527e-05, 'epoch': 5.19} |
|
|
|
05/30/2024 09:21:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5213, 'learning_rate': 2.3454e-05, 'epoch': 5.19} |
|
|
|
05/30/2024 09:23:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5083, 'learning_rate': 2.3380e-05, 'epoch': 5.20} |
|
|
|
05/30/2024 09:25:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4538, 'learning_rate': 2.3307e-05, 'epoch': 5.21} |
|
|
|
05/30/2024 09:27:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4885, 'learning_rate': 2.3233e-05, 'epoch': 5.22} |
|
|
|
05/30/2024 09:28:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4810, 'learning_rate': 2.3160e-05, 'epoch': 5.23} |
|
|
|
05/30/2024 09:30:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4823, 'learning_rate': 2.3086e-05, 'epoch': 5.24} |
|
|
|
05/30/2024 09:32:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4658, 'learning_rate': 2.3013e-05, 'epoch': 5.25} |
|
|
|
05/30/2024 09:32:31 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2800 |
|
|
|
05/30/2024 09:32:31 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2800/tokenizer_config.json |
|
|
|
05/30/2024 09:32:31 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2800/special_tokens_map.json |
|
|
|
05/30/2024 09:34:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4903, 'learning_rate': 2.2939e-05, 'epoch': 5.26} |
|
|
|
05/30/2024 09:36:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4763, 'learning_rate': 2.2866e-05, 'epoch': 5.27} |
|
|
|
05/30/2024 09:38:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5250, 'learning_rate': 2.2793e-05, 'epoch': 5.28} |
|
|
|
05/30/2024 09:39:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4847, 'learning_rate': 2.2719e-05, 'epoch': 5.29} |
|
|
|
05/30/2024 09:41:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4677, 'learning_rate': 2.2646e-05, 'epoch': 5.30} |
|
|
|
05/30/2024 09:43:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4657, 'learning_rate': 2.2572e-05, 'epoch': 5.31} |
|
|
|
05/30/2024 09:45:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4795, 'learning_rate': 2.2499e-05, 'epoch': 5.32} |
|
|
|
05/30/2024 09:47:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4799, 'learning_rate': 2.2426e-05, 'epoch': 5.33} |
|
|
|
05/30/2024 09:48:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4756, 'learning_rate': 2.2353e-05, 'epoch': 5.34} |
|
|
|
05/30/2024 09:50:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4777, 'learning_rate': 2.2279e-05, 'epoch': 5.34} |
|
|
|
05/30/2024 09:52:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4928, 'learning_rate': 2.2206e-05, 'epoch': 5.35} |
|
|
|
05/30/2024 09:54:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5060, 'learning_rate': 2.2133e-05, 'epoch': 5.36} |
|
|
|
05/30/2024 09:56:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4698, 'learning_rate': 2.2060e-05, 'epoch': 5.37} |
|
|
|
05/30/2024 09:58:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4904, 'learning_rate': 2.1987e-05, 'epoch': 5.38} |
|
|
|
05/30/2024 09:59:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4970, 'learning_rate': 2.1913e-05, 'epoch': 5.39} |
|
|
|
05/30/2024 10:01:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4529, 'learning_rate': 2.1840e-05, 'epoch': 5.40} |
|
|
|
05/30/2024 10:03:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5213, 'learning_rate': 2.1767e-05, 'epoch': 5.41} |
|
|
|
05/30/2024 10:05:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5365, 'learning_rate': 2.1694e-05, 'epoch': 5.42} |
|
|
|
05/30/2024 10:07:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4519, 'learning_rate': 2.1621e-05, 'epoch': 5.43} |
|
|
|
05/30/2024 10:09:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4738, 'learning_rate': 2.1548e-05, 'epoch': 5.44} |
|
|
|
05/30/2024 10:09:11 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2900 |
|
|
|
05/30/2024 10:09:11 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2900/tokenizer_config.json |
|
|
|
05/30/2024 10:09:11 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-2900/special_tokens_map.json |
|
|
|
05/30/2024 10:10:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4530, 'learning_rate': 2.1475e-05, 'epoch': 5.45} |
|
|
|
05/30/2024 10:12:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4801, 'learning_rate': 2.1402e-05, 'epoch': 5.46} |
|
|
|
05/30/2024 10:14:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4819, 'learning_rate': 2.1329e-05, 'epoch': 5.47} |
|
|
|
05/30/2024 10:16:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4713, 'learning_rate': 2.1257e-05, 'epoch': 5.48} |
|
|
|
05/30/2024 10:18:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5022, 'learning_rate': 2.1184e-05, 'epoch': 5.49} |
|
|
|
05/30/2024 10:19:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4360, 'learning_rate': 2.1111e-05, 'epoch': 5.49} |
|
|
|
05/30/2024 10:21:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4927, 'learning_rate': 2.1038e-05, 'epoch': 5.50} |
|
|
|
05/30/2024 10:23:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4926, 'learning_rate': 2.0965e-05, 'epoch': 5.51} |
|
|
|
05/30/2024 10:25:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5321, 'learning_rate': 2.0893e-05, 'epoch': 5.52} |
|
|
|
05/30/2024 10:27:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4828, 'learning_rate': 2.0820e-05, 'epoch': 5.53} |
|
|
|
05/30/2024 10:29:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4816, 'learning_rate': 2.0748e-05, 'epoch': 5.54} |
|
|
|
05/30/2024 10:31:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4920, 'learning_rate': 2.0675e-05, 'epoch': 5.55} |
|
|
|
05/30/2024 10:33:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4834, 'learning_rate': 2.0602e-05, 'epoch': 5.56} |
|
|
|
05/30/2024 10:34:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4657, 'learning_rate': 2.0530e-05, 'epoch': 5.57} |
|
|
|
05/30/2024 10:36:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4920, 'learning_rate': 2.0457e-05, 'epoch': 5.58} |
|
|
|
05/30/2024 10:38:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5049, 'learning_rate': 2.0385e-05, 'epoch': 5.59} |
|
|
|
05/30/2024 10:40:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4933, 'learning_rate': 2.0313e-05, 'epoch': 5.60} |
|
|
|
05/30/2024 10:42:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5056, 'learning_rate': 2.0240e-05, 'epoch': 5.61} |
|
|
|
05/30/2024 10:44:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5095, 'learning_rate': 2.0168e-05, 'epoch': 5.62} |
|
|
|
05/30/2024 10:45:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4526, 'learning_rate': 2.0096e-05, 'epoch': 5.63} |
|
|
|
05/30/2024 10:45:48 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3000 |
|
|
|
05/30/2024 10:45:48 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3000/tokenizer_config.json |
|
|
|
05/30/2024 10:45:48 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3000/special_tokens_map.json |
|
|
|
05/30/2024 10:47:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5170, 'learning_rate': 2.0023e-05, 'epoch': 5.64} |
|
|
|
05/30/2024 10:49:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4845, 'learning_rate': 1.9951e-05, 'epoch': 5.64} |
|
|
|
05/30/2024 10:51:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4810, 'learning_rate': 1.9879e-05, 'epoch': 5.65} |
|
|
|
05/30/2024 10:53:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4779, 'learning_rate': 1.9807e-05, 'epoch': 5.66} |
|
|
|
05/30/2024 10:54:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4695, 'learning_rate': 1.9735e-05, 'epoch': 5.67} |
|
|
|
05/30/2024 10:56:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4825, 'learning_rate': 1.9663e-05, 'epoch': 5.68} |
|
|
|
05/30/2024 10:58:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4470, 'learning_rate': 1.9591e-05, 'epoch': 5.69} |
|
|
|
05/30/2024 11:00:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5317, 'learning_rate': 1.9519e-05, 'epoch': 5.70} |
|
|
|
05/30/2024 11:02:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4577, 'learning_rate': 1.9447e-05, 'epoch': 5.71} |
|
|
|
05/30/2024 11:04:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4658, 'learning_rate': 1.9375e-05, 'epoch': 5.72} |
|
|
|
05/30/2024 11:05:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4575, 'learning_rate': 1.9304e-05, 'epoch': 5.73} |
|
|
|
05/30/2024 11:07:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5074, 'learning_rate': 1.9232e-05, 'epoch': 5.74} |
|
|
|
05/30/2024 11:09:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5143, 'learning_rate': 1.9160e-05, 'epoch': 5.75} |
|
|
|
05/30/2024 11:11:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4909, 'learning_rate': 1.9089e-05, 'epoch': 5.76} |
|
|
|
05/30/2024 11:13:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4833, 'learning_rate': 1.9017e-05, 'epoch': 5.77} |
|
|
|
05/30/2024 11:15:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4889, 'learning_rate': 1.8946e-05, 'epoch': 5.78} |
|
|
|
05/30/2024 11:16:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4632, 'learning_rate': 1.8874e-05, 'epoch': 5.79} |
|
|
|
05/30/2024 11:18:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4631, 'learning_rate': 1.8803e-05, 'epoch': 5.79} |
|
|
|
05/30/2024 11:20:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4995, 'learning_rate': 1.8731e-05, 'epoch': 5.80} |
|
|
|
05/30/2024 11:22:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4892, 'learning_rate': 1.8660e-05, 'epoch': 5.81} |
|
|
|
05/30/2024 11:22:21 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3100 |
|
|
|
05/30/2024 11:22:21 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3100/tokenizer_config.json |
|
|
|
05/30/2024 11:22:21 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3100/special_tokens_map.json |
|
|
|
05/30/2024 11:24:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5213, 'learning_rate': 1.8589e-05, 'epoch': 5.82} |
|
|
|
05/30/2024 11:25:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4488, 'learning_rate': 1.8518e-05, 'epoch': 5.83} |
|
|
|
05/30/2024 11:27:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4850, 'learning_rate': 1.8447e-05, 'epoch': 5.84} |
|
|
|
05/30/2024 11:29:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4959, 'learning_rate': 1.8375e-05, 'epoch': 5.85} |
|
|
|
05/30/2024 11:31:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4824, 'learning_rate': 1.8304e-05, 'epoch': 5.86} |
|
|
|
05/30/2024 11:33:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4672, 'learning_rate': 1.8233e-05, 'epoch': 5.87} |
|
|
|
05/30/2024 11:35:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4611, 'learning_rate': 1.8163e-05, 'epoch': 5.88} |
|
|
|
05/30/2024 11:37:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4362, 'learning_rate': 1.8092e-05, 'epoch': 5.89} |
|
|
|
05/30/2024 11:39:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4675, 'learning_rate': 1.8021e-05, 'epoch': 5.90} |
|
|
|
05/30/2024 11:40:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4997, 'learning_rate': 1.7950e-05, 'epoch': 5.91} |
|
|
|
05/30/2024 11:42:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4983, 'learning_rate': 1.7880e-05, 'epoch': 5.92} |
|
|
|
05/30/2024 11:44:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4957, 'learning_rate': 1.7809e-05, 'epoch': 5.93} |
|
|
|
05/30/2024 11:46:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4618, 'learning_rate': 1.7738e-05, 'epoch': 5.94} |
|
|
|
05/30/2024 11:48:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5211, 'learning_rate': 1.7668e-05, 'epoch': 5.94} |
|
|
|
05/30/2024 11:50:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5053, 'learning_rate': 1.7598e-05, 'epoch': 5.95} |
|
|
|
05/30/2024 11:51:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4874, 'learning_rate': 1.7527e-05, 'epoch': 5.96} |
|
|
|
05/30/2024 11:53:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4907, 'learning_rate': 1.7457e-05, 'epoch': 5.97} |
|
|
|
05/30/2024 11:55:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5289, 'learning_rate': 1.7387e-05, 'epoch': 5.98} |
|
|
|
05/30/2024 11:57:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4907, 'learning_rate': 1.7317e-05, 'epoch': 5.99} |
|
|
|
05/30/2024 11:59:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4716, 'learning_rate': 1.7247e-05, 'epoch': 6.00} |
|
|
|
05/30/2024 11:59:17 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3200 |
|
|
|
05/30/2024 11:59:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3200/tokenizer_config.json |
|
|
|
05/30/2024 11:59:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3200/special_tokens_map.json |
|
|
|
05/30/2024 12:01:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4719, 'learning_rate': 1.7177e-05, 'epoch': 6.01} |
|
|
|
05/30/2024 12:03:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4488, 'learning_rate': 1.7107e-05, 'epoch': 6.02} |
|
|
|
05/30/2024 12:04:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4868, 'learning_rate': 1.7037e-05, 'epoch': 6.03} |
|
|
|
05/30/2024 12:07:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5243, 'learning_rate': 1.6967e-05, 'epoch': 6.04} |
|
|
|
05/30/2024 12:08:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4689, 'learning_rate': 1.6897e-05, 'epoch': 6.05} |
|
|
|
05/30/2024 12:10:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4991, 'learning_rate': 1.6828e-05, 'epoch': 6.06} |
|
|
|
05/30/2024 12:12:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4646, 'learning_rate': 1.6758e-05, 'epoch': 6.07} |
|
|
|
05/30/2024 12:14:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4256, 'learning_rate': 1.6688e-05, 'epoch': 6.08} |
|
|
|
05/30/2024 12:16:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4901, 'learning_rate': 1.6619e-05, 'epoch': 6.09} |
|
|
|
05/30/2024 12:18:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5018, 'learning_rate': 1.6550e-05, 'epoch': 6.09} |
|
|
|
05/30/2024 12:19:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4922, 'learning_rate': 1.6480e-05, 'epoch': 6.10} |
|
|
|
05/30/2024 12:21:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4572, 'learning_rate': 1.6411e-05, 'epoch': 6.11} |
|
|
|
05/30/2024 12:23:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5083, 'learning_rate': 1.6342e-05, 'epoch': 6.12} |
|
|
|
05/30/2024 12:25:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4781, 'learning_rate': 1.6273e-05, 'epoch': 6.13} |
|
|
|
05/30/2024 12:27:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4825, 'learning_rate': 1.6204e-05, 'epoch': 6.14} |
|
|
|
05/30/2024 12:28:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4693, 'learning_rate': 1.6135e-05, 'epoch': 6.15} |
|
|
|
05/30/2024 12:30:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4759, 'learning_rate': 1.6066e-05, 'epoch': 6.16} |
|
|
|
05/30/2024 12:32:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4976, 'learning_rate': 1.5997e-05, 'epoch': 6.17} |
|
|
|
05/30/2024 12:34:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4738, 'learning_rate': 1.5929e-05, 'epoch': 6.18} |
|
|
|
05/30/2024 12:36:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4884, 'learning_rate': 1.5860e-05, 'epoch': 6.19} |
|
|
|
05/30/2024 12:36:07 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3300 |
|
|
|
05/30/2024 12:36:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3300/tokenizer_config.json |
|
|
|
05/30/2024 12:36:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3300/special_tokens_map.json |
|
|
|
05/30/2024 12:38:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4881, 'learning_rate': 1.5791e-05, 'epoch': 6.20} |
|
|
|
05/30/2024 12:39:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4649, 'learning_rate': 1.5723e-05, 'epoch': 6.21} |
|
|
|
05/30/2024 12:41:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4996, 'learning_rate': 1.5655e-05, 'epoch': 6.22} |
|
|
|
05/30/2024 12:43:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4695, 'learning_rate': 1.5586e-05, 'epoch': 6.23} |
|
|
|
05/30/2024 12:45:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4603, 'learning_rate': 1.5518e-05, 'epoch': 6.24} |
|
|
|
05/30/2024 12:47:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4632, 'learning_rate': 1.5450e-05, 'epoch': 6.24} |
|
|
|
05/30/2024 12:48:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5189, 'learning_rate': 1.5382e-05, 'epoch': 6.25} |
|
|
|
05/30/2024 12:50:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4877, 'learning_rate': 1.5314e-05, 'epoch': 6.26} |
|
|
|
05/30/2024 12:52:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4778, 'learning_rate': 1.5246e-05, 'epoch': 6.27} |
|
|
|
05/30/2024 12:54:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4678, 'learning_rate': 1.5178e-05, 'epoch': 6.28} |
|
|
|
05/30/2024 12:56:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4589, 'learning_rate': 1.5111e-05, 'epoch': 6.29} |
|
|
|
05/30/2024 12:58:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4662, 'learning_rate': 1.5043e-05, 'epoch': 6.30} |
|
|
|
05/30/2024 12:59:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5021, 'learning_rate': 1.4975e-05, 'epoch': 6.31} |
|
|
|
05/30/2024 13:01:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4999, 'learning_rate': 1.4908e-05, 'epoch': 6.32} |
|
|
|
05/30/2024 13:03:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4635, 'learning_rate': 1.4841e-05, 'epoch': 6.33} |
|
|
|
05/30/2024 13:05:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4582, 'learning_rate': 1.4773e-05, 'epoch': 6.34} |
|
|
|
05/30/2024 13:07:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4462, 'learning_rate': 1.4706e-05, 'epoch': 6.35} |
|
|
|
05/30/2024 13:09:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4785, 'learning_rate': 1.4639e-05, 'epoch': 6.36} |
|
|
|
05/30/2024 13:11:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5108, 'learning_rate': 1.4572e-05, 'epoch': 6.37} |
|
|
|
05/30/2024 13:12:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4786, 'learning_rate': 1.4505e-05, 'epoch': 6.38} |
|
|
|
05/30/2024 13:12:52 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3400 |
|
|
|
05/30/2024 13:12:52 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3400/tokenizer_config.json |
|
|
|
05/30/2024 13:12:52 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3400/special_tokens_map.json |
|
|
|
05/30/2024 13:14:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4896, 'learning_rate': 1.4438e-05, 'epoch': 6.39} |
|
|
|
05/30/2024 13:16:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4571, 'learning_rate': 1.4372e-05, 'epoch': 6.39} |
|
|
|
05/30/2024 13:18:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4605, 'learning_rate': 1.4305e-05, 'epoch': 6.40} |
|
|
|
05/30/2024 13:20:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.4748, 'learning_rate': 1.4238e-05, 'epoch': 6.41} |
|
|
|
05/30/2024 13:22:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4698, 'learning_rate': 1.4172e-05, 'epoch': 6.42} |
|
|
|
05/30/2024 13:24:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4958, 'learning_rate': 1.4106e-05, 'epoch': 6.43} |
|
|
|
05/30/2024 13:25:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4650, 'learning_rate': 1.4039e-05, 'epoch': 6.44} |
|
|
|
05/30/2024 13:27:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4750, 'learning_rate': 1.3973e-05, 'epoch': 6.45} |
|
|
|
05/30/2024 13:29:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4765, 'learning_rate': 1.3907e-05, 'epoch': 6.46} |
|
|
|
05/30/2024 13:31:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4909, 'learning_rate': 1.3841e-05, 'epoch': 6.47} |
|
|
|
05/30/2024 13:33:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4789, 'learning_rate': 1.3775e-05, 'epoch': 6.48} |
|
|
|
05/30/2024 13:34:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4795, 'learning_rate': 1.3709e-05, 'epoch': 6.49} |
|
|
|
05/30/2024 13:36:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4390, 'learning_rate': 1.3644e-05, 'epoch': 6.50} |
|
|
|
05/30/2024 13:38:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4690, 'learning_rate': 1.3578e-05, 'epoch': 6.51} |
|
|
|
05/30/2024 13:40:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4596, 'learning_rate': 1.3513e-05, 'epoch': 6.52} |
|
|
|
05/30/2024 13:42:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4533, 'learning_rate': 1.3447e-05, 'epoch': 6.53} |
|
|
|
05/30/2024 13:44:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5055, 'learning_rate': 1.3382e-05, 'epoch': 6.54} |
|
|
|
05/30/2024 13:46:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4792, 'learning_rate': 1.3317e-05, 'epoch': 6.54} |
|
|
|
05/30/2024 13:47:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4855, 'learning_rate': 1.3252e-05, 'epoch': 6.55} |
|
|
|
05/30/2024 13:49:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4679, 'learning_rate': 1.3187e-05, 'epoch': 6.56} |
|
|
|
05/30/2024 13:49:44 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3500 |
|
|
|
05/30/2024 13:49:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3500/tokenizer_config.json |
|
|
|
05/30/2024 13:49:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3500/special_tokens_map.json |
|
|
|
05/30/2024 13:51:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4823, 'learning_rate': 1.3122e-05, 'epoch': 6.57} |
|
|
|
05/30/2024 13:53:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4600, 'learning_rate': 1.3057e-05, 'epoch': 6.58} |
|
|
|
05/30/2024 13:55:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4790, 'learning_rate': 1.2992e-05, 'epoch': 6.59} |
|
|
|
05/30/2024 13:57:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4779, 'learning_rate': 1.2928e-05, 'epoch': 6.60} |
|
|
|
05/30/2024 13:58:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4695, 'learning_rate': 1.2863e-05, 'epoch': 6.61} |
|
|
|
05/30/2024 14:00:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5207, 'learning_rate': 1.2799e-05, 'epoch': 6.62} |
|
|
|
05/30/2024 14:02:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4483, 'learning_rate': 1.2735e-05, 'epoch': 6.63} |
|
|
|
05/30/2024 14:04:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4859, 'learning_rate': 1.2671e-05, 'epoch': 6.64} |
|
|
|
05/30/2024 14:06:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4775, 'learning_rate': 1.2606e-05, 'epoch': 6.65} |
|
|
|
05/30/2024 14:07:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4590, 'learning_rate': 1.2543e-05, 'epoch': 6.66} |
|
|
|
05/30/2024 14:09:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4835, 'learning_rate': 1.2479e-05, 'epoch': 6.67} |
|
|
|
05/30/2024 14:11:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4854, 'learning_rate': 1.2415e-05, 'epoch': 6.68} |
|
|
|
05/30/2024 14:13:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5066, 'learning_rate': 1.2351e-05, 'epoch': 6.69} |
|
|
|
05/30/2024 14:15:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4483, 'learning_rate': 1.2288e-05, 'epoch': 6.69} |
|
|
|
05/30/2024 14:17:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4945, 'learning_rate': 1.2225e-05, 'epoch': 6.70} |
|
|
|
05/30/2024 14:19:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4709, 'learning_rate': 1.2161e-05, 'epoch': 6.71} |
|
|
|
05/30/2024 14:20:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4652, 'learning_rate': 1.2098e-05, 'epoch': 6.72} |
|
|
|
05/30/2024 14:22:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4783, 'learning_rate': 1.2035e-05, 'epoch': 6.73} |
|
|
|
05/30/2024 14:24:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4574, 'learning_rate': 1.1972e-05, 'epoch': 6.74} |
|
|
|
05/30/2024 14:26:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4643, 'learning_rate': 1.1909e-05, 'epoch': 6.75} |
|
|
|
05/30/2024 14:26:20 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3600 |
|
|
|
05/30/2024 14:26:21 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3600/tokenizer_config.json |
|
|
|
05/30/2024 14:26:21 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3600/special_tokens_map.json |
|
|
|
05/30/2024 14:28:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5495, 'learning_rate': 1.1847e-05, 'epoch': 6.76} |
|
|
|
05/30/2024 14:30:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4563, 'learning_rate': 1.1784e-05, 'epoch': 6.77} |
|
|
|
05/30/2024 14:32:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5021, 'learning_rate': 1.1721e-05, 'epoch': 6.78} |
|
|
|
05/30/2024 14:33:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4628, 'learning_rate': 1.1659e-05, 'epoch': 6.79} |
|
|
|
05/30/2024 14:35:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4432, 'learning_rate': 1.1597e-05, 'epoch': 6.80} |
|
|
|
05/30/2024 14:37:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5062, 'learning_rate': 1.1535e-05, 'epoch': 6.81} |
|
|
|
05/30/2024 14:39:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4682, 'learning_rate': 1.1473e-05, 'epoch': 6.82} |
|
|
|
05/30/2024 14:41:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4834, 'learning_rate': 1.1411e-05, 'epoch': 6.83} |
|
|
|
05/30/2024 14:42:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4335, 'learning_rate': 1.1349e-05, 'epoch': 6.84} |
|
|
|
05/30/2024 14:44:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4730, 'learning_rate': 1.1287e-05, 'epoch': 6.84} |
|
|
|
05/30/2024 14:46:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4659, 'learning_rate': 1.1226e-05, 'epoch': 6.85} |
|
|
|
05/30/2024 14:48:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4398, 'learning_rate': 1.1164e-05, 'epoch': 6.86} |
|
|
|
05/30/2024 14:49:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4731, 'learning_rate': 1.1103e-05, 'epoch': 6.87} |
|
|
|
05/30/2024 14:51:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4692, 'learning_rate': 1.1042e-05, 'epoch': 6.88} |
|
|
|
05/30/2024 14:53:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4649, 'learning_rate': 1.0981e-05, 'epoch': 6.89} |
|
|
|
05/30/2024 14:55:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4574, 'learning_rate': 1.0920e-05, 'epoch': 6.90} |
|
|
|
05/30/2024 14:57:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4478, 'learning_rate': 1.0859e-05, 'epoch': 6.91} |
|
|
|
05/30/2024 14:59:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5066, 'learning_rate': 1.0798e-05, 'epoch': 6.92} |
|
|
|
05/30/2024 15:00:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4928, 'learning_rate': 1.0738e-05, 'epoch': 6.93} |
|
|
|
05/30/2024 15:02:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4555, 'learning_rate': 1.0677e-05, 'epoch': 6.94} |
|
|
|
05/30/2024 15:02:40 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3700 |
|
|
|
05/30/2024 15:02:40 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3700/tokenizer_config.json |
|
|
|
05/30/2024 15:02:40 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3700/special_tokens_map.json |
|
|
|
05/30/2024 15:04:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4612, 'learning_rate': 1.0617e-05, 'epoch': 6.95} |
|
|
|
05/30/2024 15:06:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4623, 'learning_rate': 1.0557e-05, 'epoch': 6.96} |
|
|
|
05/30/2024 15:08:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4773, 'learning_rate': 1.0497e-05, 'epoch': 6.97} |
|
|
|
05/30/2024 15:09:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5141, 'learning_rate': 1.0437e-05, 'epoch': 6.98} |
|
|
|
05/30/2024 15:11:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4389, 'learning_rate': 1.0377e-05, 'epoch': 6.99} |
|
|
|
05/30/2024 15:13:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4674, 'learning_rate': 1.0317e-05, 'epoch': 6.99} |
|
|
|
05/30/2024 15:15:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4593, 'learning_rate': 1.0258e-05, 'epoch': 7.00} |
|
|
|
05/30/2024 15:17:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4549, 'learning_rate': 1.0198e-05, 'epoch': 7.01} |
|
|
|
05/30/2024 15:19:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4889, 'learning_rate': 1.0139e-05, 'epoch': 7.02} |
|
|
|
05/30/2024 15:21:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4611, 'learning_rate': 1.0080e-05, 'epoch': 7.03} |
|
|
|
05/30/2024 15:22:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4634, 'learning_rate': 1.0021e-05, 'epoch': 7.04} |
|
|
|
05/30/2024 15:24:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4778, 'learning_rate': 9.9618e-06, 'epoch': 7.05} |
|
|
|
05/30/2024 15:26:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4878, 'learning_rate': 9.9030e-06, 'epoch': 7.06} |
|
|
|
05/30/2024 15:28:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4548, 'learning_rate': 9.8444e-06, 'epoch': 7.07} |
|
|
|
05/30/2024 15:29:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4826, 'learning_rate': 9.7858e-06, 'epoch': 7.08} |
|
|
|
05/30/2024 15:31:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4825, 'learning_rate': 9.7274e-06, 'epoch': 7.09} |
|
|
|
05/30/2024 15:33:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5366, 'learning_rate': 9.6692e-06, 'epoch': 7.10} |
|
|
|
05/30/2024 15:35:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4573, 'learning_rate': 9.6110e-06, 'epoch': 7.11} |
|
|
|
05/30/2024 15:37:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4612, 'learning_rate': 9.5530e-06, 'epoch': 7.12} |
|
|
|
05/30/2024 15:39:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4467, 'learning_rate': 9.4952e-06, 'epoch': 7.13} |
|
|
|
05/30/2024 15:39:14 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3800 |
|
|
|
05/30/2024 15:39:14 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3800/tokenizer_config.json |
|
|
|
05/30/2024 15:39:14 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3800/special_tokens_map.json |
|
|
|
05/30/2024 15:41:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5063, 'learning_rate': 9.4375e-06, 'epoch': 7.14} |
|
|
|
05/30/2024 15:43:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4540, 'learning_rate': 9.3799e-06, 'epoch': 7.14} |
|
|
|
05/30/2024 15:44:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4322, 'learning_rate': 9.3224e-06, 'epoch': 7.15} |
|
|
|
05/30/2024 15:46:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4567, 'learning_rate': 9.2651e-06, 'epoch': 7.16} |
|
|
|
05/30/2024 15:48:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4753, 'learning_rate': 9.2079e-06, 'epoch': 7.17} |
|
|
|
05/30/2024 15:50:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4622, 'learning_rate': 9.1508e-06, 'epoch': 7.18} |
|
|
|
05/30/2024 15:52:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5124, 'learning_rate': 9.0939e-06, 'epoch': 7.19} |
|
|
|
05/30/2024 15:54:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5021, 'learning_rate': 9.0372e-06, 'epoch': 7.20} |
|
|
|
05/30/2024 15:55:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4429, 'learning_rate': 8.9805e-06, 'epoch': 7.21} |
|
|
|
05/30/2024 15:57:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4885, 'learning_rate': 8.9240e-06, 'epoch': 7.22} |
|
|
|
05/30/2024 15:59:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5183, 'learning_rate': 8.8677e-06, 'epoch': 7.23} |
|
|
|
05/30/2024 16:01:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4651, 'learning_rate': 8.8115e-06, 'epoch': 7.24} |
|
|
|
05/30/2024 16:03:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5209, 'learning_rate': 8.7554e-06, 'epoch': 7.25} |
|
|
|
05/30/2024 16:04:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4355, 'learning_rate': 8.6995e-06, 'epoch': 7.26} |
|
|
|
05/30/2024 16:06:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5222, 'learning_rate': 8.6437e-06, 'epoch': 7.27} |
|
|
|
05/30/2024 16:08:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4599, 'learning_rate': 8.5880e-06, 'epoch': 7.28} |
|
|
|
05/30/2024 16:10:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4725, 'learning_rate': 8.5325e-06, 'epoch': 7.29} |
|
|
|
05/30/2024 16:12:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4482, 'learning_rate': 8.4772e-06, 'epoch': 7.29} |
|
|
|
05/30/2024 16:14:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.4436, 'learning_rate': 8.4219e-06, 'epoch': 7.30} |
|
|
|
05/30/2024 16:15:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4540, 'learning_rate': 8.3669e-06, 'epoch': 7.31} |
|
|
|
05/30/2024 16:15:50 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3900 |
|
|
|
05/30/2024 16:15:50 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3900/tokenizer_config.json |
|
|
|
05/30/2024 16:15:50 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-3900/special_tokens_map.json |
|
|
|
05/30/2024 16:17:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4618, 'learning_rate': 8.3119e-06, 'epoch': 7.32} |
|
|
|
05/30/2024 16:19:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4821, 'learning_rate': 8.2571e-06, 'epoch': 7.33} |
|
|
|
05/30/2024 16:21:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4742, 'learning_rate': 8.2025e-06, 'epoch': 7.34} |
|
|
|
05/30/2024 16:23:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4353, 'learning_rate': 8.1480e-06, 'epoch': 7.35} |
|
|
|
05/30/2024 16:25:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4559, 'learning_rate': 8.0937e-06, 'epoch': 7.36} |
|
|
|
05/30/2024 16:27:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4618, 'learning_rate': 8.0395e-06, 'epoch': 7.37} |
|
|
|
05/30/2024 16:28:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5106, 'learning_rate': 7.9854e-06, 'epoch': 7.38} |
|
|
|
05/30/2024 16:30:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4424, 'learning_rate': 7.9315e-06, 'epoch': 7.39} |
|
|
|
05/30/2024 16:32:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4505, 'learning_rate': 7.8777e-06, 'epoch': 7.40} |
|
|
|
05/30/2024 16:34:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4930, 'learning_rate': 7.8241e-06, 'epoch': 7.41} |
|
|
|
05/30/2024 16:36:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4625, 'learning_rate': 7.7707e-06, 'epoch': 7.42} |
|
|
|
05/30/2024 16:37:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.4802, 'learning_rate': 7.7173e-06, 'epoch': 7.43} |
|
|
|
05/30/2024 16:39:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4531, 'learning_rate': 7.6642e-06, 'epoch': 7.44} |
|
|
|
05/30/2024 16:41:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4634, 'learning_rate': 7.6112e-06, 'epoch': 7.44} |
|
|
|
05/30/2024 16:43:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4735, 'learning_rate': 7.5583e-06, 'epoch': 7.45} |
|
|
|
05/30/2024 16:45:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4823, 'learning_rate': 7.5056e-06, 'epoch': 7.46} |
|
|
|
05/30/2024 16:46:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4431, 'learning_rate': 7.4531e-06, 'epoch': 7.47} |
|
|
|
05/30/2024 16:48:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4913, 'learning_rate': 7.4006e-06, 'epoch': 7.48} |
|
|
|
05/30/2024 16:50:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5012, 'learning_rate': 7.3484e-06, 'epoch': 7.49} |
|
|
|
05/30/2024 16:52:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4804, 'learning_rate': 7.2963e-06, 'epoch': 7.50} |
|
|
|
05/30/2024 16:52:36 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4000 |
|
|
|
05/30/2024 16:52:37 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4000/tokenizer_config.json |
|
|
|
05/30/2024 16:52:37 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4000/special_tokens_map.json |
|
|
|
05/30/2024 16:54:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4285, 'learning_rate': 7.2444e-06, 'epoch': 7.51} |
|
|
|
05/30/2024 16:56:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4710, 'learning_rate': 7.1926e-06, 'epoch': 7.52} |
|
|
|
05/30/2024 16:58:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4749, 'learning_rate': 7.1409e-06, 'epoch': 7.53} |
|
|
|
05/30/2024 17:00:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4834, 'learning_rate': 7.0895e-06, 'epoch': 7.54} |
|
|
|
05/30/2024 17:01:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4667, 'learning_rate': 7.0381e-06, 'epoch': 7.55} |
|
|
|
05/30/2024 17:03:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4688, 'learning_rate': 6.9870e-06, 'epoch': 7.56} |
|
|
|
05/30/2024 17:05:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4733, 'learning_rate': 6.9359e-06, 'epoch': 7.57} |
|
|
|
05/30/2024 17:07:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4715, 'learning_rate': 6.8851e-06, 'epoch': 7.58} |
|
|
|
05/30/2024 17:09:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4597, 'learning_rate': 6.8344e-06, 'epoch': 7.59} |
|
|
|
05/30/2024 17:10:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4745, 'learning_rate': 6.7839e-06, 'epoch': 7.59} |
|
|
|
05/30/2024 17:12:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4610, 'learning_rate': 6.7335e-06, 'epoch': 7.60} |
|
|
|
05/30/2024 17:14:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4434, 'learning_rate': 6.6833e-06, 'epoch': 7.61} |
|
|
|
05/30/2024 17:16:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4923, 'learning_rate': 6.6332e-06, 'epoch': 7.62} |
|
|
|
05/30/2024 17:18:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4516, 'learning_rate': 6.5833e-06, 'epoch': 7.63} |
|
|
|
05/30/2024 17:20:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4835, 'learning_rate': 6.5335e-06, 'epoch': 7.64} |
|
|
|
05/30/2024 17:21:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4544, 'learning_rate': 6.4840e-06, 'epoch': 7.65} |
|
|
|
05/30/2024 17:23:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4759, 'learning_rate': 6.4345e-06, 'epoch': 7.66} |
|
|
|
05/30/2024 17:25:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4414, 'learning_rate': 6.3853e-06, 'epoch': 7.67} |
|
|
|
05/30/2024 17:27:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4473, 'learning_rate': 6.3362e-06, 'epoch': 7.68} |
|
|
|
05/30/2024 17:29:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5081, 'learning_rate': 6.2872e-06, 'epoch': 7.69} |
|
|
|
05/30/2024 17:29:26 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4100 |
|
|
|
05/30/2024 17:29:26 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4100/tokenizer_config.json |
|
|
|
05/30/2024 17:29:26 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4100/special_tokens_map.json |
|
|
|
05/30/2024 17:31:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4727, 'learning_rate': 6.2385e-06, 'epoch': 7.70} |
|
|
|
05/30/2024 17:33:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4345, 'learning_rate': 6.1898e-06, 'epoch': 7.71} |
|
|
|
05/30/2024 17:34:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4512, 'learning_rate': 6.1414e-06, 'epoch': 7.72} |
|
|
|
05/30/2024 17:36:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4419, 'learning_rate': 6.0931e-06, 'epoch': 7.73} |
|
|
|
05/30/2024 17:38:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4436, 'learning_rate': 6.0450e-06, 'epoch': 7.74} |
|
|
|
05/30/2024 17:40:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4834, 'learning_rate': 5.9970e-06, 'epoch': 7.74} |
|
|
|
05/30/2024 17:42:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4311, 'learning_rate': 5.9492e-06, 'epoch': 7.75} |
|
|
|
05/30/2024 17:44:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5012, 'learning_rate': 5.9016e-06, 'epoch': 7.76} |
|
|
|
05/30/2024 17:46:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4896, 'learning_rate': 5.8542e-06, 'epoch': 7.77} |
|
|
|
05/30/2024 17:47:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4757, 'learning_rate': 5.8069e-06, 'epoch': 7.78} |
|
|
|
05/30/2024 17:49:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4583, 'learning_rate': 5.7597e-06, 'epoch': 7.79} |
|
|
|
05/30/2024 17:51:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4696, 'learning_rate': 5.7128e-06, 'epoch': 7.80} |
|
|
|
05/30/2024 17:53:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4897, 'learning_rate': 5.6660e-06, 'epoch': 7.81} |
|
|
|
05/30/2024 17:55:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4490, 'learning_rate': 5.6194e-06, 'epoch': 7.82} |
|
|
|
05/30/2024 17:56:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4666, 'learning_rate': 5.5729e-06, 'epoch': 7.83} |
|
|
|
05/30/2024 17:58:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4782, 'learning_rate': 5.5266e-06, 'epoch': 7.84} |
|
|
|
05/30/2024 18:00:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5104, 'learning_rate': 5.4805e-06, 'epoch': 7.85} |
|
|
|
05/30/2024 18:02:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4603, 'learning_rate': 5.4345e-06, 'epoch': 7.86} |
|
|
|
05/30/2024 18:04:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4539, 'learning_rate': 5.3888e-06, 'epoch': 7.87} |
|
|
|
05/30/2024 18:06:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5024, 'learning_rate': 5.3432e-06, 'epoch': 7.88} |
|
|
|
05/30/2024 18:06:11 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4200 |
|
|
|
05/30/2024 18:06:11 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4200/tokenizer_config.json |
|
|
|
05/30/2024 18:06:11 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4200/special_tokens_map.json |
|
|
|
05/30/2024 18:07:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4666, 'learning_rate': 5.2977e-06, 'epoch': 7.89} |
|
|
|
05/30/2024 18:09:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4396, 'learning_rate': 5.2524e-06, 'epoch': 7.89} |
|
|
|
05/30/2024 18:11:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4670, 'learning_rate': 5.2074e-06, 'epoch': 7.90} |
|
|
|
05/30/2024 18:13:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4677, 'learning_rate': 5.1624e-06, 'epoch': 7.91} |
|
|
|
05/30/2024 18:15:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5007, 'learning_rate': 5.1177e-06, 'epoch': 7.92} |
|
|
|
05/30/2024 18:17:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4522, 'learning_rate': 5.0731e-06, 'epoch': 7.93} |
|
|
|
05/30/2024 18:18:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4750, 'learning_rate': 5.0287e-06, 'epoch': 7.94} |
|
|
|
05/30/2024 18:20:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4216, 'learning_rate': 4.9845e-06, 'epoch': 7.95} |
|
|
|
05/30/2024 18:22:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4799, 'learning_rate': 4.9404e-06, 'epoch': 7.96} |
|
|
|
05/30/2024 18:24:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4964, 'learning_rate': 4.8965e-06, 'epoch': 7.97} |
|
|
|
05/30/2024 18:26:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4637, 'learning_rate': 4.8528e-06, 'epoch': 7.98} |
|
|
|
05/30/2024 18:27:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4787, 'learning_rate': 4.8093e-06, 'epoch': 7.99} |
|
|
|
05/30/2024 18:29:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5012, 'learning_rate': 4.7659e-06, 'epoch': 8.00} |
|
|
|
05/30/2024 18:31:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4882, 'learning_rate': 4.7227e-06, 'epoch': 8.01} |
|
|
|
05/30/2024 18:33:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4834, 'learning_rate': 4.6797e-06, 'epoch': 8.02} |
|
|
|
05/30/2024 18:35:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4774, 'learning_rate': 4.6369e-06, 'epoch': 8.03} |
|
|
|
05/30/2024 18:37:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4667, 'learning_rate': 4.5942e-06, 'epoch': 8.04} |
|
|
|
05/30/2024 18:39:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4927, 'learning_rate': 4.5518e-06, 'epoch': 8.05} |
|
|
|
05/30/2024 18:41:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4911, 'learning_rate': 4.5095e-06, 'epoch': 8.05} |
|
|
|
05/30/2024 18:42:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4546, 'learning_rate': 4.4673e-06, 'epoch': 8.06} |
|
|
|
05/30/2024 18:42:48 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4300 |
|
|
|
05/30/2024 18:42:48 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4300/tokenizer_config.json |
|
|
|
05/30/2024 18:42:48 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4300/special_tokens_map.json |
|
|
|
05/30/2024 18:44:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4451, 'learning_rate': 4.4254e-06, 'epoch': 8.07} |
|
|
|
05/30/2024 18:46:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4917, 'learning_rate': 4.3836e-06, 'epoch': 8.08} |
|
|
|
05/30/2024 18:48:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4624, 'learning_rate': 4.3421e-06, 'epoch': 8.09} |
|
|
|
05/30/2024 18:50:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4680, 'learning_rate': 4.3006e-06, 'epoch': 8.10} |
|
|
|
05/30/2024 18:51:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4399, 'learning_rate': 4.2594e-06, 'epoch': 8.11} |
|
|
|
05/30/2024 18:53:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4713, 'learning_rate': 4.2184e-06, 'epoch': 8.12} |
|
|
|
05/30/2024 18:55:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4594, 'learning_rate': 4.1775e-06, 'epoch': 8.13} |
|
|
|
05/30/2024 18:57:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4343, 'learning_rate': 4.1368e-06, 'epoch': 8.14} |
|
|
|
05/30/2024 18:59:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4792, 'learning_rate': 4.0963e-06, 'epoch': 8.15} |
|
|
|
05/30/2024 19:01:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4790, 'learning_rate': 4.0560e-06, 'epoch': 8.16} |
|
|
|
05/30/2024 19:03:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4490, 'learning_rate': 4.0159e-06, 'epoch': 8.17} |
|
|
|
05/30/2024 19:04:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4678, 'learning_rate': 3.9759e-06, 'epoch': 8.18} |
|
|
|
05/30/2024 19:06:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5058, 'learning_rate': 3.9361e-06, 'epoch': 8.19} |
|
|
|
05/30/2024 19:08:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4468, 'learning_rate': 3.8965e-06, 'epoch': 8.20} |
|
|
|
05/30/2024 19:10:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5227, 'learning_rate': 3.8571e-06, 'epoch': 8.20} |
|
|
|
05/30/2024 19:12:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4476, 'learning_rate': 3.8179e-06, 'epoch': 8.21} |
|
|
|
05/30/2024 19:13:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4478, 'learning_rate': 3.7789e-06, 'epoch': 8.22} |
|
|
|
05/30/2024 19:15:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4615, 'learning_rate': 3.7400e-06, 'epoch': 8.23} |
|
|
|
05/30/2024 19:17:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4739, 'learning_rate': 3.7013e-06, 'epoch': 8.24} |
|
|
|
05/30/2024 19:19:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4600, 'learning_rate': 3.6629e-06, 'epoch': 8.25} |
|
|
|
05/30/2024 19:19:27 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4400 |
|
|
|
05/30/2024 19:19:27 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4400/tokenizer_config.json |
|
|
|
05/30/2024 19:19:27 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4400/special_tokens_map.json |
|
|
|
05/30/2024 19:21:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4620, 'learning_rate': 3.6245e-06, 'epoch': 8.26} |
|
|
|
05/30/2024 19:23:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4619, 'learning_rate': 3.5864e-06, 'epoch': 8.27} |
|
|
|
05/30/2024 19:24:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4997, 'learning_rate': 3.5485e-06, 'epoch': 8.28} |
|
|
|
05/30/2024 19:26:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4313, 'learning_rate': 3.5108e-06, 'epoch': 8.29} |
|
|
|
05/30/2024 19:28:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4627, 'learning_rate': 3.4732e-06, 'epoch': 8.30} |
|
|
|
05/30/2024 19:30:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4752, 'learning_rate': 3.4358e-06, 'epoch': 8.31} |
|
|
|
05/30/2024 19:32:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4480, 'learning_rate': 3.3987e-06, 'epoch': 8.32} |
|
|
|
05/30/2024 19:33:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4445, 'learning_rate': 3.3617e-06, 'epoch': 8.33} |
|
|
|
05/30/2024 19:35:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5237, 'learning_rate': 3.3248e-06, 'epoch': 8.34} |
|
|
|
05/30/2024 19:37:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4536, 'learning_rate': 3.2882e-06, 'epoch': 8.35} |
|
|
|
05/30/2024 19:39:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4902, 'learning_rate': 3.2518e-06, 'epoch': 8.35} |
|
|
|
05/30/2024 19:41:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4647, 'learning_rate': 3.2156e-06, 'epoch': 8.36} |
|
|
|
05/30/2024 19:43:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4788, 'learning_rate': 3.1795e-06, 'epoch': 8.37} |
|
|
|
05/30/2024 19:45:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4537, 'learning_rate': 3.1436e-06, 'epoch': 8.38} |
|
|
|
05/30/2024 19:47:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4691, 'learning_rate': 3.1080e-06, 'epoch': 8.39} |
|
|
|
05/30/2024 19:48:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4293, 'learning_rate': 3.0725e-06, 'epoch': 8.40} |
|
|
|
05/30/2024 19:50:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4705, 'learning_rate': 3.0372e-06, 'epoch': 8.41} |
|
|
|
05/30/2024 19:52:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4945, 'learning_rate': 3.0021e-06, 'epoch': 8.42} |
|
|
|
05/30/2024 19:54:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4426, 'learning_rate': 2.9672e-06, 'epoch': 8.43} |
|
|
|
05/30/2024 19:56:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4601, 'learning_rate': 2.9325e-06, 'epoch': 8.44} |
|
|
|
05/30/2024 19:56:07 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4500 |
|
|
|
05/30/2024 19:56:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4500/tokenizer_config.json |
|
|
|
05/30/2024 19:56:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4500/special_tokens_map.json |
|
|
|
05/30/2024 19:57:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4699, 'learning_rate': 2.8979e-06, 'epoch': 8.45} |
|
|
|
05/30/2024 19:59:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4467, 'learning_rate': 2.8636e-06, 'epoch': 8.46} |
|
|
|
05/30/2024 20:01:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4313, 'learning_rate': 2.8295e-06, 'epoch': 8.47} |
|
|
|
05/30/2024 20:03:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4342, 'learning_rate': 2.7955e-06, 'epoch': 8.48} |
|
|
|
05/30/2024 20:05:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5055, 'learning_rate': 2.7617e-06, 'epoch': 8.49} |
|
|
|
05/30/2024 20:07:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4588, 'learning_rate': 2.7282e-06, 'epoch': 8.50} |
|
|
|
05/30/2024 20:08:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4596, 'learning_rate': 2.6948e-06, 'epoch': 8.50} |
|
|
|
05/30/2024 20:10:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4124, 'learning_rate': 2.6616e-06, 'epoch': 8.51} |
|
|
|
05/30/2024 20:12:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4617, 'learning_rate': 2.6287e-06, 'epoch': 8.52} |
|
|
|
05/30/2024 20:14:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4806, 'learning_rate': 2.5959e-06, 'epoch': 8.53} |
|
|
|
05/30/2024 20:16:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4430, 'learning_rate': 2.5633e-06, 'epoch': 8.54} |
|
|
|
05/30/2024 20:18:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4374, 'learning_rate': 2.5309e-06, 'epoch': 8.55} |
|
|
|
05/30/2024 20:19:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4929, 'learning_rate': 2.4987e-06, 'epoch': 8.56} |
|
|
|
05/30/2024 20:21:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4436, 'learning_rate': 2.4667e-06, 'epoch': 8.57} |
|
|
|
05/30/2024 20:23:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4618, 'learning_rate': 2.4348e-06, 'epoch': 8.58} |
|
|
|
05/30/2024 20:25:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4450, 'learning_rate': 2.4032e-06, 'epoch': 8.59} |
|
|
|
05/30/2024 20:27:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4817, 'learning_rate': 2.3718e-06, 'epoch': 8.60} |
|
|
|
05/30/2024 20:29:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4967, 'learning_rate': 2.3406e-06, 'epoch': 8.61} |
|
|
|
05/30/2024 20:30:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4940, 'learning_rate': 2.3095e-06, 'epoch': 8.62} |
|
|
|
05/30/2024 20:32:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4938, 'learning_rate': 2.2787e-06, 'epoch': 8.63} |
|
|
|
05/30/2024 20:32:49 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4600 |
|
|
|
05/30/2024 20:32:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4600/tokenizer_config.json |
|
|
|
05/30/2024 20:32:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4600/special_tokens_map.json |
|
|
|
05/30/2024 20:34:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5367, 'learning_rate': 2.2481e-06, 'epoch': 8.64} |
|
|
|
05/30/2024 20:36:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4752, 'learning_rate': 2.2176e-06, 'epoch': 8.65} |
|
|
|
05/30/2024 20:38:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4566, 'learning_rate': 2.1874e-06, 'epoch': 8.65} |
|
|
|
05/30/2024 20:40:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5024, 'learning_rate': 2.1574e-06, 'epoch': 8.66} |
|
|
|
05/30/2024 20:41:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4544, 'learning_rate': 2.1275e-06, 'epoch': 8.67} |
|
|
|
05/30/2024 20:43:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4804, 'learning_rate': 2.0979e-06, 'epoch': 8.68} |
|
|
|
05/30/2024 20:45:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4394, 'learning_rate': 2.0684e-06, 'epoch': 8.69} |
|
|
|
05/30/2024 20:47:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4428, 'learning_rate': 2.0392e-06, 'epoch': 8.70} |
|
|
|
05/30/2024 20:49:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4277, 'learning_rate': 2.0102e-06, 'epoch': 8.71} |
|
|
|
05/30/2024 20:51:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4202, 'learning_rate': 1.9813e-06, 'epoch': 8.72} |
|
|
|
05/30/2024 20:53:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4236, 'learning_rate': 1.9527e-06, 'epoch': 8.73} |
|
|
|
05/30/2024 20:54:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5411, 'learning_rate': 1.9242e-06, 'epoch': 8.74} |
|
|
|
05/30/2024 20:56:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4761, 'learning_rate': 1.8960e-06, 'epoch': 8.75} |
|
|
|
05/30/2024 20:58:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4762, 'learning_rate': 1.8679e-06, 'epoch': 8.76} |
|
|
|
05/30/2024 21:00:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4814, 'learning_rate': 1.8401e-06, 'epoch': 8.77} |
|
|
|
05/30/2024 21:02:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4529, 'learning_rate': 1.8124e-06, 'epoch': 8.78} |
|
|
|
05/30/2024 21:04:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4391, 'learning_rate': 1.7850e-06, 'epoch': 8.79} |
|
|
|
05/30/2024 21:05:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4894, 'learning_rate': 1.7578e-06, 'epoch': 8.80} |
|
|
|
05/30/2024 21:07:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4825, 'learning_rate': 1.7307e-06, 'epoch': 8.80} |
|
|
|
05/30/2024 21:09:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4902, 'learning_rate': 1.7039e-06, 'epoch': 8.81} |
|
|
|
05/30/2024 21:09:49 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4700 |
|
|
|
05/30/2024 21:09:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4700/tokenizer_config.json |
|
|
|
05/30/2024 21:09:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4700/special_tokens_map.json |
|
|
|
05/30/2024 21:11:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4537, 'learning_rate': 1.6773e-06, 'epoch': 8.82} |
|
|
|
05/30/2024 21:13:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4396, 'learning_rate': 1.6508e-06, 'epoch': 8.83} |
|
|
|
05/30/2024 21:15:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4696, 'learning_rate': 1.6246e-06, 'epoch': 8.84} |
|
|
|
05/30/2024 21:17:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.4470, 'learning_rate': 1.5986e-06, 'epoch': 8.85} |
|
|
|
05/30/2024 21:18:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.4521, 'learning_rate': 1.5727e-06, 'epoch': 8.86} |
|
|
|
05/30/2024 21:20:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4531, 'learning_rate': 1.5471e-06, 'epoch': 8.87} |
|
|
|
05/30/2024 21:22:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4411, 'learning_rate': 1.5217e-06, 'epoch': 8.88} |
|
|
|
05/30/2024 21:24:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.4536, 'learning_rate': 1.4965e-06, 'epoch': 8.89} |
|
|
|
05/30/2024 21:26:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4556, 'learning_rate': 1.4715e-06, 'epoch': 8.90} |
|
|
|
05/30/2024 21:27:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4710, 'learning_rate': 1.4467e-06, 'epoch': 8.91} |
|
|
|
05/30/2024 21:29:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.4636, 'learning_rate': 1.4221e-06, 'epoch': 8.92} |
|
|
|
05/30/2024 21:31:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4869, 'learning_rate': 1.3977e-06, 'epoch': 8.93} |
|
|
|
05/30/2024 21:33:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.4603, 'learning_rate': 1.3735e-06, 'epoch': 8.94} |
|
|
|
05/30/2024 21:35:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4551, 'learning_rate': 1.3495e-06, 'epoch': 8.95} |
|
|
|
05/30/2024 21:36:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4682, 'learning_rate': 1.3258e-06, 'epoch': 8.95} |
|
|
|
05/30/2024 21:38:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4869, 'learning_rate': 1.3022e-06, 'epoch': 8.96} |
|
|
|
05/30/2024 21:40:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.4708, 'learning_rate': 1.2788e-06, 'epoch': 8.97} |
|
|
|
05/30/2024 21:42:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4763, 'learning_rate': 1.2557e-06, 'epoch': 8.98} |
|
|
|
05/30/2024 21:44:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4321, 'learning_rate': 1.2327e-06, 'epoch': 8.99} |
|
|
|
05/30/2024 21:45:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4568, 'learning_rate': 1.2100e-06, 'epoch': 9.00} |
|
|
|
05/30/2024 21:45:55 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4800 |
|
|
|
05/30/2024 21:45:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4800/tokenizer_config.json |
|
|
|
05/30/2024 21:45:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4800/special_tokens_map.json |
|
|
|
05/30/2024 21:47:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4319, 'learning_rate': 1.1874e-06, 'epoch': 9.01} |
|
|
|
05/30/2024 21:49:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4318, 'learning_rate': 1.1651e-06, 'epoch': 9.02} |
|
|
|
05/30/2024 21:51:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4779, 'learning_rate': 1.1430e-06, 'epoch': 9.03} |
|
|
|
05/30/2024 21:53:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4774, 'learning_rate': 1.1210e-06, 'epoch': 9.04} |
|
|
|
05/30/2024 21:55:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.4502, 'learning_rate': 1.0993e-06, 'epoch': 9.05} |
|
|
|
05/30/2024 21:56:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.4914, 'learning_rate': 1.0778e-06, 'epoch': 9.06} |
|
|
|
05/30/2024 21:58:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5026, 'learning_rate': 1.0565e-06, 'epoch': 9.07} |
|
|
|
05/30/2024 22:00:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4502, 'learning_rate': 1.0354e-06, 'epoch': 9.08} |
|
|
|
05/30/2024 22:02:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4581, 'learning_rate': 1.0146e-06, 'epoch': 9.09} |
|
|
|
05/30/2024 22:04:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4844, 'learning_rate': 9.9389e-07, 'epoch': 9.10} |
|
|
|
05/30/2024 22:05:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4665, 'learning_rate': 9.7343e-07, 'epoch': 9.10} |
|
|
|
05/30/2024 22:07:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4676, 'learning_rate': 9.5317e-07, 'epoch': 9.11} |
|
|
|
05/30/2024 22:09:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4515, 'learning_rate': 9.3313e-07, 'epoch': 9.12} |
|
|
|
05/30/2024 22:11:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4952, 'learning_rate': 9.1329e-07, 'epoch': 9.13} |
|
|
|
05/30/2024 22:13:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4212, 'learning_rate': 8.9366e-07, 'epoch': 9.14} |
|
|
|
05/30/2024 22:15:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4894, 'learning_rate': 8.7424e-07, 'epoch': 9.15} |
|
|
|
05/30/2024 22:16:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4994, 'learning_rate': 8.5504e-07, 'epoch': 9.16} |
|
|
|
05/30/2024 22:18:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.4901, 'learning_rate': 8.3604e-07, 'epoch': 9.17} |
|
|
|
05/30/2024 22:20:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.4390, 'learning_rate': 8.1725e-07, 'epoch': 9.18} |
|
|
|
05/30/2024 22:22:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4916, 'learning_rate': 7.9867e-07, 'epoch': 9.19} |
|
|
|
05/30/2024 22:22:22 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4900 |
|
|
|
05/30/2024 22:22:22 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4900/tokenizer_config.json |
|
|
|
05/30/2024 22:22:22 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-4900/special_tokens_map.json |
|
|
|
05/30/2024 22:24:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4551, 'learning_rate': 7.8030e-07, 'epoch': 9.20} |
|
|
|
05/30/2024 22:25:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.4374, 'learning_rate': 7.6214e-07, 'epoch': 9.21} |
|
|
|
05/30/2024 22:27:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.4394, 'learning_rate': 7.4419e-07, 'epoch': 9.22} |
|
|
|
05/30/2024 22:29:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4776, 'learning_rate': 7.2645e-07, 'epoch': 9.23} |
|
|
|
05/30/2024 22:31:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.4765, 'learning_rate': 7.0893e-07, 'epoch': 9.24} |
|
|
|
05/30/2024 22:33:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.4483, 'learning_rate': 6.9161e-07, 'epoch': 9.25} |
|
|
|
05/30/2024 22:34:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4666, 'learning_rate': 6.7451e-07, 'epoch': 9.25} |
|
|
|
05/30/2024 22:36:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4642, 'learning_rate': 6.5761e-07, 'epoch': 9.26} |
|
|
|
05/30/2024 22:38:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4778, 'learning_rate': 6.4093e-07, 'epoch': 9.27} |
|
|
|
05/30/2024 22:40:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5415, 'learning_rate': 6.2446e-07, 'epoch': 9.28} |
|
|
|
05/30/2024 22:42:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4408, 'learning_rate': 6.0820e-07, 'epoch': 9.29} |
|
|
|
05/30/2024 22:44:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4813, 'learning_rate': 5.9216e-07, 'epoch': 9.30} |
|
|
|
05/30/2024 22:45:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.4678, 'learning_rate': 5.7632e-07, 'epoch': 9.31} |
|
|
|
05/30/2024 22:47:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.4045, 'learning_rate': 5.6070e-07, 'epoch': 9.32} |
|
|
|
05/30/2024 22:49:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4684, 'learning_rate': 5.4529e-07, 'epoch': 9.33} |
|
|
|
05/30/2024 22:51:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5042, 'learning_rate': 5.3009e-07, 'epoch': 9.34} |
|
|
|
05/30/2024 22:53:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4415, 'learning_rate': 5.1511e-07, 'epoch': 9.35} |
|
|
|
05/30/2024 22:55:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5308, 'learning_rate': 5.0033e-07, 'epoch': 9.36} |
|
|
|
05/30/2024 22:56:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4722, 'learning_rate': 4.8577e-07, 'epoch': 9.37} |
|
|
|
05/30/2024 22:58:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4407, 'learning_rate': 4.7143e-07, 'epoch': 9.38} |
|
|
|
05/30/2024 22:58:41 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5000 |
|
|
|
05/30/2024 22:58:41 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5000/tokenizer_config.json |
|
|
|
05/30/2024 22:58:41 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5000/special_tokens_map.json |
|
|
|
05/30/2024 23:00:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.4575, 'learning_rate': 4.5729e-07, 'epoch': 9.39} |
|
|
|
05/30/2024 23:02:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4909, 'learning_rate': 4.4337e-07, 'epoch': 9.40} |
|
|
|
05/30/2024 23:04:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5180, 'learning_rate': 4.2966e-07, 'epoch': 9.40} |
|
|
|
05/30/2024 23:06:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4587, 'learning_rate': 4.1617e-07, 'epoch': 9.41} |
|
|
|
05/30/2024 23:08:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4571, 'learning_rate': 4.0289e-07, 'epoch': 9.42} |
|
|
|
05/30/2024 23:09:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.4470, 'learning_rate': 3.8982e-07, 'epoch': 9.43} |
|
|
|
05/30/2024 23:11:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.4907, 'learning_rate': 3.7697e-07, 'epoch': 9.44} |
|
|
|
05/30/2024 23:13:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4578, 'learning_rate': 3.6433e-07, 'epoch': 9.45} |
|
|
|
05/30/2024 23:15:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5003, 'learning_rate': 3.5190e-07, 'epoch': 9.46} |
|
|
|
05/30/2024 23:17:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4720, 'learning_rate': 3.3969e-07, 'epoch': 9.47} |
|
|
|
05/30/2024 23:19:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4534, 'learning_rate': 3.2769e-07, 'epoch': 9.48} |
|
|
|
05/30/2024 23:20:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4678, 'learning_rate': 3.1591e-07, 'epoch': 9.49} |
|
|
|
05/30/2024 23:22:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.4331, 'learning_rate': 3.0434e-07, 'epoch': 9.50} |
|
|
|
05/30/2024 23:24:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.4439, 'learning_rate': 2.9299e-07, 'epoch': 9.51} |
|
|
|
05/30/2024 23:26:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.4496, 'learning_rate': 2.8185e-07, 'epoch': 9.52} |
|
|
|
05/30/2024 23:28:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.4530, 'learning_rate': 2.7093e-07, 'epoch': 9.53} |
|
|
|
05/30/2024 23:30:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5147, 'learning_rate': 2.6022e-07, 'epoch': 9.54} |
|
|
|
05/30/2024 23:31:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.4778, 'learning_rate': 2.4972e-07, 'epoch': 9.55} |
|
|
|
05/30/2024 23:33:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.4641, 'learning_rate': 2.3944e-07, 'epoch': 9.55} |
|
|
|
05/30/2024 23:35:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.4418, 'learning_rate': 2.2937e-07, 'epoch': 9.56} |
|
|
|
05/30/2024 23:35:45 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5100 |
|
|
|
05/30/2024 23:35:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5100/tokenizer_config.json |
|
|
|
05/30/2024 23:35:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5100/special_tokens_map.json |
|
|
|
05/30/2024 23:37:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5068, 'learning_rate': 2.1952e-07, 'epoch': 9.57} |
|
|
|
05/30/2024 23:39:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.4200, 'learning_rate': 2.0989e-07, 'epoch': 9.58} |
|
|
|
05/30/2024 23:41:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4307, 'learning_rate': 2.0047e-07, 'epoch': 9.59} |
|
|
|
05/30/2024 23:43:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.4639, 'learning_rate': 1.9127e-07, 'epoch': 9.60} |
|
|
|
05/30/2024 23:45:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.4664, 'learning_rate': 1.8228e-07, 'epoch': 9.61} |
|
|
|
05/30/2024 23:46:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.4853, 'learning_rate': 1.7351e-07, 'epoch': 9.62} |
|
|
|
05/30/2024 23:48:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.4494, 'learning_rate': 1.6495e-07, 'epoch': 9.63} |
|
|
|
05/30/2024 23:50:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5302, 'learning_rate': 1.5661e-07, 'epoch': 9.64} |
|
|
|
05/30/2024 23:52:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.4483, 'learning_rate': 1.4848e-07, 'epoch': 9.65} |
|
|
|
05/30/2024 23:54:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.4800, 'learning_rate': 1.4057e-07, 'epoch': 9.66} |
|
|
|
05/30/2024 23:56:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.4586, 'learning_rate': 1.3288e-07, 'epoch': 9.67} |
|
|
|
05/30/2024 23:58:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.4381, 'learning_rate': 1.2540e-07, 'epoch': 9.68} |
|
|
|
05/30/2024 23:59:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4319, 'learning_rate': 1.1814e-07, 'epoch': 9.69} |
|
|
|
05/31/2024 00:01:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.4772, 'learning_rate': 1.1109e-07, 'epoch': 9.70} |
|
|
|
05/31/2024 00:03:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5125, 'learning_rate': 1.0426e-07, 'epoch': 9.70} |
|
|
|
05/31/2024 00:05:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.4311, 'learning_rate': 9.7646e-08, 'epoch': 9.71} |
|
|
|
05/31/2024 00:07:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5005, 'learning_rate': 9.1249e-08, 'epoch': 9.72} |
|
|
|
05/31/2024 00:08:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4248, 'learning_rate': 8.5068e-08, 'epoch': 9.73} |
|
|
|
05/31/2024 00:10:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.4287, 'learning_rate': 7.9103e-08, 'epoch': 9.74} |
|
|
|
05/31/2024 00:12:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.4535, 'learning_rate': 7.3355e-08, 'epoch': 9.75} |
|
|
|
05/31/2024 00:12:31 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5200 |
|
|
|
05/31/2024 00:12:31 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5200/tokenizer_config.json |
|
|
|
05/31/2024 00:12:31 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5200/special_tokens_map.json |
|
|
|
05/31/2024 00:14:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.4731, 'learning_rate': 6.7823e-08, 'epoch': 9.76} |
|
|
|
05/31/2024 00:16:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.4730, 'learning_rate': 6.2508e-08, 'epoch': 9.77} |
|
|
|
05/31/2024 00:17:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4412, 'learning_rate': 5.7410e-08, 'epoch': 9.78} |
|
|
|
05/31/2024 00:19:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.4863, 'learning_rate': 5.2528e-08, 'epoch': 9.79} |
|
|
|
05/31/2024 00:21:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4684, 'learning_rate': 4.7862e-08, 'epoch': 9.80} |
|
|
|
05/31/2024 00:23:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.4320, 'learning_rate': 4.3414e-08, 'epoch': 9.81} |
|
|
|
05/31/2024 00:25:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.4606, 'learning_rate': 3.9182e-08, 'epoch': 9.82} |
|
|
|
05/31/2024 00:27:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.4296, 'learning_rate': 3.5167e-08, 'epoch': 9.83} |
|
|
|
05/31/2024 00:29:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.4437, 'learning_rate': 3.1369e-08, 'epoch': 9.84} |
|
|
|
05/31/2024 00:30:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4616, 'learning_rate': 2.7788e-08, 'epoch': 9.85} |
|
|
|
05/31/2024 00:32:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.4561, 'learning_rate': 2.4423e-08, 'epoch': 9.85} |
|
|
|
05/31/2024 00:34:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.4436, 'learning_rate': 2.1276e-08, 'epoch': 9.86} |
|
|
|
05/31/2024 00:36:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4756, 'learning_rate': 1.8345e-08, 'epoch': 9.87} |
|
|
|
05/31/2024 00:38:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4342, 'learning_rate': 1.5632e-08, 'epoch': 9.88} |
|
|
|
05/31/2024 00:40:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5266, 'learning_rate': 1.3135e-08, 'epoch': 9.89} |
|
|
|
05/31/2024 00:41:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.4431, 'learning_rate': 1.0856e-08, 'epoch': 9.90} |
|
|
|
05/31/2024 00:43:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.4554, 'learning_rate': 8.7934e-09, 'epoch': 9.91} |
|
|
|
05/31/2024 00:45:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.4767, 'learning_rate': 6.9479e-09, 'epoch': 9.92} |
|
|
|
05/31/2024 00:47:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.4756, 'learning_rate': 5.3196e-09, 'epoch': 9.93} |
|
|
|
05/31/2024 00:49:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.4476, 'learning_rate': 3.9083e-09, 'epoch': 9.94} |
|
|
|
05/31/2024 00:49:14 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5300 |
|
|
|
05/31/2024 00:49:14 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5300/tokenizer_config.json |
|
|
|
05/31/2024 00:49:14 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/checkpoint-5300/special_tokens_map.json |
|
|
|
05/31/2024 00:51:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.4690, 'learning_rate': 2.7141e-09, 'epoch': 9.95} |
|
|
|
05/31/2024 00:52:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.4784, 'learning_rate': 1.7370e-09, 'epoch': 9.96} |
|
|
|
05/31/2024 00:54:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.4766, 'learning_rate': 9.7709e-10, 'epoch': 9.97} |
|
|
|
05/31/2024 00:56:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.4395, 'learning_rate': 4.3426e-10, 'epoch': 9.98} |
|
|
|
05/31/2024 00:58:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.4473, 'learning_rate': 1.0857e-10, 'epoch': 9.99} |
|
|
|
05/31/2024 01:00:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.4661, 'learning_rate': 0.0000e+00, 'epoch': 10.00} |
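The learning rate above has decayed smoothly to exactly 0 at epoch 10.00, with the last few steps in the 1e-10 to 1e-09 range; this tail, shrinking roughly quadratically toward zero, is consistent with a cosine-annealing schedule, although the scheduler type and peak learning rate are not shown in this excerpt. Under that assumption, the rate at optimizer step t out of T total steps would follow

    \eta_t = \frac{\eta_{\max}}{2} \left( 1 + \cos\left( \frac{\pi t}{T} \right) \right)

which equals \eta_{\max} at t = 0 (after any warmup) and 0 at t = T, matching the final entry above.
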
|
|
|
05/31/2024 01:00:09 - INFO - transformers.trainer - |
|
|
|
Training completed. Do not forget to share your model on huggingface.co/models =) |
|
|
|
|
|
|
|
05/31/2024 01:00:09 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1 |
|
|
|
05/31/2024 01:00:09 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/tokenizer_config.json |
|
|
|
05/31/2024 01:00:09 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1/special_tokens_map.json |
|
|
|
05/31/2024 01:00:09 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields: |
|
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}} |
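
With training finished and the final weights, tokenizer config, and special-tokens map written to /datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1, a quick generation smoke test might look like the sketch below. This part of the log does not show whether the run saved full model weights or only a LoRA-style adapter; the sketch assumes the output directory is directly loadable by transformers, and an adapter-only checkpoint would instead have to be attached to the base Mistral-7B-Instruct-v0.1 model (for example with peft.PeftModel.from_pretrained). The example prompt is hypothetical and only follows the Mistral-Instruct [INST] format.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Post-training smoke test (a sketch, not part of the original run).
# Assumes the output directory contains weights loadable by transformers.
output_dir = "/datas/wangm/LLM4LangGPT/output/Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(output_dir)
model = AutoModelForCausalLM.from_pretrained(
    output_dir, torch_dtype="auto", device_map="auto"
)

# Hypothetical test prompt in the Mistral-Instruct chat format.
prompt = "[INST] Write a structured prompt for a translation assistant. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))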
|
|
|
|