emoji_Mistral7B_v2_lora / running_log.txt
05/15/2024 20:34:24 - INFO - transformers.tokenization_utils_base - loading file tokenizer.model from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/tokenizer.model
05/15/2024 20:34:24 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/tokenizer.json
05/15/2024 20:34:24 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
05/15/2024 20:34:24 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/special_tokens_map.json
05/15/2024 20:34:24 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/tokenizer_config.json
05/15/2024 20:34:24 - WARNING - transformers.models.llama.tokenization_llama_fast - You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
05/15/2024 20:34:25 - INFO - llmtuner.data.template - Add pad token: </s>
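The tokenizer lines above correspond roughly to the following transformers calls; this is a minimal sketch assuming the public AutoTokenizer API rather than llmtuner's internal loader, with the pad-token choice mirroring the "Add pad token: </s>" line.

    from transformers import AutoTokenizer

    # Load the fast tokenizer for the base model (pulls tokenizer.model, tokenizer.json,
    # special_tokens_map.json and tokenizer_config.json from the Hub cache).
    tokenizer = AutoTokenizer.from_pretrained("alpindale/Mistral-7B-v0.2-hf")

    # Mirror "Add pad token: </s>": the base Mistral tokenizer ships without a pad
    # token, so the EOS token is reused for padding.
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # "</s>"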
05/15/2024 20:34:25 - INFO - llmtuner.data.loader - Loading dataset svjack/emoji_add_instruction_zh...
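Loading the instruction dataset named above could look like the sketch below; it assumes the standard datasets API and a default train split, not llmtuner's own loading path.

    from datasets import load_dataset

    # Fetch the emoji instruction dataset referenced in the log.
    dataset = load_dataset("svjack/emoji_add_instruction_zh", split="train")
    print(len(dataset))  # the trainer later reports 2,449 training examples after preprocessing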
05/15/2024 20:34:35 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/config.json
05/15/2024 20:34:35 - INFO - transformers.configuration_utils - Model config MistralConfig {
"_name_or_path": "alpindale/Mistral-7B-v0.2-hf",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.40.2",
"use_cache": true,
"vocab_size": 32000
}
05/15/2024 20:34:35 - INFO - llmtuner.model.utils.quantization - Quantizing model to 4 bit.
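The "Quantizing model to 4 bit" step is typically expressed through bitsandbytes via transformers; the sketch below shows the equivalent public API (BitsAndBytesConfig), with the NF4 quant type and float16 compute dtype as assumptions rather than a dump of llmtuner's exact settings.

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # 4-bit NF4 quantization with float16 compute, a common QLoRA-style setup.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )

    model = AutoModelForCausalLM.from_pretrained(
        "alpindale/Mistral-7B-v0.2-hf",
        quantization_config=bnb_config,
        torch_dtype=torch.float16,  # matches "default dtype torch.float16" in the log
    )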
05/15/2024 20:34:38 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/model.safetensors.index.json
05/15/2024 20:35:08 - INFO - transformers.modeling_utils - Instantiating MistralForCausalLM model under default dtype torch.float16.
05/15/2024 20:35:08 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
"bos_token_id": 1,
"eos_token_id": 2
}
05/15/2024 20:35:18 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing MistralForCausalLM.
05/15/2024 20:35:18 - INFO - transformers.modeling_utils - All the weights of MistralForCausalLM were initialized from the model checkpoint at alpindale/Mistral-7B-v0.2-hf.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MistralForCausalLM for predictions without further training.
05/15/2024 20:35:19 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/generation_config.json
05/15/2024 20:35:19 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
"bos_token_id": 1,
"eos_token_id": 2
}
05/15/2024 20:35:20 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/15/2024 20:35:20 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
05/15/2024 20:35:20 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
05/15/2024 20:35:20 - INFO - llmtuner.model.loader - trainable params: 3407872 || all params: 7245139968 || trainable%: 0.0470
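The trainable-parameter count is consistent with rank-8 LoRA adapters on the attention q_proj and v_proj matrices of all 32 layers; the arithmetic below reproduces the 3,407,872 figure under that assumption (the actual rank and target modules are not printed in this log).

    # Mistral-7B-v0.2 dimensions from the config above.
    hidden = 4096
    kv_dim = (4096 // 32) * 8      # head_dim * num_key_value_heads = 1024
    layers = 32
    r = 8                          # assumed LoRA rank

    q_proj = r * (hidden + hidden)   # LoRA A + B for the 4096 -> 4096 projection
    v_proj = r * (hidden + kv_dim)   # LoRA A + B for the 4096 -> 1024 projection
    trainable = layers * (q_proj + v_proj)
    print(trainable)                              # 3407872
    print(f"{trainable / 7_245_139_968:.4%}")     # ~0.0470%, matching the log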
05/15/2024 20:35:20 - WARNING - accelerate.utils.other - Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
05/15/2024 20:35:20 - INFO - transformers.trainer - Using auto half precision backend
05/15/2024 20:35:20 - INFO - transformers.trainer - ***** Running training *****
05/15/2024 20:35:20 - INFO - transformers.trainer - Num examples = 2,449
05/15/2024 20:35:20 - INFO - transformers.trainer - Num Epochs = 3
05/15/2024 20:35:20 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/15/2024 20:35:20 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/15/2024 20:35:20 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/15/2024 20:35:20 - INFO - transformers.trainer - Total optimization steps = 459
05/15/2024 20:35:20 - INFO - transformers.trainer - Number of trainable parameters = 3,407,872
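The hyperparameters above determine the 459 optimization steps: on a single device, the effective batch is 2 x 8 = 16 sequences, giving 153 update steps per pass over 2,449 examples (dropping the last partial batch) and 459 steps over three epochs. A quick check:

    num_examples = 2449
    per_device_batch = 2
    grad_accum = 8
    epochs = 3

    effective_batch = per_device_batch * grad_accum      # 16 (single GPU assumed)
    steps_per_epoch = num_examples // effective_batch     # 153
    total_steps = steps_per_epoch * epochs                # 459
    print(effective_batch, steps_per_epoch, total_steps)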
05/15/2024 20:37:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.1654, 'learning_rate': 4.9985e-05, 'epoch': 0.03}
05/15/2024 20:38:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.1369, 'learning_rate': 4.9941e-05, 'epoch': 0.07}
05/15/2024 20:40:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.1259, 'learning_rate': 4.9868e-05, 'epoch': 0.10}
05/15/2024 20:42:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.1041, 'learning_rate': 4.9766e-05, 'epoch': 0.13}
05/15/2024 20:43:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.1004, 'learning_rate': 4.9635e-05, 'epoch': 0.16}
05/15/2024 20:45:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.0941, 'learning_rate': 4.9475e-05, 'epoch': 0.20}
05/15/2024 20:47:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.0867, 'learning_rate': 4.9286e-05, 'epoch': 0.23}
05/15/2024 20:48:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.0853, 'learning_rate': 4.9069e-05, 'epoch': 0.26}
05/15/2024 20:50:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.0863, 'learning_rate': 4.8824e-05, 'epoch': 0.29}
05/15/2024 20:52:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.0874, 'learning_rate': 4.8550e-05, 'epoch': 0.33}
05/15/2024 20:53:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.0733, 'learning_rate': 4.8249e-05, 'epoch': 0.36}
05/15/2024 20:55:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.0719, 'learning_rate': 4.7921e-05, 'epoch': 0.39}
05/15/2024 20:57:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.0824, 'learning_rate': 4.7566e-05, 'epoch': 0.42}
05/15/2024 20:58:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.0731, 'learning_rate': 4.7185e-05, 'epoch': 0.46}
05/15/2024 21:00:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.0725, 'learning_rate': 4.6778e-05, 'epoch': 0.49}
05/15/2024 21:02:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.0694, 'learning_rate': 4.6345e-05, 'epoch': 0.52}
05/15/2024 21:03:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.0720, 'learning_rate': 4.5887e-05, 'epoch': 0.56}
05/15/2024 21:05:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.0646, 'learning_rate': 4.5405e-05, 'epoch': 0.59}
05/15/2024 21:07:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.0732, 'learning_rate': 4.4899e-05, 'epoch': 0.62}
05/15/2024 21:08:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.0680, 'learning_rate': 4.4369e-05, 'epoch': 0.65}
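The learning-rate column follows a cosine decay from a peak of 5e-5 over the 459 steps, logged every 5 steps. A minimal reproduction of the schedule, assuming no warmup since the first logged value is already essentially at the peak:

    import math

    peak_lr = 5e-5
    total_steps = 459

    def cosine_lr(step: int) -> float:
        # Cosine annealing from peak_lr down toward 0 over total_steps.
        return 0.5 * peak_lr * (1 + math.cos(math.pi * step / total_steps))

    print(f"{cosine_lr(5):.4e}")    # ~4.9985e-05, matching the first logged value
    print(f"{cosine_lr(455):.4e}")  # ~9.4e-09, close to the last logged 9.3687e-09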
05/15/2024 21:08:54 - INFO - transformers.trainer - Saving model checkpoint to saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-100
05/15/2024 21:08:55 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/config.json
05/15/2024 21:08:55 - INFO - transformers.configuration_utils - Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.40.2",
"use_cache": true,
"vocab_size": 32000
}
05/15/2024 21:08:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-100/tokenizer_config.json
05/15/2024 21:08:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-100/special_tokens_map.json
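Each checkpoint directory holds only the LoRA adapter plus the tokenizer files saved above, not full model weights; resuming or evaluating from one could look like this sketch, assuming the peft API and a separately re-loaded base model.

    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    ckpt = "saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-100"

    base = AutoModelForCausalLM.from_pretrained("alpindale/Mistral-7B-v0.2-hf")
    model = PeftModel.from_pretrained(base, ckpt)     # attaches the LoRA weights
    tokenizer = AutoTokenizer.from_pretrained(ckpt)   # tokenizer files saved with the checkpoint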
05/15/2024 21:10:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.0749, 'learning_rate': 4.3817e-05, 'epoch': 0.69}
05/15/2024 21:12:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.0652, 'learning_rate': 4.3243e-05, 'epoch': 0.72}
05/15/2024 21:14:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.0562, 'learning_rate': 4.2647e-05, 'epoch': 0.75}
05/15/2024 21:15:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.0602, 'learning_rate': 4.2031e-05, 'epoch': 0.78}
05/15/2024 21:17:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.0639, 'learning_rate': 4.1395e-05, 'epoch': 0.82}
05/15/2024 21:19:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.0651, 'learning_rate': 4.0740e-05, 'epoch': 0.85}
05/15/2024 21:20:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.0676, 'learning_rate': 4.0066e-05, 'epoch': 0.88}
05/15/2024 21:22:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.0669, 'learning_rate': 3.9374e-05, 'epoch': 0.91}
05/15/2024 21:24:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.0631, 'learning_rate': 3.8666e-05, 'epoch': 0.95}
05/15/2024 21:25:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.0649, 'learning_rate': 3.7942e-05, 'epoch': 0.98}
05/15/2024 21:27:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.0590, 'learning_rate': 3.7202e-05, 'epoch': 1.01}
05/15/2024 21:29:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.0620, 'learning_rate': 3.6449e-05, 'epoch': 1.04}
05/15/2024 21:30:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.0565, 'learning_rate': 3.5682e-05, 'epoch': 1.08}
05/15/2024 21:32:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.0575, 'learning_rate': 3.4902e-05, 'epoch': 1.11}
05/15/2024 21:34:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.0589, 'learning_rate': 3.4111e-05, 'epoch': 1.14}
05/15/2024 21:35:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.0611, 'learning_rate': 3.3309e-05, 'epoch': 1.18}
05/15/2024 21:37:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.0620, 'learning_rate': 3.2497e-05, 'epoch': 1.21}
05/15/2024 21:39:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.0616, 'learning_rate': 3.1677e-05, 'epoch': 1.24}
05/15/2024 21:40:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.0586, 'learning_rate': 3.0849e-05, 'epoch': 1.27}
05/15/2024 21:42:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.0578, 'learning_rate': 3.0014e-05, 'epoch': 1.31}
05/15/2024 21:42:32 - INFO - transformers.trainer - Saving model checkpoint to saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-200
05/15/2024 21:42:35 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/config.json
05/15/2024 21:42:35 - INFO - transformers.configuration_utils - Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.40.2",
"use_cache": true,
"vocab_size": 32000
}
05/15/2024 21:42:35 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-200/tokenizer_config.json
05/15/2024 21:42:35 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-200/special_tokens_map.json
05/15/2024 21:44:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.0547, 'learning_rate': 2.9173e-05, 'epoch': 1.34}
05/15/2024 21:45:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.0581, 'learning_rate': 2.8327e-05, 'epoch': 1.37}
05/15/2024 21:47:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.0535, 'learning_rate': 2.7477e-05, 'epoch': 1.40}
05/15/2024 21:49:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.0666, 'learning_rate': 2.6624e-05, 'epoch': 1.44}
05/15/2024 21:51:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.0530, 'learning_rate': 2.5770e-05, 'epoch': 1.47}
05/15/2024 21:52:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.0602, 'learning_rate': 2.4914e-05, 'epoch': 1.50}
05/15/2024 21:54:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.0573, 'learning_rate': 2.4059e-05, 'epoch': 1.53}
05/15/2024 21:56:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.0544, 'learning_rate': 2.3205e-05, 'epoch': 1.57}
05/15/2024 21:57:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.0552, 'learning_rate': 2.2353e-05, 'epoch': 1.60}
05/15/2024 21:59:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.0543, 'learning_rate': 2.1504e-05, 'epoch': 1.63}
05/15/2024 22:01:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.0481, 'learning_rate': 2.0659e-05, 'epoch': 1.67}
05/15/2024 22:02:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.0520, 'learning_rate': 1.9819e-05, 'epoch': 1.70}
05/15/2024 22:04:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.0516, 'learning_rate': 1.8985e-05, 'epoch': 1.73}
05/15/2024 22:06:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.0559, 'learning_rate': 1.8158e-05, 'epoch': 1.76}
05/15/2024 22:07:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.0554, 'learning_rate': 1.7340e-05, 'epoch': 1.80}
05/15/2024 22:09:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.0576, 'learning_rate': 1.6530e-05, 'epoch': 1.83}
05/15/2024 22:11:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.0546, 'learning_rate': 1.5730e-05, 'epoch': 1.86}
05/15/2024 22:12:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.0539, 'learning_rate': 1.4941e-05, 'epoch': 1.89}
05/15/2024 22:14:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.0537, 'learning_rate': 1.4164e-05, 'epoch': 1.93}
05/15/2024 22:16:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.0493, 'learning_rate': 1.3399e-05, 'epoch': 1.96}
05/15/2024 22:16:18 - INFO - transformers.trainer - Saving model checkpoint to saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-300
05/15/2024 22:16:20 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/config.json
05/15/2024 22:16:20 - INFO - transformers.configuration_utils - Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.40.2",
"use_cache": true,
"vocab_size": 32000
}
05/15/2024 22:16:20 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-300/tokenizer_config.json
05/15/2024 22:16:20 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-300/special_tokens_map.json
05/15/2024 22:18:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.0514, 'learning_rate': 1.2648e-05, 'epoch': 1.99}
05/15/2024 22:19:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.0526, 'learning_rate': 1.1912e-05, 'epoch': 2.02}
05/15/2024 22:21:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.0521, 'learning_rate': 1.1191e-05, 'epoch': 2.06}
05/15/2024 22:23:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.0506, 'learning_rate': 1.0486e-05, 'epoch': 2.09}
05/15/2024 22:24:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.0549, 'learning_rate': 9.7979e-06, 'epoch': 2.12}
05/15/2024 22:26:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.0522, 'learning_rate': 9.1278e-06, 'epoch': 2.16}
05/15/2024 22:28:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.0499, 'learning_rate': 8.4762e-06, 'epoch': 2.19}
05/15/2024 22:29:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.0556, 'learning_rate': 7.8440e-06, 'epoch': 2.22}
05/15/2024 22:31:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.0514, 'learning_rate': 7.2318e-06, 'epoch': 2.25}
05/15/2024 22:33:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.0539, 'learning_rate': 6.6405e-06, 'epoch': 2.29}
05/15/2024 22:34:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.0506, 'learning_rate': 6.0707e-06, 'epoch': 2.32}
05/15/2024 22:36:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.0494, 'learning_rate': 5.5230e-06, 'epoch': 2.35}
05/15/2024 22:38:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.0502, 'learning_rate': 4.9981e-06, 'epoch': 2.38}
05/15/2024 22:39:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.0503, 'learning_rate': 4.4967e-06, 'epoch': 2.42}
05/15/2024 22:41:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.0471, 'learning_rate': 4.0193e-06, 'epoch': 2.45}
05/15/2024 22:43:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.0510, 'learning_rate': 3.5664e-06, 'epoch': 2.48}
05/15/2024 22:45:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.0468, 'learning_rate': 3.1387e-06, 'epoch': 2.51}
05/15/2024 22:46:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.0480, 'learning_rate': 2.7365e-06, 'epoch': 2.55}
05/15/2024 22:48:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.0508, 'learning_rate': 2.3604e-06, 'epoch': 2.58}
05/15/2024 22:50:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.0495, 'learning_rate': 2.0108e-06, 'epoch': 2.61}
05/15/2024 22:50:05 - INFO - transformers.trainer - Saving model checkpoint to saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-400
05/15/2024 22:50:06 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/config.json
05/15/2024 22:50:06 - INFO - transformers.configuration_utils - Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.40.2",
"use_cache": true,
"vocab_size": 32000
}
05/15/2024 22:50:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-400/tokenizer_config.json
05/15/2024 22:50:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/checkpoint-400/special_tokens_map.json
05/15/2024 22:51:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.0511, 'learning_rate': 1.6882e-06, 'epoch': 2.64}
05/15/2024 22:53:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.0486, 'learning_rate': 1.3928e-06, 'epoch': 2.68}
05/15/2024 22:55:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.0552, 'learning_rate': 1.1251e-06, 'epoch': 2.71}
05/15/2024 22:56:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.0523, 'learning_rate': 8.8539e-07, 'epoch': 2.74}
05/15/2024 22:58:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.0517, 'learning_rate': 6.7388e-07, 'epoch': 2.78}
05/15/2024 23:00:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.0515, 'learning_rate': 4.9086e-07, 'epoch': 2.81}
05/15/2024 23:01:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.0528, 'learning_rate': 3.3653e-07, 'epoch': 2.84}
05/15/2024 23:03:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.0459, 'learning_rate': 2.1110e-07, 'epoch': 2.87}
05/15/2024 23:05:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.0556, 'learning_rate': 1.1469e-07, 'epoch': 2.91}
05/15/2024 23:06:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.0520, 'learning_rate': 4.7417e-08, 'epoch': 2.94}
05/15/2024 23:08:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.0532, 'learning_rate': 9.3687e-09, 'epoch': 2.97}
05/15/2024 23:09:56 - INFO - transformers.trainer -
Training completed. Do not forget to share your model on huggingface.co/models =)
05/15/2024 23:09:56 - INFO - transformers.trainer - Saving model checkpoint to saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30
05/15/2024 23:09:57 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/featurize/.cache/huggingface/hub/models--alpindale--Mistral-7B-v0.2-hf/snapshots/2c3e624962b1a3f3fbf52e15969565caa7bc064a/config.json
05/15/2024 23:09:57 - INFO - transformers.configuration_utils - Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.40.2",
"use_cache": true,
"vocab_size": 32000
}
05/15/2024 23:09:57 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/tokenizer_config.json
05/15/2024 23:09:57 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30/special_tokens_map.json
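The final save writes the adapter and tokenizer into the run directory. A hedged sketch of merging the adapter into the base weights for standalone inference; the merge step and output path are illustrative and not part of this log.

    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    run_dir = "saves/Mistral-7B-v0.2/lora/train_2024-05-15-20-33-30"

    base = AutoModelForCausalLM.from_pretrained("alpindale/Mistral-7B-v0.2-hf")
    model = PeftModel.from_pretrained(base, run_dir)

    # Fold the LoRA deltas into the base weights and drop the adapter wrappers.
    merged = model.merge_and_unload()
    merged.save_pretrained(f"{run_dir}/merged")  # hypothetical output path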
05/15/2024 23:09:58 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}