2024-10-15 18:28:08.741 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-28-08_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, 
lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:28:08.741 | INFO | __main__:init_components:369 - Initializing components...
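The dump above points at `deepspeed=./train_args/ds_z3_config.json`, but the file itself does not appear in the log. For orientation only, a minimal ZeRO stage-3 configuration consistent with the logged arguments (fp16=True, max_grad_norm=1.0) might look like the sketch below; the `"auto"` values are placeholders that the HF Trainer's DeepSpeed integration fills in from TrainingArguments, and the real file's contents are unknown.

```json
{
  "fp16": { "enabled": "auto" },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_clipping": "auto",
  "gradient_accumulation_steps": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "train_batch_size": "auto"
}
```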
2024-10-15 18:28:08.824 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-28-08_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, 
lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:28:08.825 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:28:08.828 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-28-08_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, 
lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:28:08.829 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:28:08.861 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-28-08_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, 
lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:28:08.862 | INFO | __main__:init_components:369 - Initializing components...
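Later in this run the log reports "Total model params: 0.00M". Under DeepSpeed ZeRO stage 3 each parameter tensor is partitioned across ranks, so a naive sum of `p.numel()` returns 0 locally; DeepSpeed records the original size on the parameter as `ds_numel`. A hedged sketch of a counting helper that tolerates this (the `ds_numel` fallback is an assumption about the DeepSpeed runtime, and `count_params` is a hypothetical name, not the script's own function):

```python
def count_params(model) -> int:
    """Count model parameters, tolerating ZeRO-3 partitioning.

    Under ZeRO-3, p.numel() is 0 on each rank because the tensor data
    is sharded; DeepSpeed stores the pre-partition element count on the
    attribute `ds_numel`, which we prefer when it is present.
    """
    total = 0
    for p in model.parameters():
        # Falls back to the ordinary numel() for non-partitioned params.
        total += getattr(p, "ds_numel", None) or p.numel()
    return total
```

With a helper like this, the first run would have reported roughly the 7615.62M that the second run's log shows, instead of 0.00M.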
2024-10-15 18:28:08.944 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:28:08.944 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:28:08.944 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:28:09.034 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:28:09.035 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:28:09.035 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:28:09.042 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:28:09.043 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:28:09.043 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:28:09.087 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:28:09.088 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:28:09.088 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:28:22.096 | INFO | __main__:load_model:331 - Total model params: 0.00M
2024-10-15 18:28:22.097 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:28:22.097 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:28:22.097 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:28:22.098 | INFO | __main__:load_model:331 - Total model params: 0.00M
2024-10-15 18:28:22.098 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:28:22.099 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:28:22.099 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:28:22.108 | INFO | __main__:load_model:331 - Total model params: 0.00M
2024-10-15 18:28:22.108 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:28:22.108 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:28:22.108 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:28:22.159 | INFO | __main__:load_model:331 - Total model params: 0.00M
2024-10-15 18:28:22.160 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:28:22.160 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:28:22.160 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:28:22.719 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:28:22.719 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:28:22.719 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:28:22.719 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:28:22.719 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:28:22.719 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:28:22.772 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:28:22.772 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:28:22.772 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:28:22.787 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:28:22.788 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:28:22.837 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:35:42.560 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 
'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-35-42_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, 
optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=2, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:35:42.561 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:35:42.694 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 
'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-35-42_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=2, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, 
use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:35:42.694 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-35-42_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, 
logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=2, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:35:42.694 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, 
eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-35-42_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=2, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, 
torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:35:42.695 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:35:42.695 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:35:42.695 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:35:42.766 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:35:42.766 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:35:42.767 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:35:42.911 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:35:42.913 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:35:42.913 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:35:42.959 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:35:42.960 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:35:42.960 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:35:42.966 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:35:42.967 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:35:42.967 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:35:56.225 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:35:56.226 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:35:56.226 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:35:56.226 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:35:56.672 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:35:56.672 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:35:56.672 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:35:56.672 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:35:56.804 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:35:56.805 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:35:56.805 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:35:56.805 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:35:56.811 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:35:56.811 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:35:56.811 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:35:56.811 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:35:57.048 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:35:57.049 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:35:57.088 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:35:57.289 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:35:57.290 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:35:57.327 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:35:57.447 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:35:57.447 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:35:57.447 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:35:57.447 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:35:57.485 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:35:57.494 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:36:42.183 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, 
learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-36-41_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, ) 2024-10-15 18:36:42.185 | INFO | __main__:init_components:369 - Initializing components... 
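The run above reports `UnifiedSFTDataset` reading one JSON record per line from `output_firefly.jsonl` and counting 43303 of them. The loader's internals are not shown in this log; a minimal stand-in sketch of what a JSONL SFT loader typically does (hypothetical helper name, blank lines skipped) is:

```python
import json
from pathlib import Path


def load_sft_records(path):
    """Read one JSON object per non-blank line, the usual JSONL convention
    for SFT datasets like output_firefly.jsonl in this log."""
    records = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line:
            records.append(json.loads(line))
    return records
```

For the file in this log, `len(load_sft_records("/root/autodl-tmp/output_firefly.jsonl"))` would be expected to match the logged count of 43303.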
2024-10-15 18:36:42.375 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:36:42.376 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:36:42.376 | INFO | __main__:load_model:257 - Train model with full
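Every launch in this log passes `deepspeed=./train_args/ds_z3_config.json` to the trainer. The file's contents are not shown here; a representative ZeRO stage-3 config consistent with `fp16=True` and with HF Trainer "auto" value passing (illustrative only, not the actual file) might look like:

```python
import json

# Illustrative sketch: the real ./train_args/ds_z3_config.json is not in the log.
ds_z3_config = {
    "train_micro_batch_size_per_gpu": "auto",   # filled in from TrainingArguments
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "zero_optimization": {
        "stage": 3,                              # ZeRO-3: shard params, grads, optimizer state
        "overlap_comm": True,
        "contiguous_gradients": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "fp16": {"enabled": "auto", "loss_scale": 0, "initial_scale_power": 16},
}

print(json.dumps(ds_z3_config, indent=2))
```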
2024-10-15 18:36:53.146 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:36:53.146 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:36:53.146 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:36:53.146 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:36:54.129 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:36:54.129 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:36:54.183 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:37:47.346 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-37-46_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0, )
2024-10-15 18:37:47.347 | INFO | __main__:init_components:369 - Initializing components...
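The 18:37 relaunch raises `per_device_train_batch_size` from 1 to 4 while keeping `gradient_accumulation_steps=1`; `local_rank` values 0 through 3 appear in the log, implying 4 processes. A quick sanity check of the resulting schedule (the world size of 4 is inferred from the log, and `dataloader_drop_last=False` motivates the ceiling):

```python
import math

num_samples = 43303     # "There are 43303 data in dataset"
per_device_batch = 4    # per_device_train_batch_size in this run
grad_accum = 1          # gradient_accumulation_steps
world_size = 4          # local_rank 0..3 appear in the log

effective_batch = per_device_batch * grad_accum * world_size
steps_per_epoch = math.ceil(num_samples / effective_batch)
total_steps = steps_per_epoch * 2  # num_train_epochs=2

print(effective_batch, steps_per_epoch, total_steps)  # 16 2707 5414
```

With roughly 5414 optimizer steps in total, the configured `warmup_steps=1000` covers about the first 18% of training before the cosine decay takes over.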
2024-10-15 18:37:47.543 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:37:47.544 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:37:47.544 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:37:48.572 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:37:48.572 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:37:48.572 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:37:48.928 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:37:48.929 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:37:48.929 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:37:58.544 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:37:58.544 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:37:58.544 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:37:58.544 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:37:59.270 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:37:59.270 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:37:59.271 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:37:59.271 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:37:59.638 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:37:59.638 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:37:59.679 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:37:59.845 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:37:59.846 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:37:59.846 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:37:59.846 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:38:00.024 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:38:00.025 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:38:00.025 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:38:00.025 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:38:00.097 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:38:00.097 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:38:00.139 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:38:00.483 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:38:00.483 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:38:00.522 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:38:00.666 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:38:00.666 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:38:00.713 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:49:53.593 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-49-53_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-15 18:49:53.595 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:49:53.646 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:49:53.654 | INFO | __main__:init_components:369 - Initializing components...
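The dump above pins the schedule: learning_rate=1e-05, lr_scheduler_type=cosine, warmup_steps=200. A minimal sketch of that shape (linear warmup to the peak rate, then cosine decay to zero, which is how transformers' get_cosine_schedule_with_warmup behaves; the default total_steps of 5414 is my own estimate from 43303 examples, 4 processes, batch size 4, 2 epochs, not a value from the log):

```python
import math

def lr_at_step(step: int, peak_lr: float = 1e-5,
               warmup_steps: int = 200, total_steps: int = 5414) -> float:
    """Linear warmup to peak_lr, then cosine decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The rate is 0 at step 0, hits 1e-5 exactly at step 200, and returns to 0 at the final step.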
2024-10-15 18:49:53.812 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:49:53.813 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:49:53.813 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:49:53.847 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:49:53.848 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:49:53.848 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:49:53.860 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:49:53.860 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:49:53.861 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:50:08.979 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-50-08_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-15 18:50:08.980 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:50:08.981 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:50:09.115 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:50:09.184 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643 2024-10-15 18:50:09.184 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/ 2024-10-15 18:50:09.184 | INFO | __main__:load_model:257 - Train model with full 2024-10-15 18:50:09.185 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643 2024-10-15 18:50:09.186 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/ 2024-10-15 18:50:09.186 | INFO | __main__:load_model:257 - Train model with full 2024-10-15 18:50:09.434 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643 2024-10-15 18:50:09.435 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/ 2024-10-15 18:50:09.435 | INFO | __main__:load_model:257 - Train model with full 2024-10-15 18:50:57.093 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, 
full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-50-55_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, ) 2024-10-15 18:50:57.094 | INFO | 
__main__:init_components:369 - Initializing components... 2024-10-15 18:50:57.241 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-50-55_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, 
logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, ) 2024-10-15 18:50:57.241 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, 
evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-50-55_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, 
torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, ) 2024-10-15 18:50:57.242 | INFO | __main__:init_components:369 - Initializing components... 2024-10-15 18:50:57.242 | INFO | __main__:init_components:369 - Initializing components... 2024-10-15 18:50:57.243 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, 
label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct15_18-50-55_autodl-container-98fc43be4e-aa4b00e3, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-15 18:50:57.244 | INFO | __main__:init_components:369 - Initializing components...
2024-10-15 18:50:57.298 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:50:57.298 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:50:57.298 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:50:57.445 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:50:57.445 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:50:57.445 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:50:57.445 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:50:57.445 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:50:57.445 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:50:57.510 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-15 18:50:57.510 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-15 18:50:57.510 | INFO | __main__:load_model:257 - Train model with full
2024-10-15 18:51:09.099 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:51:09.099 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:51:09.100 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:51:09.100 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:51:09.584 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:51:09.584 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:51:09.584 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:51:09.584 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:51:09.728 | INFO |
__main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:51:09.728 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:51:09.728 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:51:09.728 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:51:10.114 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:51:10.114 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:51:10.152 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:51:10.171 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-15 18:51:10.171 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-15 18:51:10.171 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-15 18:51:10.171 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-15 18:51:10.246 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:51:10.247 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:51:10.295 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:51:10.371 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:51:10.371 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:51:10.410 | INFO | __main__:main:426 - *** starting training ***
2024-10-15 18:51:10.806 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-15 18:51:10.806 | INFO | component.dataset:__init__:23 - There are 43303 data in dataset
2024-10-15 18:51:10.845 | INFO | __main__:main:426 - *** starting training ***
2024-10-16 19:40:18.366 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches':
False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct16_19-40-18_autodl-container-84724297e8-2f1aa90b, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, 
num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-16 19:40:18.367 | INFO | __main__:init_components:369 - Initializing components...
2024-10-16 19:40:18.606 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-16 19:40:18.606 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-16 19:40:18.607 | INFO | __main__:load_model:257 - Train model with full
2024-10-16 19:40:19.211 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=0,
log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct16_19-40-18_autodl-container-84724297e8-2f1aa90b, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-16 19:40:19.212 | INFO | __main__:init_components:369 - Initializing components...
2024-10-16 19:40:19.413 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct16_19-40-18_autodl-container-84724297e8-2f1aa90b, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, 
lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-16 19:40:19.413 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto,
fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct16_19-40-18_autodl-container-84724297e8-2f1aa90b, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, 
tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-16 19:40:19.414 | INFO | __main__:init_components:369 - Initializing components...
2024-10-16 19:40:19.414 | INFO | __main__:init_components:369 - Initializing components...
2024-10-16 19:40:19.433 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-16 19:40:19.434 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-16 19:40:19.434 | INFO | __main__:load_model:257 - Train model with full
2024-10-16 19:40:19.769 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-16 19:40:19.770 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-16 19:40:19.770 | INFO | __main__:load_model:257 - Train model with full
2024-10-16 19:40:19.775 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-16 19:40:19.775 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-16 19:40:19.775 | INFO | __main__:load_model:257 - Train model with full
2024-10-16 19:40:27.391 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-16 19:40:27.392 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-16 19:40:27.392 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-16 19:40:27.392 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-16 19:40:28.362 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-16 19:40:28.362 | INFO | component.dataset:__init__:23 - There are 40954 data in dataset
2024-10-16 19:40:28.408 | INFO | __main__:main:426 - *** starting training ***
2024-10-16 19:40:30.430 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-16 19:40:30.431 | INFO
| __main__:init_components:388 - Train model with sft task
2024-10-16 19:40:30.431 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-16 19:40:30.431 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-16 19:40:30.447 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-16 19:40:30.447 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-16 19:40:30.447 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-16 19:40:30.447 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-16 19:40:30.974 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-16 19:40:30.974 | INFO | component.dataset:__init__:23 - There are 40954 data in dataset
2024-10-16 19:40:30.993 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-16 19:40:30.994 | INFO | component.dataset:__init__:23 - There are 40954 data in dataset
2024-10-16 19:40:31.015 | INFO | __main__:main:426 - *** starting training ***
2024-10-16 19:40:31.033 | INFO | __main__:main:426 - *** starting training ***
2024-10-16 19:44:01.018 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False,
do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct16_19-43-59_autodl-container-84724297e8-2f1aa90b, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, 
split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-16 19:44:01.019 | INFO | __main__:init_components:369 - Initializing components...
2024-10-16 19:44:01.164 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None,
label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct16_19-43-59_autodl-container-84724297e8-2f1aa90b, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-16 19:44:01.165 | INFO | __main__:init_components:369 - Initializing components...
2024-10-16 19:44:01.169 | INFO | __main__:setup_everything:57 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=./train_args/ds_z3_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=1e-05, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/root/autodl-tmp/liwugpt/runs/Oct16_19-43-59_autodl-container-84724297e8-2f1aa90b, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, 
lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=2, optim=adamw_hf, optim_args=None, output_dir=/root/autodl-tmp/liwugpt/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/root/autodl-tmp/liwugpt/, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=20000, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=200, weight_decay=0, )
2024-10-16 19:44:01.170 | INFO | __main__:init_components:369 - Initializing components...
2024-10-16 19:44:01.236 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-16 19:44:01.236 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-16 19:44:01.236 | INFO | __main__:load_model:257 - Train model with full
2024-10-16 19:44:01.384 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-16 19:44:01.384 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-16 19:44:01.384 | INFO | __main__:load_model:257 - Train model with full
2024-10-16 19:44:01.389 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-16 19:44:01.389 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-16 19:44:01.389 | INFO | __main__:load_model:257 - Train model with full
2024-10-16 19:44:01.451 | INFO | __main__:load_tokenizer:217 - vocab_size of tokenizer: 151643
2024-10-16 19:44:01.451 | INFO | __main__:load_model:256 - Loading model from base model: /root/autodl-tmp/qwen/
2024-10-16 19:44:01.451 | INFO | __main__:load_model:257 - Train model with full
2024-10-16 19:44:14.061 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-16 19:44:14.061 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-16 19:44:14.061 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-16 19:44:14.062 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-16 19:44:14.720 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-16 19:44:14.720 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-16 19:44:14.721 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-16 19:44:14.721 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-16 19:44:14.855 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-16 19:44:14.856 | INFO | component.dataset:__init__:23 - There are 40954 data in dataset
2024-10-16 19:44:14.918 | INFO | __main__:main:426 - *** starting training ***
2024-10-16 19:44:14.968 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-16 19:44:14.968 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-16 19:44:14.968 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-16 19:44:14.968 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-16 19:44:15.259 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-16 19:44:15.259 | INFO | component.dataset:__init__:23 - There are 40954 data in dataset
2024-10-16 19:44:15.270 | INFO | __main__:load_model:331 - Total model params: 7615.62M
2024-10-16 19:44:15.270 | INFO | __main__:init_components:388 - Train model with sft task
2024-10-16 19:44:15.271 | INFO | __main__:load_sft_dataset:351 - Loading data with UnifiedSFTDataset
2024-10-16 19:44:15.271 | INFO | component.dataset:__init__:19 - Loading data: /root/autodl-tmp/output_firefly.jsonl
2024-10-16 19:44:15.301 | INFO | __main__:main:426 - *** starting training ***
2024-10-16 19:44:15.465 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-16 19:44:15.465 | INFO | component.dataset:__init__:23 - There are 40954 data in dataset
2024-10-16 19:44:15.519 | INFO | __main__:main:426 - *** starting training ***
2024-10-16 19:44:15.764 | INFO | component.dataset:__init__:22 - Use template "qwen" for training
2024-10-16 19:44:15.764 | INFO | component.dataset:__init__:23 - There are 40954 data in dataset
2024-10-16 19:44:15.806 | INFO | __main__:main:426 - *** starting training ***
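A quick sanity check on the numbers in the log above. Assuming the four-fold repetition of each per-rank message means 4 DeepSpeed processes (world_size=4; the logged `_n_gpu=1` is per process), the logged settings (per_device_train_batch_size=4, gradient_accumulation_steps=1, 40954 examples, 2 epochs, warmup_steps=200) give a back-of-the-envelope step count:

```python
import math

# Assumption: 4 ranks, inferred from the four repeats of each per-rank log line.
world_size = 4
per_device_train_batch_size = 4
gradient_accumulation_steps = 1
num_examples = 40954          # "There are 40954 data in dataset"
num_train_epochs = 2
warmup_steps = 200

# Sequences consumed per optimizer step across all ranks.
effective_batch = world_size * per_device_train_batch_size * gradient_accumulation_steps

# dataloader_drop_last=False, so the last partial batch still counts as a step.
steps_per_epoch = math.ceil(num_examples / effective_batch)
total_steps = steps_per_epoch * num_train_epochs

print(effective_batch)              # 16
print(total_steps)                  # 5120
print(warmup_steps / total_steps)   # 0.0390625 -> warmup covers ~3.9% of training
```

Note that with roughly 5120 optimizer steps in total, `save_steps=20000` never triggers an intermediate checkpoint under `save_strategy=steps`; only whatever the script saves at the end of training is written.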