Training in progress, step 1000
40f1bc6
[2022-12-18 08:40:52,091] [WARNING] [runner.py:179:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2022-12-18 08:40:52,100] [INFO] [runner.py:508:main] cmd = /usr/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 run_speech_recognition_seq2seq_streaming.py --deepspeed=ds_config.json --model_name_or_path=openai/whisper-small --dataset_name=mozilla-foundation/common_voice_11_0 --dataset_config_name=ro --language=romanian --train_split_name=train+validation --eval_split_name=test --model_index_name=Whisper Small Romanian CV11 --max_steps=5000 --output_dir=./ --per_device_train_batch_size=64 --per_device_eval_batch_size=32 --logging_steps=25 --learning_rate=1e-5 --warmup_steps=500 --evaluation_strategy=steps --eval_steps=1000 --save_strategy=steps --save_steps=1000 --generation_max_length=225 --length_column_name=input_length --max_duration_in_seconds=30 --text_column_name=sentence --freeze_feature_encoder=False --report_to=tensorboard --metric_for_best_model=wer --greater_is_better=False --load_best_model_at_end --gradient_checkpointing --fp16 --overwrite_output_dir --do_train --do_eval --predict_with_generate --do_normalize_eval --streaming --use_auth_token --push_to_hub
[2022-12-18 08:40:55,346] [INFO] [launch.py:135:main] 0 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.13.4-1+cuda11.7
[2022-12-18 08:40:55,346] [INFO] [launch.py:135:main] 0 NV_LIBNCCL_DEV_PACKAGE_VERSION=2.13.4-1
[2022-12-18 08:40:55,346] [INFO] [launch.py:135:main] 0 NCCL_VERSION=2.13.4-1
[2022-12-18 08:40:55,347] [INFO] [launch.py:135:main] 0 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev
[2022-12-18 08:40:55,347] [INFO] [launch.py:135:main] 0 NV_LIBNCCL_PACKAGE=libnccl2=2.13.4-1+cuda11.7
[2022-12-18 08:40:55,347] [INFO] [launch.py:135:main] 0 NV_LIBNCCL_PACKAGE_NAME=libnccl2
[2022-12-18 08:40:55,347] [INFO] [launch.py:135:main] 0 NV_LIBNCCL_PACKAGE_VERSION=2.13.4-1
[2022-12-18 08:40:55,347] [INFO] [launch.py:142:main] WORLD INFO DICT: {'localhost': [0]}
[2022-12-18 08:40:55,347] [INFO] [launch.py:148:main] nnodes=1, num_local_procs=1, node_rank=0
[2022-12-18 08:40:55,347] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2022-12-18 08:40:55,347] [INFO] [launch.py:162:main] dist_world_size=1
[2022-12-18 08:40:55,347] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0
[2022-12-18 08:41:04,141] [INFO] [comm.py:654:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
12/18/2022 08:41:04 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bit training: True
12/18/2022 08:41:04 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=ds_config.json,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=1000,
evaluation_strategy=steps,
fp16=True,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
generation_max_length=225,
generation_num_beams=None,
gradient_accumulation_steps=1,
gradient_checkpointing=True,
greater_is_better=False,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=1e-05,
length_column_name=input_length,
load_best_model_at_end=True,
local_rank=0,
log_level=passive,
log_level_replica=passive,
log_on_each_node=True,
logging_dir=./runs/Dec18_08-41-04_fe2747a042f0,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=25,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=5000,
metric_for_best_model=wer,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
optim=adamw_hf,
optim_args=None,
output_dir=./,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=32,
per_device_train_batch_size=64,
predict_with_generate=True,
prediction_loss_only=False,
push_to_hub=True,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=./,
save_on_each_node=False,
save_steps=1000,
save_strategy=steps,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
sortish_sampler=False,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=500,
weight_decay=0.0,
xpu_backend=None,
)
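Editorial note: the effective batch size implied by the arguments above can be checked with a little arithmetic (an illustrative sketch, not code from the training script; variable names are mine):

```python
# Effective batch size and total examples implied by the logged arguments.
per_device_train_batch_size = 64
gradient_accumulation_steps = 1
world_size = 1          # single GPU run (CUDA_VISIBLE_DEVICES=0)
max_steps = 5000

effective_batch = per_device_train_batch_size * gradient_accumulation_steps * world_size
examples_seen = effective_batch * max_steps

print(effective_batch)  # 64, matching DeepSpeed's train_batch_size below
print(examples_seen)    # 320000 examples over the full run
```

This is why DeepSpeed later reports `train_batch_size` and `train_micro_batch_size_per_gpu` as the same value, 64: with one GPU and no gradient accumulation they coincide.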
12/18/2022 08:41:07 - INFO - datasets.info - Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f
12/18/2022 08:41:11 - INFO - datasets.info - Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f
12/18/2022 08:41:14 - INFO - datasets.info - Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f
12/18/2022 08:41:59 - WARNING - huggingface_hub.repository - /usr/src/app/models/whisper-small-ro-cv11/./ is already a clone of https://huggingface.co/mikr/whisper-small-ro-cv11. Make sure you pull the latest changes with `repo.git_pull()`.
[2022-12-18 08:42:04,348] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.8.0+a25c31b6, git-hash=a25c31b6, git-branch=master
[2022-12-18 08:42:04,669] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
Adam Optimizer #0 is created with AVX2 arithmetic capability.
Config: alpha=0.000010, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_w=1
[2022-12-18 08:42:07,543] [INFO] [logging.py:68:log_dist] [Rank 0] Using DeepSpeed Optimizer param name adamw as basic optimizer
[2022-12-18 08:42:07,597] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
[2022-12-18 08:42:07,597] [INFO] [utils.py:52:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
[2022-12-18 08:42:07,598] [INFO] [logging.py:68:log_dist] [Rank 0] Creating fp16 ZeRO stage 2 optimizer
[2022-12-18 08:42:07,598] [INFO] [stage_1_and_2.py:141:__init__] Reduce bucket size 200000000
[2022-12-18 08:42:07,598] [INFO] [stage_1_and_2.py:142:__init__] Allgather bucket size 200000000
[2022-12-18 08:42:07,598] [INFO] [stage_1_and_2.py:143:__init__] CPU Offload: True
[2022-12-18 08:42:07,598] [INFO] [stage_1_and_2.py:144:__init__] Round robin gradient partitioning: False
Rank: 0 partition count [1] and sizes[(241734912, False)]
[2022-12-18 08:42:08,957] [INFO] [utils.py:831:see_memory_usage] Before initializing optimizer states
[2022-12-18 08:42:08,958] [INFO] [utils.py:832:see_memory_usage] MA 0.53 GB Max_MA 0.53 GB CA 0.53 GB Max_CA 1 GB
[2022-12-18 08:42:08,958] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 379.95 GB, percent = 75.4%
[2022-12-18 08:42:10,038] [INFO] [utils.py:831:see_memory_usage] After initializing optimizer states
[2022-12-18 08:42:10,039] [INFO] [utils.py:832:see_memory_usage] MA 0.53 GB Max_MA 0.53 GB CA 0.53 GB Max_CA 1 GB
[2022-12-18 08:42:10,039] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 382.79 GB, percent = 76.0%
[2022-12-18 08:42:10,039] [INFO] [stage_1_and_2.py:527:__init__] optimizer state initialized
[2022-12-18 08:42:10,147] [INFO] [utils.py:831:see_memory_usage] After initializing ZeRO optimizer
[2022-12-18 08:42:10,148] [INFO] [utils.py:832:see_memory_usage] MA 0.53 GB Max_MA 0.53 GB CA 0.53 GB Max_CA 1 GB
[2022-12-18 08:42:10,148] [INFO] [utils.py:840:see_memory_usage] CPU Virtual Memory: used = 382.83 GB, percent = 76.0%
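Editorial note: the memory figures above are consistent with ZeRO stage 2 plus CPU optimizer offload for the 241,734,912-parameter partition DeepSpeed reports: fp16 weights stay on the GPU, while the fp32 master weights and the two Adam moments (12 bytes per parameter in total) live in host memory. A rough accounting sketch (assumed byte sizes, my interpretation of the logged numbers):

```python
# Rough memory accounting for ZeRO-2 with CPU optimizer offload
# (assumptions: fp16 = 2 bytes/element, fp32 = 4 bytes/element).
n_params = 241_734_912  # partition size logged by DeepSpeed above

gpu_fp16_weights_gb = n_params * 2 / 2**30         # fp16 model weights on GPU
cpu_optimizer_gb = n_params * (4 + 4 + 4) / 2**30  # fp32 master + Adam m, v on CPU

print(f"{gpu_fp16_weights_gb:.2f} GB")  # 0.45 GB, close to the 0.53 GB MA above
print(f"{cpu_optimizer_gb:.2f} GB")     # 2.70 GB, close to the ~2.8 GB CPU jump
```

The CPU virtual memory increase logged around optimizer-state initialization (379.95 GB to 382.79 GB, about 2.8 GB) lines up with the estimated 2.70 GB of offloaded optimizer state.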
[2022-12-18 08:42:10,170] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw
[2022-12-18 08:42:10,170] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = WarmupDecayLR
[2022-12-18 08:42:10,170] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = <deepspeed.runtime.lr_schedules.WarmupDecayLR object at 0x7ff2cab02f10>
[2022-12-18 08:42:10,170] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[1e-05], mom=[[0.9, 0.999]]
[2022-12-18 08:42:10,172] [INFO] [config.py:1008:print] DeepSpeedEngine configuration:
[2022-12-18 08:42:10,172] [INFO] [config.py:1012:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2022-12-18 08:42:10,172] [INFO] [config.py:1012:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] amp_enabled .................. False
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] amp_params ................... False
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] autotuning_config ............ {
"enabled": false,
"start_step": null,
"end_step": null,
"metric_path": null,
"arg_mappings": null,
"metric": "throughput",
"model_info": null,
"results_dir": "autotuning_results",
"exps_dir": "autotuning_exps",
"overwrite": true,
"fast": true,
"start_profile_step": 3,
"end_profile_step": 5,
"tuner_type": "gridsearch",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"model_info_path": null,
"mp_size": 1,
"max_train_batch_size": null,
"min_train_batch_size": 1,
"max_train_micro_batch_size_per_gpu": 1.024000e+03,
"min_train_micro_batch_size_per_gpu": 1,
"num_tuning_micro_batch_sizes": 3
}
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] bfloat16_enabled ............. False
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] checkpoint_parallel_write_pipeline False
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] checkpoint_tag_validation_enabled True
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] checkpoint_tag_validation_fail False
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7ff2cde355b0>
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] communication_data_type ...... None
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2022-12-18 08:42:10,173] [INFO] [config.py:1012:print] curriculum_enabled_legacy .... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] curriculum_params_legacy ..... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] data_efficiency_enabled ...... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] dataloader_drop_last ......... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] disable_allgather ............ False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] dump_state ................... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] dynamic_loss_scale_args ...... {'init_scale': 65536, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] eigenvalue_enabled ........... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] eigenvalue_gas_boundary_resolution 1
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] eigenvalue_layer_name ........ bert.encoder.layer
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] eigenvalue_layer_num ......... 0
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] eigenvalue_max_iter .......... 100
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] eigenvalue_stability ......... 1e-06
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] eigenvalue_tol ............... 0.01
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] eigenvalue_verbose ........... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] elasticity_enabled ........... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] flops_profiler_config ........ {
"enabled": false,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] fp16_auto_cast ............... False
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] fp16_enabled ................. True
[2022-12-18 08:42:10,174] [INFO] [config.py:1012:print] fp16_master_weights_and_gradients False
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] global_rank .................. 0
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] grad_accum_dtype ............. None
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] gradient_accumulation_steps .. 1
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] gradient_clipping ............ 1.0
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] gradient_predivide_factor .... 1.0
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] initial_dynamic_scale ........ 65536
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] load_universal_checkpoint .... False
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] loss_scale ................... 0
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] memory_breakdown ............. False
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] monitor_config ............... <deepspeed.monitor.config.DeepSpeedMonitorConfig object at 0x7ff2cde359d0>
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] nebula_config ................ {
"enabled": false,
"persistent_storage_path": null,
"persistent_time_interval": 100,
"num_of_version_in_retention": 2,
"enable_nebula_load": true,
"load_path": null
}
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] optimizer_legacy_fusion ...... False
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] optimizer_name ............... adamw
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] optimizer_params ............. {'lr': 1e-05, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.0}
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] pld_enabled .................. False
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] pld_params ................... False
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] prescale_gradients ........... False
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] scheduler_name ............... WarmupDecayLR
[2022-12-18 08:42:10,175] [INFO] [config.py:1012:print] scheduler_params ............. {'last_batch_iteration': -1, 'total_num_steps': 5000, 'warmup_min_lr': 0, 'warmup_max_lr': 1e-05, 'warmup_num_steps': 500}
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] sparse_attention ............. None
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] sparse_gradients_enabled ..... False
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] steps_per_print .............. 10
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] train_batch_size ............. 64
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] train_micro_batch_size_per_gpu 64
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] use_node_local_storage ....... False
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] wall_clock_breakdown ......... False
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] world_size ................... 1
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] zero_allow_untested_optimizer False
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] zero_config .................. stage=2 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=200000000 allgather_partitions=True allgather_bucket_size=200000000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='cpu', nvme_path=None, buffer_count=4, pin_memory=True, pipeline=False, pipeline_read=False, pipeline_write=False, fast_init=False) sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50,000,000 param_persistence_threshold=100,000 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] zero_enabled ................. True
[2022-12-18 08:42:10,176] [INFO] [config.py:1012:print] zero_optimization_stage ...... 2
[2022-12-18 08:42:10,176] [INFO] [config.py:997:print_user_config] json = {
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 1e-05,
"betas": [0.9, 0.999],
"eps": 1e-08,
"weight_decay": 0.0
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"last_batch_iteration": -1,
"total_num_steps": 5.000000e+03,
"warmup_min_lr": 0,
"warmup_max_lr": 1e-05,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2.000000e+08,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2.000000e+08,
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": 1.0,
"train_batch_size": 64,
"train_micro_batch_size_per_gpu": 64
}
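Editorial note: the learning rates in the step logs that follow are reproduced by DeepSpeed's WarmupDecayLR with its default logarithmic warmup: during warmup, lr = warmup_max_lr * ln(step) / ln(warmup_num_steps); after warmup it decays linearly to zero at total_num_steps. Note that the scheduler advances only on non-skipped steps, so the value at step=10 with skipped=4 corresponds to scheduler step 6. A sketch of that schedule (my reading of the scheduler, checked against the logged values):

```python
import math

def warmup_decay_lr(step, warmup_max_lr=1e-05, warmup_num_steps=500,
                    total_num_steps=5000, warmup_min_lr=0.0):
    """Sketch of DeepSpeed's WarmupDecayLR: log warmup, then linear decay."""
    if step < warmup_num_steps:
        # default warmup_type="log": lr grows with ln(step)/ln(warmup_num_steps)
        gamma = math.log(max(step, 1)) / math.log(warmup_num_steps)
        return warmup_min_lr + (warmup_max_lr - warmup_min_lr) * gamma
    # after warmup: linear decay to zero at total_num_steps
    return warmup_max_lr * max(
        0.0, (total_num_steps - step) / (total_num_steps - warmup_num_steps))

# step=10 with skipped=4 means scheduler step 6; step=100 means 96:
print(warmup_decay_lr(6))   # ≈ 2.883141528559073e-06, as logged at step=10
print(warmup_decay_lr(96))  # ≈ 7.344547104469332e-06, as logged at step=100
```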
[2022-12-18 08:44:27,389] [INFO] [stage_1_and_2.py:1767:step] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 65536
[2022-12-18 08:44:43,482] [INFO] [stage_1_and_2.py:1767:step] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768.0
[2022-12-18 08:45:01,180] [INFO] [stage_1_and_2.py:1767:step] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768.0, reducing to 16384.0
[2022-12-18 08:45:17,756] [INFO] [stage_1_and_2.py:1767:step] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384.0, reducing to 8192.0
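Editorial note: the four OVERFLOW lines above follow DeepSpeed's dynamic fp16 loss scaling with init_scale 65536 and delayed_shift (hysteresis) 2, per the dynamic_loss_scale_args in the config. The first overflow only consumes the hysteresis budget, which is why the scale is "reduced" from 65536 to 65536; each subsequent overflow halves it. A minimal sketch of that logic (my reconstruction, not DeepSpeed's code):

```python
# Minimal reconstruction of dynamic loss scaling on overflow
# (init_scale=65536, delayed_shift=2, min_scale=1 from the config above).
class DynamicLossScaler:
    def __init__(self, init_scale=65536, scale_factor=2.0,
                 delayed_shift=2, min_scale=1):
        self.scale = init_scale
        self.scale_factor = scale_factor
        self.hysteresis = delayed_shift
        self.min_scale = min_scale

    def on_overflow(self):
        if self.hysteresis > 1:
            self.hysteresis -= 1  # first overflow: keep the current scale
        else:
            self.scale = max(self.scale / self.scale_factor, self.min_scale)
        return self.scale

scaler = DynamicLossScaler()
print([scaler.on_overflow() for _ in range(4)])
# → [65536, 32768.0, 16384.0, 8192.0], matching the four log lines above
```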
[2022-12-18 08:47:03,107] [INFO] [logging.py:68:log_dist] [Rank 0] step=10, skipped=4, lr=[2.883141528559073e-06], mom=[[0.9, 0.999]]
[2022-12-18 08:47:03,108] [INFO] [timer.py:196:stop] epoch=0/micro_step=10/global_step=10, RunningAvgSamplesPerSec=17.81398816266197, CurrSamplesPerSec=17.536521056985904, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 08:49:58,788] [INFO] [logging.py:68:log_dist] [Rank 0] step=20, skipped=4, lr=[4.461405575910259e-06], mom=[[0.9, 0.999]]
[2022-12-18 08:49:58,790] [INFO] [timer.py:196:stop] epoch=0/micro_step=20/global_step=20, RunningAvgSamplesPerSec=17.707913433080698, CurrSamplesPerSec=17.67266334257317, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.817, 'learning_rate': 4.898977360288234e-06, 'epoch': 0.01}
[2022-12-18 08:52:59,924] [INFO] [logging.py:68:log_dist] [Rank 0] step=30, skipped=4, lr=[5.242641991936178e-06], mom=[[0.9, 0.999]]
[2022-12-18 08:52:59,926] [INFO] [timer.py:196:stop] epoch=0/micro_step=30/global_step=30, RunningAvgSamplesPerSec=17.624193801450975, CurrSamplesPerSec=17.42146956646065, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 08:56:04,927] [INFO] [logging.py:68:log_dist] [Rank 0] step=40, skipped=4, lr=[5.766283057118146e-06], mom=[[0.9, 0.999]]
[2022-12-18 08:56:04,928] [INFO] [timer.py:196:stop] epoch=0/micro_step=40/global_step=40, RunningAvgSamplesPerSec=17.611825644201268, CurrSamplesPerSec=17.199435261553543, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 08:59:02,160] [INFO] [logging.py:68:log_dist] [Rank 0] step=50, skipped=4, lr=[6.160712527409633e-06], mom=[[0.9, 0.999]]
[2022-12-18 08:59:02,163] [INFO] [timer.py:196:stop] epoch=0/micro_step=50/global_step=50, RunningAvgSamplesPerSec=17.57224681838786, CurrSamplesPerSec=17.471532518337153, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.3452, 'learning_rate': 6.160712527409633e-06, 'epoch': 0.01}
[2022-12-18 09:01:57,588] [INFO] [logging.py:68:log_dist] [Rank 0] step=60, skipped=4, lr=[6.4772414076394205e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:01:57,590] [INFO] [timer.py:196:stop] epoch=0/micro_step=60/global_step=60, RunningAvgSamplesPerSec=17.556437081890596, CurrSamplesPerSec=17.58886056667178, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:04:51,502] [INFO] [logging.py:68:log_dist] [Rank 0] step=70, skipped=4, lr=[6.741623406776245e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:04:51,503] [INFO] [timer.py:196:stop] epoch=0/micro_step=70/global_step=70, RunningAvgSamplesPerSec=17.550034157580424, CurrSamplesPerSec=17.404004920974277, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.3043, 'learning_rate': 6.85912902234906e-06, 'epoch': 0.01}
[2022-12-18 09:07:42,313] [INFO] [logging.py:68:log_dist] [Rank 0] step=80, skipped=4, lr=[6.968634661590082e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:07:42,314] [INFO] [timer.py:196:stop] epoch=0/micro_step=80/global_step=80, RunningAvgSamplesPerSec=17.54554576845597, CurrSamplesPerSec=17.300649307375817, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:10:37,574] [INFO] [logging.py:68:log_dist] [Rank 0] step=90, skipped=4, lr=[7.1675433522258775e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:10:37,575] [INFO] [timer.py:196:stop] epoch=0/micro_step=90/global_step=90, RunningAvgSamplesPerSec=17.54364083829183, CurrSamplesPerSec=17.543068524409126, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:13:33,056] [INFO] [logging.py:68:log_dist] [Rank 0] step=100, skipped=4, lr=[7.344547104469332e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:13:33,058] [INFO] [timer.py:196:stop] epoch=0/micro_step=100/global_step=100, RunningAvgSamplesPerSec=17.544745558767882, CurrSamplesPerSec=17.311613610419286, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.2484, 'learning_rate': 7.344547104469332e-06, 'epoch': 0.02}
[2022-12-18 09:16:00,166] [INFO] [logging.py:68:log_dist] [Rank 0] step=110, skipped=4, lr=[7.503995457567235e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:16:00,167] [INFO] [timer.py:196:stop] epoch=0/micro_step=110/global_step=110, RunningAvgSamplesPerSec=17.544294890083517, CurrSamplesPerSec=17.83244775214949, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:19:12,626] [INFO] [logging.py:68:log_dist] [Rank 0] step=120, skipped=4, lr=[7.649058662787184e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:19:12,627] [INFO] [timer.py:196:stop] epoch=0/micro_step=120/global_step=120, RunningAvgSamplesPerSec=17.573285029996587, CurrSamplesPerSec=17.949146033725896, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.1897, 'learning_rate': 7.716963756434345e-06, 'epoch': 1.0}
[2022-12-18 09:22:01,411] [INFO] [logging.py:68:log_dist] [Rank 0] step=130, skipped=4, lr=[7.782118888847307e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:22:01,412] [INFO] [timer.py:196:stop] epoch=0/micro_step=130/global_step=130, RunningAvgSamplesPerSec=17.55456949498131, CurrSamplesPerSec=17.13766063397266, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:24:51,224] [INFO] [logging.py:68:log_dist] [Rank 0] step=140, skipped=4, lr=[7.905011559752758e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:24:51,225] [INFO] [timer.py:196:stop] epoch=0/micro_step=140/global_step=140, RunningAvgSamplesPerSec=17.55073845541743, CurrSamplesPerSec=17.72275019960219, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:27:38,350] [INFO] [logging.py:68:log_dist] [Rank 0] step=150, skipped=4, lr=[8.019180844200955e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:27:38,352] [INFO] [timer.py:196:stop] epoch=0/micro_step=150/global_step=150, RunningAvgSamplesPerSec=17.54153626249399, CurrSamplesPerSec=17.28127515360052, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.1751, 'learning_rate': 8.019180844200955e-06, 'epoch': 1.01}
[2022-12-18 09:30:26,932] [INFO] [logging.py:68:log_dist] [Rank 0] step=160, skipped=4, lr=[8.125783520495252e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:30:26,933] [INFO] [timer.py:196:stop] epoch=0/micro_step=160/global_step=160, RunningAvgSamplesPerSec=17.54237608474746, CurrSamplesPerSec=17.425431147803835, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:33:14,947] [INFO] [logging.py:68:log_dist] [Rank 0] step=170, skipped=4, lr=[8.225760510392298e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:33:14,948] [INFO] [timer.py:196:stop] epoch=0/micro_step=170/global_step=170, RunningAvgSamplesPerSec=17.5394079030006, CurrSamplesPerSec=17.734454226542, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.1499, 'learning_rate': 8.27351214279797e-06, 'epoch': 1.01}
[2022-12-18 09:36:02,185] [INFO] [logging.py:68:log_dist] [Rank 0] step=180, skipped=4, lr=[8.31988745412743e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:36:02,186] [INFO] [timer.py:196:stop] epoch=0/micro_step=180/global_step=180, RunningAvgSamplesPerSec=17.537096713384305, CurrSamplesPerSec=17.369822224845038, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:38:51,989] [INFO] [logging.py:68:log_dist] [Rank 0] step=190, skipped=4, lr=[8.408811289387583e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:38:51,991] [INFO] [timer.py:196:stop] epoch=0/micro_step=190/global_step=190, RunningAvgSamplesPerSec=17.532972451853638, CurrSamplesPerSec=17.369981828636313, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:41:41,517] [INFO] [logging.py:68:log_dist] [Rank 0] step=200, skipped=4, lr=[8.49307723936858e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:41:41,519] [INFO] [timer.py:196:stop] epoch=0/micro_step=200/global_step=200, RunningAvgSamplesPerSec=17.521639611761227, CurrSamplesPerSec=17.05773687963696, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.145, 'learning_rate': 8.49307723936858e-06, 'epoch': 1.02}
[2022-12-18 09:44:30,225] [INFO] [logging.py:68:log_dist] [Rank 0] step=210, skipped=4, lr=[8.573149077803088e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:44:30,227] [INFO] [timer.py:196:stop] epoch=0/micro_step=210/global_step=210, RunningAvgSamplesPerSec=17.518351230897235, CurrSamplesPerSec=17.491189976865027, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:47:20,015] [INFO] [logging.py:68:log_dist] [Rank 0] step=220, skipped=4, lr=[8.64942458567722e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:47:20,016] [INFO] [timer.py:196:stop] epoch=0/micro_step=220/global_step=220, RunningAvgSamplesPerSec=17.52044478044767, CurrSamplesPerSec=17.551188301729674, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.1039, 'learning_rate': 8.686247975778677e-06, 'epoch': 1.02}
[2022-12-18 09:48:47,089] [INFO] [logging.py:68:log_dist] [Rank 0] step=230, skipped=4, lr=[8.722247506883805e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:48:47,090] [INFO] [timer.py:196:stop] epoch=0/micro_step=230/global_step=230, RunningAvgSamplesPerSec=17.52583657733623, CurrSamplesPerSec=17.795484760153986, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:52:53,020] [INFO] [logging.py:68:log_dist] [Rank 0] step=240, skipped=4, lr=[8.79191691333329e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:52:53,022] [INFO] [timer.py:196:stop] epoch=0/micro_step=240/global_step=240, RunningAvgSamplesPerSec=17.540278044164523, CurrSamplesPerSec=17.340279812152833, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 09:55:42,935] [INFO] [logging.py:68:log_dist] [Rank 0] step=250, skipped=4, lr=[8.858694625217149e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:55:42,936] [INFO] [timer.py:196:stop] epoch=0/micro_step=250/global_step=250, RunningAvgSamplesPerSec=17.528756039040783, CurrSamplesPerSec=17.753428194487324, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0958, 'learning_rate': 8.858694625217149e-06, 'epoch': 2.0}
[2022-12-18 09:58:31,035] [INFO] [logging.py:68:log_dist] [Rank 0] step=260, skipped=4, lr=[8.922811151820517e-06], mom=[[0.9, 0.999]]
[2022-12-18 09:58:31,036] [INFO] [timer.py:196:stop] epoch=0/micro_step=260/global_step=260, RunningAvgSamplesPerSec=17.51445195778724, CurrSamplesPerSec=17.76546084196235, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:01:19,374] [INFO] [logging.py:68:log_dist] [Rank 0] step=270, skipped=4, lr=[8.984470493319244e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:01:19,375] [INFO] [timer.py:196:stop] epoch=0/micro_step=270/global_step=270, RunningAvgSamplesPerSec=17.514860492931785, CurrSamplesPerSec=17.65513873928837, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.086, 'learning_rate': 9.014436199608479e-06, 'epoch': 2.01}
[2022-12-18 10:04:09,796] [INFO] [logging.py:68:log_dist] [Rank 0] step=280, skipped=4, lr=[9.043854055968706e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:04:09,797] [INFO] [timer.py:196:stop] epoch=0/micro_step=280/global_step=280, RunningAvgSamplesPerSec=17.507487376679528, CurrSamplesPerSec=17.419261679194896, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:06:58,165] [INFO] [logging.py:68:log_dist] [Rank 0] step=290, skipped=4, lr=[9.10112387015335e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:06:58,167] [INFO] [timer.py:196:stop] epoch=0/micro_step=290/global_step=290, RunningAvgSamplesPerSec=17.506782737249797, CurrSamplesPerSec=17.55021064009011, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:09:48,667] [INFO] [logging.py:68:log_dist] [Rank 0] step=300, skipped=4, lr=[9.156425255148058e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:09:48,669] [INFO] [timer.py:196:stop] epoch=0/micro_step=300/global_step=300, RunningAvgSamplesPerSec=17.505229793083824, CurrSamplesPerSec=17.57551569533738, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0686, 'learning_rate': 9.156425255148058e-06, 'epoch': 2.01}
[2022-12-18 10:12:41,600] [INFO] [logging.py:68:log_dist] [Rank 0] step=310, skipped=4, lr=[9.209889040960644e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:12:41,601] [INFO] [timer.py:196:stop] epoch=0/micro_step=310/global_step=310, RunningAvgSamplesPerSec=17.506193858975763, CurrSamplesPerSec=17.60106809421937, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:15:34,448] [INFO] [logging.py:68:log_dist] [Rank 0] step=320, skipped=4, lr=[9.261633432763397e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:15:34,449] [INFO] [timer.py:196:stop] epoch=0/micro_step=320/global_step=320, RunningAvgSamplesPerSec=17.502672734107193, CurrSamplesPerSec=17.71995293610675, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0684, 'learning_rate': 9.28689473531776e-06, 'epoch': 2.02}
[2022-12-18 10:18:25,400] [INFO] [logging.py:68:log_dist] [Rank 0] step=330, skipped=4, lr=[9.311765584761373e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:18:25,402] [INFO] [timer.py:196:stop] epoch=0/micro_step=330/global_step=330, RunningAvgSamplesPerSec=17.502310309234087, CurrSamplesPerSec=17.587128551283918, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:21:18,016] [INFO] [logging.py:68:log_dist] [Rank 0] step=340, skipped=4, lr=[9.360382936198493e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:21:18,018] [INFO] [timer.py:196:stop] epoch=0/micro_step=340/global_step=340, RunningAvgSamplesPerSec=17.500643734444402, CurrSamplesPerSec=17.005072277531735, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:24:04,872] [INFO] [logging.py:68:log_dist] [Rank 0] step=350, skipped=4, lr=[9.407574351377137e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:24:04,874] [INFO] [timer.py:196:stop] epoch=0/micro_step=350/global_step=350, RunningAvgSamplesPerSec=17.51464932561927, CurrSamplesPerSec=17.630059089676152, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0482, 'learning_rate': 9.407574351377137e-06, 'epoch': 3.0}
[2022-12-18 10:26:57,641] [INFO] [logging.py:68:log_dist] [Rank 0] step=360, skipped=4, lr=[9.45342109721062e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:26:57,643] [INFO] [timer.py:196:stop] epoch=0/micro_step=360/global_step=360, RunningAvgSamplesPerSec=17.505842284973642, CurrSamplesPerSec=17.025209457256768, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:29:51,036] [INFO] [logging.py:68:log_dist] [Rank 0] step=370, skipped=4, lr=[9.497997685324628e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:29:51,038] [INFO] [timer.py:196:stop] epoch=0/micro_step=370/global_step=370, RunningAvgSamplesPerSec=17.505074655498845, CurrSamplesPerSec=17.17602944826478, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0504, 'learning_rate': 9.519831289296397e-06, 'epoch': 3.01}
[2022-12-18 10:32:41,738] [INFO] [logging.py:68:log_dist] [Rank 0] step=380, skipped=4, lr=[9.541372600623587e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:32:41,739] [INFO] [timer.py:196:stop] epoch=0/micro_step=380/global_step=380, RunningAvgSamplesPerSec=17.50077202769039, CurrSamplesPerSec=17.66240728131972, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:35:32,596] [INFO] [logging.py:68:log_dist] [Rank 0] step=390, skipped=4, lr=[9.583608934209288e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:35:32,597] [INFO] [timer.py:196:stop] epoch=0/micro_step=390/global_step=390, RunningAvgSamplesPerSec=17.502615176175013, CurrSamplesPerSec=17.671288199732384, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:38:24,713] [INFO] [logging.py:68:log_dist] [Rank 0] step=400, skipped=4, lr=[9.624764935335318e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:38:24,714] [INFO] [timer.py:196:stop] epoch=0/micro_step=400/global_step=400, RunningAvgSamplesPerSec=17.501141402908466, CurrSamplesPerSec=17.413306679702867, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0446, 'learning_rate': 9.624764935335318e-06, 'epoch': 3.01}
[2022-12-18 10:41:14,089] [INFO] [logging.py:68:log_dist] [Rank 0] step=410, skipped=4, lr=[9.664894494516345e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:41:14,090] [INFO] [timer.py:196:stop] epoch=0/micro_step=410/global_step=410, RunningAvgSamplesPerSec=17.50299955137452, CurrSamplesPerSec=16.856797996893206, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:44:03,121] [INFO] [logging.py:68:log_dist] [Rank 0] step=420, skipped=4, lr=[9.704047567846437e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:44:03,123] [INFO] [timer.py:196:stop] epoch=0/micro_step=420/global_step=420, RunningAvgSamplesPerSec=17.49955598397826, CurrSamplesPerSec=17.39210969006482, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0396, 'learning_rate': 9.723272550712454e-06, 'epoch': 3.02}
[2022-12-18 10:46:53,231] [INFO] [logging.py:68:log_dist] [Rank 0] step=430, skipped=4, lr=[9.742270550908135e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:46:53,233] [INFO] [timer.py:196:stop] epoch=0/micro_step=430/global_step=430, RunningAvgSamplesPerSec=17.502169263783284, CurrSamplesPerSec=17.82420181661996, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:49:44,733] [INFO] [logging.py:68:log_dist] [Rank 0] step=440, skipped=4, lr=[9.779606609292176e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:49:44,734] [INFO] [timer.py:196:stop] epoch=0/micro_step=440/global_step=440, RunningAvgSamplesPerSec=17.502468898132452, CurrSamplesPerSec=17.01432443514231, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:52:34,347] [INFO] [logging.py:68:log_dist] [Rank 0] step=450, skipped=4, lr=[9.816095971633122e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:52:34,348] [INFO] [timer.py:196:stop] epoch=0/micro_step=450/global_step=450, RunningAvgSamplesPerSec=17.50661530301726, CurrSamplesPerSec=17.68532190940408, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0309, 'learning_rate': 9.816095971633122e-06, 'epoch': 3.02}
[2022-12-18 10:54:31,089] [INFO] [logging.py:68:log_dist] [Rank 0] step=460, skipped=4, lr=[9.851776190149156e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:54:31,091] [INFO] [timer.py:196:stop] epoch=0/micro_step=460/global_step=460, RunningAvgSamplesPerSec=17.505343508354425, CurrSamplesPerSec=17.622945563320123, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 10:58:09,980] [INFO] [logging.py:68:log_dist] [Rank 0] step=470, skipped=4, lr=[9.886682372916766e-06], mom=[[0.9, 0.999]]
[2022-12-18 10:58:09,982] [INFO] [timer.py:196:stop] epoch=0/micro_step=470/global_step=470, RunningAvgSamplesPerSec=17.51432072204074, CurrSamplesPerSec=17.560018580808034, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0269, 'learning_rate': 9.90385555539545e-06, 'epoch': 4.0}
[2022-12-18 11:01:02,577] [INFO] [logging.py:68:log_dist] [Rank 0] step=480, skipped=4, lr=[9.92084739148192e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:01:02,579] [INFO] [timer.py:196:stop] epoch=0/micro_step=480/global_step=480, RunningAvgSamplesPerSec=17.51471041450659, CurrSamplesPerSec=17.770770916559375, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:03:53,720] [INFO] [logging.py:68:log_dist] [Rank 0] step=490, skipped=4, lr=[9.954302066885107e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:03:53,722] [INFO] [timer.py:196:stop] epoch=0/micro_step=490/global_step=490, RunningAvgSamplesPerSec=17.514707806076828, CurrSamplesPerSec=17.474965145825077, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:06:40,711] [INFO] [logging.py:68:log_dist] [Rank 0] step=500, skipped=4, lr=[9.987075336738768e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:06:40,713] [INFO] [timer.py:196:stop] epoch=0/micro_step=500/global_step=500, RunningAvgSamplesPerSec=17.51673358493395, CurrSamplesPerSec=17.728490220205128, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.03, 'learning_rate': 9.987075336738768e-06, 'epoch': 4.01}
[2022-12-18 11:09:32,253] [INFO] [logging.py:68:log_dist] [Rank 0] step=510, skipped=4, lr=[9.98888888888889e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:09:32,254] [INFO] [timer.py:196:stop] epoch=0/micro_step=510/global_step=510, RunningAvgSamplesPerSec=17.515181954535795, CurrSamplesPerSec=17.44658815742413, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:12:24,243] [INFO] [logging.py:68:log_dist] [Rank 0] step=520, skipped=4, lr=[9.966666666666667e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:12:24,244] [INFO] [timer.py:196:stop] epoch=0/micro_step=520/global_step=520, RunningAvgSamplesPerSec=17.516449396915945, CurrSamplesPerSec=17.71918211824117, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.025, 'learning_rate': 9.955555555555556e-06, 'epoch': 4.01}
[2022-12-18 11:15:10,174] [INFO] [logging.py:68:log_dist] [Rank 0] step=530, skipped=4, lr=[9.944444444444445e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:15:10,175] [INFO] [timer.py:196:stop] epoch=0/micro_step=530/global_step=530, RunningAvgSamplesPerSec=17.515811116445363, CurrSamplesPerSec=17.44589309419759, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:17:59,140] [INFO] [logging.py:68:log_dist] [Rank 0] step=540, skipped=4, lr=[9.922222222222222e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:17:59,141] [INFO] [timer.py:196:stop] epoch=0/micro_step=540/global_step=540, RunningAvgSamplesPerSec=17.51307013237252, CurrSamplesPerSec=17.709312456169634, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:20:51,468] [INFO] [logging.py:68:log_dist] [Rank 0] step=550, skipped=4, lr=[9.9e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:20:51,469] [INFO] [timer.py:196:stop] epoch=0/micro_step=550/global_step=550, RunningAvgSamplesPerSec=17.51360962534701, CurrSamplesPerSec=17.514523814355172, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0214, 'learning_rate': 9.9e-06, 'epoch': 4.02}
[2022-12-18 11:23:41,540] [INFO] [logging.py:68:log_dist] [Rank 0] step=560, skipped=4, lr=[9.877777777777778e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:23:41,542] [INFO] [timer.py:196:stop] epoch=0/micro_step=560/global_step=560, RunningAvgSamplesPerSec=17.515173467375842, CurrSamplesPerSec=17.700679243083353, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:26:28,412] [INFO] [logging.py:68:log_dist] [Rank 0] step=570, skipped=4, lr=[9.855555555555555e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:26:28,413] [INFO] [timer.py:196:stop] epoch=0/micro_step=570/global_step=570, RunningAvgSamplesPerSec=17.515792660700566, CurrSamplesPerSec=17.636638364608388, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0176, 'learning_rate': 9.844444444444446e-06, 'epoch': 4.02}
[2022-12-18 11:27:30,542] [INFO] [logging.py:68:log_dist] [Rank 0] step=580, skipped=4, lr=[9.833333333333333e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:27:30,544] [INFO] [timer.py:196:stop] epoch=0/micro_step=580/global_step=580, RunningAvgSamplesPerSec=17.525312485669467, CurrSamplesPerSec=22.15033486404057, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:31:57,731] [INFO] [logging.py:68:log_dist] [Rank 0] step=590, skipped=4, lr=[9.811111111111112e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:31:57,733] [INFO] [timer.py:196:stop] epoch=0/micro_step=590/global_step=590, RunningAvgSamplesPerSec=17.52578604906633, CurrSamplesPerSec=17.542104377157184, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:34:45,506] [INFO] [logging.py:68:log_dist] [Rank 0] step=600, skipped=4, lr=[9.78888888888889e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:34:45,507] [INFO] [timer.py:196:stop] epoch=0/micro_step=600/global_step=600, RunningAvgSamplesPerSec=17.525721346081685, CurrSamplesPerSec=17.728407089792164, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0186, 'learning_rate': 9.78888888888889e-06, 'epoch': 5.0}
[2022-12-18 11:37:32,907] [INFO] [logging.py:68:log_dist] [Rank 0] step=610, skipped=4, lr=[9.766666666666667e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:37:32,909] [INFO] [timer.py:196:stop] epoch=0/micro_step=610/global_step=610, RunningAvgSamplesPerSec=17.52620539215051, CurrSamplesPerSec=17.665249180375554, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:40:23,836] [INFO] [logging.py:68:log_dist] [Rank 0] step=620, skipped=4, lr=[9.744444444444445e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:40:23,837] [INFO] [timer.py:196:stop] epoch=0/micro_step=620/global_step=620, RunningAvgSamplesPerSec=17.524553828947862, CurrSamplesPerSec=17.647862254407272, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0179, 'learning_rate': 9.733333333333334e-06, 'epoch': 5.01}
[2022-12-18 11:43:10,955] [INFO] [logging.py:68:log_dist] [Rank 0] step=630, skipped=4, lr=[9.722222222222223e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:43:10,957] [INFO] [timer.py:196:stop] epoch=0/micro_step=630/global_step=630, RunningAvgSamplesPerSec=17.524283076579064, CurrSamplesPerSec=17.71436339529198, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:45:59,599] [INFO] [logging.py:68:log_dist] [Rank 0] step=640, skipped=4, lr=[9.7e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:45:59,600] [INFO] [timer.py:196:stop] epoch=0/micro_step=640/global_step=640, RunningAvgSamplesPerSec=17.5239493520232, CurrSamplesPerSec=17.29191637538188, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:48:51,158] [INFO] [logging.py:68:log_dist] [Rank 0] step=650, skipped=4, lr=[9.677777777777778e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:48:51,159] [INFO] [timer.py:196:stop] epoch=0/micro_step=650/global_step=650, RunningAvgSamplesPerSec=17.520107118857844, CurrSamplesPerSec=17.428385128642994, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0141, 'learning_rate': 9.677777777777778e-06, 'epoch': 5.01}
[2022-12-18 11:51:44,941] [INFO] [logging.py:68:log_dist] [Rank 0] step=660, skipped=4, lr=[9.655555555555556e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:51:44,942] [INFO] [timer.py:196:stop] epoch=0/micro_step=660/global_step=660, RunningAvgSamplesPerSec=17.519444727080156, CurrSamplesPerSec=16.994613216593315, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:54:37,926] [INFO] [logging.py:68:log_dist] [Rank 0] step=670, skipped=4, lr=[9.633333333333335e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:54:37,928] [INFO] [timer.py:196:stop] epoch=0/micro_step=670/global_step=670, RunningAvgSamplesPerSec=17.519984376324164, CurrSamplesPerSec=17.66502830473203, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0131, 'learning_rate': 9.622222222222222e-06, 'epoch': 5.02}
[2022-12-18 11:57:26,593] [INFO] [logging.py:68:log_dist] [Rank 0] step=680, skipped=4, lr=[9.611111111111112e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:57:26,595] [INFO] [timer.py:196:stop] epoch=0/micro_step=680/global_step=680, RunningAvgSamplesPerSec=17.51917469369641, CurrSamplesPerSec=17.81082610549716, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 11:59:46,615] [INFO] [logging.py:68:log_dist] [Rank 0] step=690, skipped=4, lr=[9.58888888888889e-06], mom=[[0.9, 0.999]]
[2022-12-18 11:59:46,616] [INFO] [timer.py:196:stop] epoch=0/micro_step=690/global_step=690, RunningAvgSamplesPerSec=17.520953040279323, CurrSamplesPerSec=17.893703708073588, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:03:02,532] [INFO] [logging.py:68:log_dist] [Rank 0] step=700, skipped=4, lr=[9.566666666666668e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:03:02,534] [INFO] [timer.py:196:stop] epoch=0/micro_step=700/global_step=700, RunningAvgSamplesPerSec=17.526881403928282, CurrSamplesPerSec=17.811006916180688, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0111, 'learning_rate': 9.566666666666668e-06, 'epoch': 6.0}
[2022-12-18 12:05:50,396] [INFO] [logging.py:68:log_dist] [Rank 0] step=710, skipped=4, lr=[9.544444444444445e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:05:50,397] [INFO] [timer.py:196:stop] epoch=0/micro_step=710/global_step=710, RunningAvgSamplesPerSec=17.525663669476064, CurrSamplesPerSec=17.627963556175473, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:08:40,501] [INFO] [logging.py:68:log_dist] [Rank 0] step=720, skipped=4, lr=[9.522222222222223e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:08:40,503] [INFO] [timer.py:196:stop] epoch=0/micro_step=720/global_step=720, RunningAvgSamplesPerSec=17.522315207058163, CurrSamplesPerSec=17.08808021934021, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0115, 'learning_rate': 9.511111111111112e-06, 'epoch': 6.01}
[2022-12-18 12:11:29,479] [INFO] [logging.py:68:log_dist] [Rank 0] step=730, skipped=4, lr=[9.5e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:11:29,481] [INFO] [timer.py:196:stop] epoch=0/micro_step=730/global_step=730, RunningAvgSamplesPerSec=17.522397647878112, CurrSamplesPerSec=17.69114963575808, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:14:18,585] [INFO] [logging.py:68:log_dist] [Rank 0] step=740, skipped=4, lr=[9.47777777777778e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:14:18,587] [INFO] [timer.py:196:stop] epoch=0/micro_step=740/global_step=740, RunningAvgSamplesPerSec=17.522899649689435, CurrSamplesPerSec=17.835842357826117, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:17:07,842] [INFO] [logging.py:68:log_dist] [Rank 0] step=750, skipped=4, lr=[9.455555555555557e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:17:07,843] [INFO] [timer.py:196:stop] epoch=0/micro_step=750/global_step=750, RunningAvgSamplesPerSec=17.52216964269384, CurrSamplesPerSec=17.715532463583024, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0097, 'learning_rate': 9.455555555555557e-06, 'epoch': 6.01}
[2022-12-18 12:19:56,259] [INFO] [logging.py:68:log_dist] [Rank 0] step=760, skipped=4, lr=[9.433333333333335e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:19:56,260] [INFO] [timer.py:196:stop] epoch=0/micro_step=760/global_step=760, RunningAvgSamplesPerSec=17.52350085830269, CurrSamplesPerSec=17.825603229647047, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:22:43,814] [INFO] [logging.py:68:log_dist] [Rank 0] step=770, skipped=4, lr=[9.411111111111113e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:22:43,816] [INFO] [timer.py:196:stop] epoch=0/micro_step=770/global_step=770, RunningAvgSamplesPerSec=17.523872671951036, CurrSamplesPerSec=17.464620175491046, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0091, 'learning_rate': 9.4e-06, 'epoch': 6.02}
[2022-12-18 12:25:34,399] [INFO] [logging.py:68:log_dist] [Rank 0] step=780, skipped=4, lr=[9.38888888888889e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:25:34,401] [INFO] [timer.py:196:stop] epoch=0/micro_step=780/global_step=780, RunningAvgSamplesPerSec=17.523888473835193, CurrSamplesPerSec=17.62105645548852, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:28:22,069] [INFO] [logging.py:68:log_dist] [Rank 0] step=790, skipped=4, lr=[9.366666666666668e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:28:22,070] [INFO] [timer.py:196:stop] epoch=0/micro_step=790/global_step=790, RunningAvgSamplesPerSec=17.52396646475176, CurrSamplesPerSec=17.7942001353745, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:31:10,611] [INFO] [logging.py:68:log_dist] [Rank 0] step=800, skipped=4, lr=[9.344444444444446e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:31:10,613] [INFO] [timer.py:196:stop] epoch=0/micro_step=800/global_step=800, RunningAvgSamplesPerSec=17.52362785335887, CurrSamplesPerSec=17.645739287742696, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0099, 'learning_rate': 9.344444444444446e-06, 'epoch': 6.02}
[2022-12-18 12:32:39,816] [INFO] [logging.py:68:log_dist] [Rank 0] step=810, skipped=4, lr=[9.322222222222223e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:32:39,818] [INFO] [timer.py:196:stop] epoch=0/micro_step=810/global_step=810, RunningAvgSamplesPerSec=17.525217387924933, CurrSamplesPerSec=17.6307387978646, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:36:44,201] [INFO] [logging.py:68:log_dist] [Rank 0] step=820, skipped=4, lr=[9.3e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:36:44,203] [INFO] [timer.py:196:stop] epoch=0/micro_step=820/global_step=820, RunningAvgSamplesPerSec=17.527549173776915, CurrSamplesPerSec=17.24311195262599, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0073, 'learning_rate': 9.28888888888889e-06, 'epoch': 7.0}
[2022-12-18 12:39:35,915] [INFO] [logging.py:68:log_dist] [Rank 0] step=830, skipped=4, lr=[9.277777777777778e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:39:35,917] [INFO] [timer.py:196:stop] epoch=0/micro_step=830/global_step=830, RunningAvgSamplesPerSec=17.527139118776905, CurrSamplesPerSec=17.824774664911605, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:42:25,214] [INFO] [logging.py:68:log_dist] [Rank 0] step=840, skipped=4, lr=[9.255555555555556e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:42:25,215] [INFO] [timer.py:196:stop] epoch=0/micro_step=840/global_step=840, RunningAvgSamplesPerSec=17.529876938383726, CurrSamplesPerSec=17.677469879502205, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:45:16,974] [INFO] [logging.py:68:log_dist] [Rank 0] step=850, skipped=4, lr=[9.233333333333334e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:45:16,976] [INFO] [timer.py:196:stop] epoch=0/micro_step=850/global_step=850, RunningAvgSamplesPerSec=17.533049828024673, CurrSamplesPerSec=17.934881195101724, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0062, 'learning_rate': 9.233333333333334e-06, 'epoch': 7.01}
[2022-12-18 12:48:07,233] [INFO] [logging.py:68:log_dist] [Rank 0] step=860, skipped=4, lr=[9.211111111111111e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:48:07,234] [INFO] [timer.py:196:stop] epoch=0/micro_step=860/global_step=860, RunningAvgSamplesPerSec=17.536686576379214, CurrSamplesPerSec=17.72774792834313, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:50:56,254] [INFO] [logging.py:68:log_dist] [Rank 0] step=870, skipped=4, lr=[9.188888888888889e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:50:56,255] [INFO] [timer.py:196:stop] epoch=0/micro_step=870/global_step=870, RunningAvgSamplesPerSec=17.53939897716571, CurrSamplesPerSec=17.863981354170473, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.006, 'learning_rate': 9.17777777777778e-06, 'epoch': 7.01}
[2022-12-18 12:53:49,512] [INFO] [logging.py:68:log_dist] [Rank 0] step=880, skipped=4, lr=[9.166666666666666e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:53:49,514] [INFO] [timer.py:196:stop] epoch=0/micro_step=880/global_step=880, RunningAvgSamplesPerSec=17.54251129449292, CurrSamplesPerSec=17.672755259073455, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:56:39,481] [INFO] [logging.py:68:log_dist] [Rank 0] step=890, skipped=4, lr=[9.144444444444444e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:56:39,482] [INFO] [timer.py:196:stop] epoch=0/micro_step=890/global_step=890, RunningAvgSamplesPerSec=17.545431248601155, CurrSamplesPerSec=17.820363263008623, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 12:59:28,307] [INFO] [logging.py:68:log_dist] [Rank 0] step=900, skipped=4, lr=[9.122222222222223e-06], mom=[[0.9, 0.999]]
[2022-12-18 12:59:28,309] [INFO] [timer.py:196:stop] epoch=0/micro_step=900/global_step=900, RunningAvgSamplesPerSec=17.5474477902634, CurrSamplesPerSec=17.84197843798558, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0046, 'learning_rate': 9.122222222222223e-06, 'epoch': 7.02}
[2022-12-18 13:02:23,366] [INFO] [logging.py:68:log_dist] [Rank 0] step=910, skipped=4, lr=[9.100000000000001e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:02:23,368] [INFO] [timer.py:196:stop] epoch=0/micro_step=910/global_step=910, RunningAvgSamplesPerSec=17.54704328541452, CurrSamplesPerSec=17.599693685140846, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 13:05:10,345] [INFO] [logging.py:68:log_dist] [Rank 0] step=920, skipped=4, lr=[9.077777777777779e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:05:10,346] [INFO] [timer.py:196:stop] epoch=0/micro_step=920/global_step=920, RunningAvgSamplesPerSec=17.548757345088557, CurrSamplesPerSec=17.80023677023281, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0045, 'learning_rate': 9.066666666666667e-06, 'epoch': 7.02}
[2022-12-18 13:07:52,456] [INFO] [logging.py:68:log_dist] [Rank 0] step=930, skipped=4, lr=[9.055555555555556e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:07:52,457] [INFO] [timer.py:196:stop] epoch=0/micro_step=930/global_step=930, RunningAvgSamplesPerSec=17.555835577865636, CurrSamplesPerSec=16.947261645741815, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 13:10:39,445] [INFO] [logging.py:68:log_dist] [Rank 0] step=940, skipped=4, lr=[9.033333333333334e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:10:39,447] [INFO] [timer.py:196:stop] epoch=0/micro_step=940/global_step=940, RunningAvgSamplesPerSec=17.558626670023333, CurrSamplesPerSec=17.837192265183017, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 13:13:28,825] [INFO] [logging.py:68:log_dist] [Rank 0] step=950, skipped=4, lr=[9.011111111111111e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:13:28,827] [INFO] [timer.py:196:stop] epoch=0/micro_step=950/global_step=950, RunningAvgSamplesPerSec=17.561116636037408, CurrSamplesPerSec=17.875913197667035, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0053, 'learning_rate': 9.011111111111111e-06, 'epoch': 8.0}
[2022-12-18 13:16:15,728] [INFO] [logging.py:68:log_dist] [Rank 0] step=960, skipped=4, lr=[8.988888888888889e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:16:15,729] [INFO] [timer.py:196:stop] epoch=0/micro_step=960/global_step=960, RunningAvgSamplesPerSec=17.56308806750036, CurrSamplesPerSec=17.781870571307575, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 13:19:02,146] [INFO] [logging.py:68:log_dist] [Rank 0] step=970, skipped=4, lr=[8.966666666666667e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:19:02,147] [INFO] [timer.py:196:stop] epoch=0/micro_step=970/global_step=970, RunningAvgSamplesPerSec=17.564578542499007, CurrSamplesPerSec=17.74904968508855, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0039, 'learning_rate': 8.955555555555555e-06, 'epoch': 8.01}
[2022-12-18 13:21:50,887] [INFO] [logging.py:68:log_dist] [Rank 0] step=980, skipped=4, lr=[8.944444444444446e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:21:50,889] [INFO] [timer.py:196:stop] epoch=0/micro_step=980/global_step=980, RunningAvgSamplesPerSec=17.566081411767524, CurrSamplesPerSec=18.11684822516133, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 13:24:37,286] [INFO] [logging.py:68:log_dist] [Rank 0] step=990, skipped=4, lr=[8.922222222222224e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:24:37,287] [INFO] [timer.py:196:stop] epoch=0/micro_step=990/global_step=990, RunningAvgSamplesPerSec=17.56737818663697, CurrSamplesPerSec=17.898017842980305, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
[2022-12-18 13:27:24,217] [INFO] [logging.py:68:log_dist] [Rank 0] step=1000, skipped=4, lr=[8.900000000000001e-06], mom=[[0.9, 0.999]]
[2022-12-18 13:27:24,219] [INFO] [timer.py:196:stop] epoch=0/micro_step=1000/global_step=1000, RunningAvgSamplesPerSec=17.569257091709765, CurrSamplesPerSec=17.374920879792757, MemAllocated=0.53GB, MaxMemAllocated=17.47GB
{'loss': 0.0046, 'learning_rate': 8.900000000000001e-06, 'epoch': 8.01}
{'eval_loss': 0.28076171875, 'eval_wer': 17.571297148114077, 'eval_runtime': 1237.4696, 'eval_samples_per_second': 3.118, 'eval_steps_per_second': 0.098, 'epoch': 8.01}
[2022-12-18 13:48:02,674] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step1000 is begin to save!
[2022-12-18 13:48:02,684] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: ./checkpoint-1000/global_step1000/mp_rank_00_model_states.pt
[2022-12-18 13:48:02,684] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving ./checkpoint-1000/global_step1000/mp_rank_00_model_states.pt...
[2022-12-18 13:48:03,680] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved ./checkpoint-1000/global_step1000/mp_rank_00_model_states.pt.
[2022-12-18 13:48:03,682] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving ./checkpoint-1000/global_step1000/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2022-12-18 13:48:08,206] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved ./checkpoint-1000/global_step1000/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2022-12-18 13:48:08,208] [INFO] [engine.py:3394:_save_zero_checkpoint] zero checkpoint saved ./checkpoint-1000/global_step1000/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2022-12-18 13:48:08,208] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now!