diff --git "a/attnserver.run_attnserver.slurm.sh.343210.out.log" "b/attnserver.run_attnserver.slurm.sh.343210.out.log" --- "a/attnserver.run_attnserver.slurm.sh.343210.out.log" +++ "b/attnserver.run_attnserver.slurm.sh.343210.out.log" @@ -534,3 +534,12828 @@ make: Nothing to be done for 'default'. make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' >>> done with dataset index builder. Compilation time: 0.047 seconds > compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.820 seconds +time to initialize megatron (seconds): 7.578 +[after megatron is initialized] datetime: 2025-06-21 21:19:53 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> embedding>>> decoder + +>>> output_layer +>>> decoder +>>> output_layer +>>> embedding +>>> decoder>>> embedding +>>> output_layer + +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 74511872 + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 74511872 + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 74511872 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 74511872 +>>> embedding > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 74511872 + +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 74511872 + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 74511872 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 74511872 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (74511872 elements, 74511872 padded size): + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + 
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.25, 2.27) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:19:53 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
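The loss-scale column in the iteration lines further down follows directly from the fp16 fields of the OptimizerConfig above (initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000): every one of the 10 iterations overflows, is skipped, and the scale is halved. A minimal sketch of that backoff behaviour, deliberately simplified (hysteresis omitted) and not Megatron's actual grad scaler:

class DynamicLossScaleSketch:
    def __init__(self, initial_scale=2.0**32, min_scale=1.0, window=1000):
        self.scale = initial_scale        # "loss scale" column at iteration 1
        self.min_scale = min_scale
        self.window = window              # clean iterations required before growing
        self.good_steps = 0

    def update(self, found_inf: bool) -> float:
        if found_inf:                     # overflow: iteration is skipped, scale backs off
            self.scale = max(self.scale / 2.0, self.min_scale)
            self.good_steps = 0
        else:
            self.good_steps += 1
            if self.good_steps == self.window:
                self.scale *= 2.0         # grow again after a clean window
                self.good_steps = 0
        return self.scale

scaler = DynamicLossScaleSketch()
print([scaler.update(found_inf=True) for _ in range(9)])
# 2147483648.0, 1073741824.0, ..., 8388608.0 -- matching iterations 2 through 10 below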
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=1024, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.007773 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66592 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.003309 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66562 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.003404 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66686 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:19:53 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (215.42, 215.86) + train/valid/test-data-iterators-setup ..........: (30.68, 123.37) +training ... +Setting rerun_state_machine.current_iteration to 0... 
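The per-rank "batch tensor" / "batch tensor after cp" lines that fill the rest of this log are shape dumps printed for each micro-batch before and after context-parallel (cp) slicing; the shapes are identical in both dumps here, consistent with a context-parallel size of 1. The helper that prints them is not part of this log, so the following is only a minimal sketch of an equivalent dump, with the field names and sizes taken from the lines below (note the full [8, 1, 8192, 8192] boolean attention mask alone is about 512 MiB):

import torch

def dump_batch_shapes(batch, tag="batch tensor"):
    # One line per field, mirroring the "batch tensor: <name> <shape>" lines below.
    for name, tensor in batch.items():
        print(f"{tag}: {name:<15} {tensor.shape}")

batch = {
    "tokens": torch.zeros(8, 8192, dtype=torch.long),
    "labels": torch.zeros(8, 8192, dtype=torch.long),
    "loss_mask": torch.ones(8, 8192),
    "attention_mask": torch.ones(8, 1, 8192, 8192, dtype=torch.bool),
    "position_ids": torch.arange(8192).repeat(8, 1),
}
dump_batch_shapes(batch)                           # "batch tensor: ..."
dump_batch_shapes(batch, "batch tensor after cp")  # unchanged when cp size is 1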
+[before the start of training step] datetime: 2025-06-21 21:19:53 +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after 
cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 0 +Done exporting trace 0 + [2025-06-21 21:20:02] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 8421.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +Number of parameters in transformer block in billions: 0.35 +Number of parameters in embedding layers in billions: 0.21 +Total number of parameters in billions: 0.56 +Number of parameters in most loaded shard in billions: 0.0703 +Theoretical memory footprints: weight and optimizer=1206.09 MB +[Rank 6] (after 1 iterations) memory (MB) | allocated: 1504.47607421875 | max allocated: 11299.61181640625 | reserved: 13000.0 | max reserved: 13000.0 +[Rank 1] (after 1 iterations) memory (MB) | allocated: 1504.47607421875 | max allocated: 11299.61181640625 | reserved: 11976.0 | max reserved: 11976.0 +[Rank 5] (after 1 iterations) memory (MB) | allocated: 1504.47607421875 | max allocated: 11299.61181640625 | reserved: 11464.0 | max reserved: 11464.0 +[Rank 4] (after 1 iterations) memory (MB) | allocated: 1504.47607421875 | max allocated: 11299.61181640625 | reserved: 11464.0 | max reserved: 11464.0 +[Rank 7] (after 1 iterations) memory (MB) | allocated: 1504.47607421875 | max allocated: 11299.61181640625 | reserved: 11976.0 | max reserved: 11976.0 +[Rank 0] (after 1 iterations) memory (MB) | allocated: 1504.47607421875 | max allocated: 11299.61181640625 | reserved: 11976.0 | max reserved: 11976.0 +[Rank 2] (after 1 iterations) memory (MB) | allocated: 1504.47607421875 | max allocated: 11299.61181640625 | reserved: 11976.0 | max reserved: 11976.0 +[Rank 3] (after 1 iterations) memory (MB) | allocated: 1504.47607421875 | max allocated: 11299.61181640625 | reserved: 11464.0 | max reserved: 11464.0 +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids batch tensor:torch.Size([8, 8192]) + tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask batch tensor after cp:torch.Size([8, 
8192]) +tokens batch tensor:torch.Size([8, 8192]) +batch tensor after cp:attention_mask labels torch.Size([8, 1, 8192, 8192])torch.Size([8, 8192]) + +batch tensor after cp: batch tensor:loss_mask position_idstorch.Size([8, 8192]) +torch.Size([8, 8192])batch tensor after cp: + attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor:batch tensor after cp: attention_masktokens torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor:batch tensor: tokenstokens torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor: labelsbatch tensor: torch.Size([8, 8192])labels + batch tensor:torch.Size([8, 8192]) +loss_maskbatch tensor: torch.Size([8, 8192]) +loss_mask batch tensor:torch.Size([8, 8192]) +attention_maskbatch tensor: attention_masktorch.Size([8, 1, 8192, 8192]) +torch.Size([8, 1, 8192, 8192]) +batch tensor:batch tensor: position_idsposition_ids torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp:batch tensor after cp: tokenstokens torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp:batch tensor after cp: labelslabels torch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor after cp:batch tensor after cp: loss_maskloss_mask torch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor after cp: batch tensor after cp:attention_mask attention_masktorch.Size([8, 1, 8192, 8192]) +torch.Size([8, 1, 8192, 8192])batch tensor after cp: + batch tensor after cp:position_ids position_idstorch.Size([8, 8192]) +torch.Size([8, 8192]) +Start exporting trace 1 +Done exporting trace 1 + [2025-06-21 21:20:03] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 1520.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss 
scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor: tokens batch tensor after cp: tokens torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp: labelsbatch tensor: torch.Size([8, 8192])labels + batch tensor after cp:torch.Size([8, 8192]) +loss_mask batch tensor:torch.Size([8, 8192]) +loss_mask batch tensor after cp:torch.Size([8, 8192]) +attention_mask batch tensor:torch.Size([8, 1, 8192, 8192]) +attention_maskbatch tensor after cp: position_idstorch.Size([8, 1, 8192, 8192]) +torch.Size([8, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192])batch tensor after cp: + batch tensor:batch tensor:tokens loss_masktorch.Size([8, 8192])tokens + batch tensor after cp:torch.Size([8, 8192]) +labelsbatch tensor: torch.Size([8, 8192])torch.Size([8, 8192])attention_mask + + batch tensor after cp:torch.Size([8, 1, 8192, 8192]) batch tensor:loss_mask + labelsbatch tensor:torch.Size([8, 8192]) +torch.Size([8, 8192])position_ids +batch tensor after cp: batch tensor: torch.Size([8, 8192]) attention_mask +loss_mask torch.Size([8, 1, 8192, 8192])torch.Size([8, 8192]) + +batch tensor after cp: batch tensor:position_ids attention_masktorch.Size([8, 8192]) +torch.Size([8, 1, 8192, 8192]) +batch tensor: batch tensor after cp:position_ids tokenstorch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after 
cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 2 +Done exporting trace 2 + [2025-06-21 21:20:04] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 852.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_idsbatch tensor: torch.Size([8, 8192]) + tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: batch tensor after cp:attention_mask tokens torch.Size([8, 1, 8192, 8192])torch.Size([8, 8192]) + +batch tensor:batch tensor after cp: position_idslabels torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) 
+batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 3 +Done exporting trace 3 + [2025-06-21 21:20:04] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 132.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])batch tensor: +batch tensor: position_ids torch.Size([8, 8192])tokens + torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192])batch tensor after cp: + batch tensor:tokens attention_masktorch.Size([8, 8192]) + batch tensor after cp: labelstorch.Size([8, 1, 8192, 8192]) +torch.Size([8, 8192])batch tensor: +batch tensor after cp: loss_mask position_idstorch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokensbatch tensor: torch.Size([8, 8192]) + batch tensor after 
cp:tokens labels torch.Size([8, 8192]) +batch tensor after cp: loss_masktorch.Size([8, 8192]) torch.Size([8, 8192]) + +batch tensor after cp: batch tensor:attention_mask labels torch.Size([8, 1, 8192, 8192]) +torch.Size([8, 8192])batch tensor after cp: + batch tensor:position_ids loss_mask torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192])batch tensor after cp: + tokens batch tensor:torch.Size([8, 8192]) +labelsbatch tensor after cp: torch.Size([8, 8192])labels + torch.Size([8, 8192]) +batch tensor:batch tensor after cp: loss_mask torch.Size([8, 8192])loss_mask + batch tensor:torch.Size([8, 8192]) + batch tensor after cp:attention_mask batch tensor: torch.Size([8, 1, 8192, 8192]) + batch tensor:attention_masktokens position_idstorch.Size([8, 1, 8192, 8192]) +torch.Size([8, 8192])batch tensor after cp:torch.Size([8, 8192]) +position_ids + batch tensor: labelstorch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: batch tensor:tokens position_idstorch.Size([8, 8192]) +torch.Size([8, 8192])batch tensor after cp: + labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 4 +Done exporting trace 4 + [2025-06-21 21:20:04] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 134.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])batch tensor: +batch tensor: position_ids tokenstorch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_maskbatch tensor after cp: 
tokens torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp:batch tensor: labelsattention_mask torch.Size([8, 8192]) +torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: batch tensor:loss_mask position_idstorch.Size([8, 8192]) +batch tensor after cp:torch.Size([8, 8192]) +attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens batch tensor:torch.Size([8, 8192]) + tokens batch tensor: labels torch.Size([8, 8192]) +torch.Size([8, 8192])batch tensor: + loss_mask torch.Size([8, 8192])batch tensor: + labelsbatch tensor: torch.Size([8, 8192])attention_mask + batch tensor:torch.Size([8, 1, 8192, 8192]) +loss_maskbatch tensor: torch.Size([8, 8192])position_ids + torch.Size([8, 8192])batch tensor: + attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192])batch tensor after cp: + batch tensor after cp:tokens loss_masktorch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor after cp:batch tensor after cp: labels attention_masktorch.Size([8, 8192]) +torch.Size([8, 1, 8192, 8192])batch tensor after cp: + batch tensor after cp:loss_mask position_idstorch.Size([8, 8192]) +torch.Size([8, 8192])batch tensor after cp: + attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp:batch tensor: labels torch.Size([8, 8192]) +tokensbatch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_masktorch.Size([8, 8192]) torch.Size([8, 1, 8192, 8192]) + +batch tensor after cp: position_idsbatch tensor: torch.Size([8, 8192])labels + torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 
8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 5 +Done exporting trace 5 + [2025-06-21 21:20:05] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 135.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels batch tensor:torch.Size([8, 8192]) +batch tensor: loss_masktokens torch.Size([8, 8192]) +batch tensor: attention_masktorch.Size([8, 8192]) +torch.Size([8, 1, 8192, 8192]) +batch tensor:batch tensor: position_idslabels torch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_idsbatch tensor after cp: torch.Size([8, 8192])tokens + torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids 
torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 6 +Done exporting trace 6 + [2025-06-21 21:20:05] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 133.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192])batch tensor: +batch tensor: loss_mask tokenstorch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 8192])torch.Size([8, 1, 8192, 8192]) + +batch tensor: position_idsbatch tensor: torch.Size([8, 8192])labels + torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp:batch tensor after cp: attention_mask tokenstorch.Size([8, 1, 8192, 8192]) +torch.Size([8, 8192]) +batch tensor after cp: batch tensor after cp:position_ids labelstorch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])batch tensor: +batch tensor: position_ids tokenstorch.Size([8, 8192]) 
+torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_maskbatch tensor after cp: tokenstorch.Size([8, 1, 8192, 8192]) +torch.Size([8, 8192]) +batch tensor:batch tensor after cp: position_idslabels torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokensbatch tensor: torch.Size([8, 8192]) +batch tensor after cp: tokenslabels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp: batch tensor:attention_mask labels torch.Size([8, 1, 8192, 8192]) +torch.Size([8, 8192])batch tensor after cp: + position_idsbatch tensor: torch.Size([8, 8192])loss_mask + torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 7 +Done exporting trace 7 + [2025-06-21 21:20:05] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 133.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels batch tensor:torch.Size([8, 8192]) +batch tensor: loss_masktokens torch.Size([8, 8192]) +batch tensor: attention_masktorch.Size([8, 8192]) torch.Size([8, 1, 8192, 8192]) + +batch tensor:batch tensor: labelsposition_ids torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask batch tensor after cp:torch.Size([8, 1, 8192, 8192]) +tokensbatch tensor after cp: batch tensor: torch.Size([8, 8192])position_ids + torch.Size([8, 8192])batch 
tensor after cp: +tokens labels torch.Size([8, 8192]) +batch tensor after cp: torch.Size([8, 8192])loss_mask +torch.Size([8, 8192])batch tensor: +batch tensor after cp: labelsattention_mask torch.Size([8, 8192])torch.Size([8, 1, 8192, 8192]) + +batch tensor:batch tensor after cp: loss_maskposition_ids torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor:batch tensor: position_idstokens torch.Size([8, 8192]) +torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor:batch tensor after cp: loss_masktokens torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp: labels torch.Size([8, 8192])batch tensor: + batch tensor after cp:attention_mask loss_mask torch.Size([8, 1, 8192, 8192])torch.Size([8, 8192]) + +batch tensor:batch tensor after cp: attention_maskposition_ids torch.Size([8, 1, 8192, 8192])torch.Size([8, 8192]) + +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 8 +Done exporting trace 8 + [2025-06-21 21:20:05] iteration 9/ 10 | consumed samples: 9 | elapsed time 
per iteration (ms): 134.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192])batch tensor: + tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192])batch tensor after cp: + batch tensor:tokens loss_mask torch.Size([8, 8192])torch.Size([8, 8192]) + +batch tensor after cp: batch tensor:labels attention_masktorch.Size([8, 8192]) +torch.Size([8, 1, 8192, 8192])batch tensor after cp: + batch tensor:loss_mask position_idstorch.Size([8, 8192]) +torch.Size([8, 8192])batch tensor after cp: + attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask 
torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 9 +Done exporting trace 9 + [2025-06-21 21:20:05] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 134.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +[after training is done] datetime: 2025-06-21 21:20:05 +saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format +DEBUG:megatron.training.checkpointing:rank: 7, takes 0.024217605590820312 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 4, takes 0.024455785751342773 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 6, takes 0.02464008331298828 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 1, takes 0.024644851684570312 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 0, takes 0.02470564842224121 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 3, takes 0.025150537490844727 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 5, takes 0.025212526321411133 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 2, takes 0.025650501251220703 to prepare state dict for ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0710339546203613 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0710883140563965 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0710439682006836 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0710883140563965 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.070911169052124 
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0702335834503174 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.071293592453003 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.0067827701568603516 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.004592418670654297 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.004601955413818359 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.0046100616455078125 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.004520416259765625 
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.0044672489166259766 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.004481077194213867 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.004083395004272461 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540808.0107894 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540808.0107899 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540808.0107932 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540808.0107942 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540808.0107956 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540808.010801 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540808.0108223 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.940696716308594e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.1552734375e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.012222290039062e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.797645568847656e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.512901306152344e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.870529174804688e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00011157989501953125 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.006936788558959961 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540808.015448 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00010657310485839844 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04913210868835449 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540808.060361 rank: 1, write(async) time: 0.04956841468811035 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05004763603210449 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540808.0612743 rank: 6, write(async) time: 0.050482749938964844 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05018138885498047 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540808.061406 rank: 7, write(async) time: 0.050614356994628906 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05061626434326172 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540808.0618773 rank: 3, write(async) time: 0.05107593536376953 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05188322067260742 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540808.0631225 rank: 5, write(async) time: 0.05232524871826172 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05283164978027344 
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540808.0640554 rank: 2, write(async) time: 0.05325961112976074 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05486011505126953 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540808.0661147 rank: 4, write(async) time: 0.05529212951660156 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05603313446044922 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540808.0719247 rank: 0, write(async) time: 0.056475162506103516 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 1.621246337890625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 1.7404556274414062e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 2.0742416381835938e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 2.1457672119140625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 1.621246337890625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 1.6450881958007812e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 1.6927719116210938e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.0289304256439209 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.027466297149658203 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.028212785720825195 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.03045177459716797 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.029480457305908203 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.03006291389465332 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 0.029594898223876953 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 1.3113021850585938e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.030603885650634766 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214687744, before: 1643679744, after: 1858367488 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214319104, before: 1641013248, after: 1855332352 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214331392, before: 1631760384, after: 1846091776 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214405120, before: 1632911360, after: 1847316480 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214425600, before: 1632583680, after: 1847009280 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216498176, before: 1643679744, after: 1860177920 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214753280, before: 1629265920, after: 1844019200 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216473600, before: 1641017344, after: 1857490944 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216612864, before: 1638285312, after: 1854898176 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216428544, before: 1632911360, after: 1849339904 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540809.2835784, rank: 1, write(sync,parallel): 1.012014389038086 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216530944, before: 1629265920, after: 1845796864 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540809.2902935, rank: 5, write(sync,parallel): 1.0051798820495605 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214446080, before: 1638285312, after: 1852731392 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216444928, before: 1632583680, after: 1849028608 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540809.3135726, rank: 7, write(sync,parallel): 1.0360107421875 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.08s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.08s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216526848, before: 1631760384, after: 1848287232 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540809.33052, rank: 4, write(sync,parallel): 1.0463225841522217 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540809.337384, rank: 6, write(sync,parallel): 1.0591201782226562 
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.10s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540809.3561854, rank: 3, write(sync,parallel): 1.0753779411315918 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.12s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.12s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540809.3714268, rank: 2, write(sync,parallel): 1.0908682346343994 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.15s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.16s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 227844096, before: 1906298880, after: 2134142976 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 229228544, before: 1906298880, after: 2135527424 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540809.5379992, rank: 0, write(sync,parallel): 1.156348466873169 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.24s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.586772, 1, gather: 0.2680501937866211 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.5869174, 2, gather: 0.17745280265808105 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.5869458, 3, gather: 0.19167065620422363 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.5870144, 4, gather: 0.21680688858032227 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.5870845, 5, gather: 0.25909876823425293 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.587158, 6, gather: 0.21511220932006836 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.587148, 7, gather: 0.23856139183044434 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.5886424, 0, gather: 0.004002809524536133 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540809.6018884, metadata_write: 0.013123750686645508 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2318s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0198s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2336s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2850s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1943s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2552s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2760s 
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2086s
+ successfully saved checkpoint from iteration 10 to gpt-checkpoint [ t 1/8, p 1/1 ]
+DEBUG:megatron.training.checkpointing:rank: 1, takes 0.002226114273071289 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 6, takes 0.0022232532501220703 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 2, takes 0.0022063255310058594 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 0, takes 0.0022368431091308594 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 7, takes 0.0022287368774414062 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 5, takes 0.002191781997680664 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 3, takes 0.0022194385528564453 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 4, takes 0.0022170543670654297 to finalize ckpt save
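The block above shows the fully parallel, asynchronous torch_dist checkpoint save completing: each rank prepares and writes only its own shards in background worker processes, and rank 0 writes the global metadata. As a rough illustration only (not Megatron's internal checkpointing API), the sketch below saves a sharded state dict with PyTorch's torch.distributed.checkpoint, which the torch_dist format is built on; CKPT_DIR and the plain state_dict layout are assumptions for the example.

```python
# Minimal sketch, assuming an already-initialized torch.distributed process group.
import torch.distributed as dist
import torch.distributed.checkpoint as dcp

CKPT_DIR = "gpt-checkpoint/iter_0000010"  # hypothetical directory layout

def save_checkpoint(model, optimizer):
    # Each rank contributes its own shards; the library coordinates the
    # global metadata write (the "metadata_write" step seen in the log).
    state_dict = {
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }
    dcp.save(state_dict, checkpoint_id=CKPT_DIR)
    if dist.get_rank() == 0:
        print(f"successfully saved checkpoint to {CKPT_DIR}")
```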
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+Evaluating on 1 samples
+Evaluating iter 1/1
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 10
+Done exporting trace 10
+(min, max) time across ranks (ms):
+    evaluate .......................................: (1745.04, 1745.13)
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+----------------------------------------------------------------------------------------------------------------
+ validation loss at iteration 10 on validation set | lm loss value: 1.178512E+01 | lm loss PPL: 1.312837E+05 |
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+----------------------------------------------------------------------------------------------------------------
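The reported perplexity is simply the exponential of the reported LM loss (a cross-entropy measured in nats), and the logged numbers are consistent with that. The snippet below is only a sanity check of that relationship, not code from this run.

```python
import math

lm_loss = 1.178512e+01        # lm loss value logged at iteration 10
ppl = math.exp(lm_loss)       # perplexity = exp(loss) for a loss in nats
print(f"lm loss PPL: {ppl:.4E}")  # ~1.3128E+05, matching the logged lm loss PPL
```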
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+Evaluating on 1 samples
+Evaluating iter 1/1
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 11
+Done exporting trace 11
+(min, max) time across ranks (ms):
+    evaluate .......................................: (51.14, 51.24)
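The paired "batch tensor" / "batch tensor after cp" lines above are per-rank shape dumps taken before and after the batch is sliced for context parallelism; the shapes are identical here, which is what you would expect when the context-parallel size is 1. The sketch below is a simplified, hypothetical version of that slicing step (slice_batch_for_cp and the toy shapes are illustrative, not Megatron's actual get_batch helper).

```python
import torch

def slice_batch_for_cp(batch, cp_size, cp_rank):
    """Keep this context-parallel rank's chunk of the sequence dimension.

    Simplified sketch: real implementations typically hand out interleaved
    sequence chunks (for causal-attention load balancing) rather than one
    contiguous slice. With cp_size == 1 every tensor passes through
    unchanged, which is why the shapes match before and after cp above.
    """
    if cp_size == 1:
        return batch
    out = {}
    for key, val in batch.items():
        seq_dim = 2 if key == "attention_mask" else 1  # mask is [b, 1, s, s]
        chunk = val.size(seq_dim) // cp_size
        out[key] = val.narrow(seq_dim, cp_rank * chunk, chunk)
    return out

# Toy shapes for illustration; in the log above they are [8, 8192] and
# [8, 1, 8192, 8192].
batch = {
    "tokens": torch.zeros(2, 16, dtype=torch.long),
    "labels": torch.zeros(2, 16, dtype=torch.long),
    "loss_mask": torch.ones(2, 16),
    "attention_mask": torch.ones(2, 1, 16, 16, dtype=torch.bool),
    "position_ids": torch.arange(16).repeat(2, 1),
}
for name, t in slice_batch_for_cp(batch, cp_size=2, cp_rank=0).items():
    print("batch tensor after cp:", name, t.shape)
```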
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +---------------------------------------------------------------------------------------------------------- + validation loss at iteration 10 on test set | lm loss value: 1.178512E+01 | lm loss PPL: 1.312837E+05 | +---------------------------------------------------------------------------------------------------------- +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +Running ctx_length=2048, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 2048 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... 
False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. 
False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 2048 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 
1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 
0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 2048 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. 
False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. 
None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... 
None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 2048 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 
1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.052 seconds +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. 
Compilation time: 2.408 seconds +time to initialize megatron (seconds): 8.477 +[after megatron is initialized] datetime: 2025-06-21 21:20:51 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 78706176 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 78706176 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 78706176 + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 78706176 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 78706176 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 78706176 + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 78706176 + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 78706176 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (78706176 elements, 78706176 padded size): + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.embedding.word_embeddings.weight + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc2.bias + 
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (5.89, 5.92) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:20:51 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
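Note: the split_matrix logged above follows directly from the split string '1,1,1': the three weights are normalized to fractions of 1.0 and turned into cumulative (start, end) bounds, one pair per train/valid/test split. A minimal sketch of that arithmetic (an illustration only, not the actual normalization helper inside megatron.core.datasets):

def split_to_matrix(split: str):
    # "1,1,1" -> [(0.0, 1/3), (1/3, 2/3), (2/3, 1.0)]
    weights = [float(w) for w in split.split(",")]
    total = sum(weights)
    matrix, acc = [], 0.0
    for w in weights:
        start = acc
        acc += w / total
        matrix.append((start, acc))
    # clamp the last bound so float rounding cannot leave it short of 1.0
    matrix[-1] = (matrix[-1][0], 1.0)
    return matrix

print(split_to_matrix("1,1,1"))  # matches the split_matrix reported in the log above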
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=2048, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005554 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 33296 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.002601 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 33281 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.002458 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 33343 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:20:51 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (287.60, 291.07) + train/valid/test-data-iterators-setup ..........: (24.68, 115.78) +training ... +Setting rerun_state_machine.current_iteration to 0... 
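Note: in the training log that follows, every one of the 10 iterations reports "number of skipped iterations: 1", and the loss scale halves each step, from the initial_loss_scale=4294967296 shown in the OptimizerConfig above down to 8388608.0 at iteration 10. A simplified sketch of that fp16 dynamic loss-scaling backoff (it ignores the hysteresis bookkeeping in the config and is not Megatron's actual grad scaler):

class SimpleLossScaler:
    # Halve the scale when gradients overflow; double it after a window of clean steps.
    def __init__(self, initial_scale=4294967296.0, window=1000,
                 backoff=0.5, growth=2.0, min_scale=1.0):
        self.scale = initial_scale
        self.window = window
        self.backoff = backoff
        self.growth = growth
        self.min_scale = min_scale
        self._good_steps = 0

    def update(self, found_inf: bool) -> bool:
        # Returns True when the optimizer step should be skipped.
        if found_inf:
            self.scale = max(self.scale * self.backoff, self.min_scale)
            self._good_steps = 0
            return True
        self._good_steps += 1
        if self._good_steps % self.window == 0:
            self.scale *= self.growth
        return False

scaler = SimpleLossScaler()
scales = []
for _ in range(10):               # ten overflowing iterations, as in this run
    scales.append(scaler.scale)
    scaler.update(found_inf=True)
print(scales[0], scales[-1])      # 4294967296.0 8388608.0, matching iterations 1 and 10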
+[before the start of training step] datetime: 2025-06-21 21:20:51 +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 
16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +Start exporting trace 0 +Done exporting trace 0 + [2025-06-21 21:20:57] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 6150.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +Number of parameters in transformer block in billions: 0.35 +Number of parameters in embedding layers in billions: 0.21 +Total number of parameters in billions: 0.56 +Number of parameters in most loaded shard in billions: 0.0703 +Theoretical memory footprints: weight and optimizer=1206.09 MB +[Rank 5] (after 1 iterations) memory (MB) | allocated: 3090.22607421875 | max allocated: 23022.73681640625 | reserved: 25412.0 | max reserved: 25412.0 +[Rank 4] (after 1 iterations) memory (MB) | allocated: 3090.22607421875 | max allocated: 23022.73681640625 | reserved: 25412.0 | max reserved: 25412.0 +[Rank 6] (after 1 iterations) memory (MB) | allocated: 3090.22607421875 | max allocated: 23022.73681640625 | reserved: 25412.0 | max reserved: 25412.0 +[Rank 2] (after 1 iterations) memory (MB) | allocated: 3090.22607421875 | max allocated: 23022.73681640625 | reserved: 25412.0 | max reserved: 25412.0 +[Rank 7] (after 1 iterations) memory (MB) | allocated: 3090.22607421875 | max allocated: 23022.73681640625 | reserved: 26436.0 | max reserved: 26436.0 +[Rank 0] (after 1 iterations) memory (MB) | allocated: 3090.22607421875 | max allocated: 23022.73681640625 | reserved: 25412.0 | max reserved: 25412.0 +[Rank 3] (after 1 iterations) memory (MB) | allocated: 3090.22607421875 | max allocated: 23022.73681640625 | reserved: 25412.0 | max reserved: 25412.0 +[Rank 1] (after 1 iterations) memory (MB) | allocated: 3090.22607421875 | max allocated: 23022.73681640625 | reserved: 25412.0 | max reserved: 25412.0 +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens 
torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 
16384])
+Start exporting trace 1
+Done exporting trace 1
+ [2025-06-21 21:20:58] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 373.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+Start exporting trace 2
+Done exporting trace 2
+ [2025-06-21 21:20:58] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 350.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+Start exporting trace 3
+Done exporting trace 3
+ [2025-06-21 21:20:58] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 342.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after 
cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +Start exporting trace 4 +Done exporting trace 4 + [2025-06-21 21:20:59] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 
347.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: 
loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +Start exporting trace 5 +Done exporting trace 5 + [2025-06-21 21:20:59] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 345.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels 
torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+Start exporting trace 6
+Done exporting trace 6
+ [2025-06-21 21:20:59] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 357.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+Start exporting trace 7
+Done exporting trace 7
+ [2025-06-21 21:21:00] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 343.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: 
tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: 
tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+Start exporting trace 8
+Done exporting trace 8
+ [2025-06-21 21:21:00] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 348.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids 
torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +Start exporting trace 9 +Done exporting trace 9 + [2025-06-21 21:21:00] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 344.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +[after training is done] datetime: 2025-06-21 21:21:00 +saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format +DEBUG:megatron.training.checkpointing:rank: 2, takes 0.047446489334106445 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 5, takes 0.04744696617126465 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 7, takes 0.04742884635925293 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 6, takes 0.04746246337890625 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 1, takes 0.0474853515625 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 3, takes 0.04751110076904297 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 4, takes 0.04758572578430176 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 0, takes 0.04787325859069824 to prepare state dict for ckpt +WARNING:megatron.core.dist_checkpointing.serialization:Overwriting old incomplete / corrupted checkpoint... 
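The two iteration summaries above show fp16 dynamic loss scaling backing off: each reports one skipped iteration, and the reported loss scale halves from 16777216.0 at iteration 9 to 8388608.0 at iteration 10. Below is a minimal sketch of that halve-on-overflow / grow-after-a-quiet-window rule, assuming the usual scheme implied by the initial_loss_scale, min_loss_scale and loss_scale_window arguments printed further down; the class and method names are hypothetical, not Megatron's.

```python
# Illustrative sketch only (not Megatron's loss scaler): skip the step and halve the
# scale whenever the fp16 gradients overflow, grow it back after a window of clean steps.
class DynamicLossScaleSketch:
    def __init__(self, initial_scale=2.0**32, min_scale=1.0,
                 backoff_factor=0.5, growth_factor=2.0, growth_interval=1000):
        self.scale = initial_scale
        self.min_scale = min_scale
        self.backoff_factor = backoff_factor
        self.growth_factor = growth_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, found_overflow: bool) -> bool:
        """Return True if this iteration should be counted as skipped."""
        if found_overflow:
            self.scale = max(self.scale * self.backoff_factor, self.min_scale)
            self._good_steps = 0
            return True
        self._good_steps += 1
        if self._good_steps % self.growth_interval == 0:
            self.scale *= self.growth_factor
        return False


scaler = DynamicLossScaleSketch(initial_scale=2.0**32)   # initial_loss_scale 4294967296
for _ in range(9):                                       # nine overflowing steps in a row
    scaler.update(found_overflow=True)
print(scaler.scale)                                      # 8388608.0 == 2**23
```

Starting from 2**32 and halving once per skipped iteration is consistent with the logged values: eight skips leave 2**24 = 16777216 for iteration 9 and a ninth leaves 2**23 = 8388608 for iteration 10.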
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0701558589935303 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0701587200164795 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0702571868896484 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0704734325408936 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0703845024108887 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.0706052780151367 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.071131944656372 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.007448911666870117 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of 
global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.006315946578979492 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.006306648254394531 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.006196260452270508 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540862.853816 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540862.8538272 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.0049211978912353516 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.005027294158935547 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540862.853845 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540862.8538692 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540862.8538852 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.608268737792969e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.465217590332031e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00010395050048828125 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00010633468627929688 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.608268737792969e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.002508878707885742 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.006544828414916992 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540862.8540225 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540862.8540282 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00010013580322265625 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00013017654418945312 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.0072095394134521484 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540862.858626 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00010633468627929688 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 
0.04658865928649902 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04656410217285156 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540862.9008646 rank: 3, write(async) time: 0.04703497886657715 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540862.9008837 rank: 4, write(async) time: 0.04699969291687012 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04675745964050293 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540862.901338 rank: 6, write(async) time: 0.047316789627075195 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.048253774642944336 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04848599433898926 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540862.902628 rank: 5, write(async) time: 0.048779964447021484 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540862.9027402 rank: 1, write(async) time: 0.04892253875732422 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.0488893985748291 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.049250125885009766 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540862.903472 rank: 2, write(async) time: 0.04944252967834473 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540862.903589 rank: 7, write(async) time: 0.04971766471862793 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04995274543762207 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540862.909087 rank: 0, write(async) time: 0.050461769104003906 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 1.5497207641601562e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 1.6450881958007812e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 1.6450881958007812e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 1.6450881958007812e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 4.2438507080078125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 1.71661376953125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 1.621246337890625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.025706052780151367 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.025520801544189453 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.031020402908325195 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 0.02749347686767578 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.028128862380981445 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.029539108276367188 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.038559913635253906 to schedule async ckpt 
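The "to finish D2H", "to schedule async ckpt" and later "joining self.process" timings above reflect an asynchronous save: each rank first copies its shard to host memory, then hands the actual file write to a background process and only joins it afterwards. A rough sketch of that pattern, assuming plain multiprocessing and torch.save; this is not the FileSystemWriterAsync / async_utils API, and the helper names are invented.

```python
# Minimal sketch of "D2H copy, then write in a forked process" async checkpointing.
import multiprocessing as mp

import torch


def _write_worker(path: str, cpu_state_dict: dict) -> None:
    # Runs in a separate process so the training loop is not blocked by file I/O.
    torch.save(cpu_state_dict, path)


def schedule_async_save(state_dict: dict, path: str) -> mp.Process:
    # Device-to-host copy happens synchronously (the "D2H" timing in the log) ...
    cpu_state = {k: v.detach().cpu() if torch.is_tensor(v) else v
                 for k, v in state_dict.items()}
    # ... while the write itself is forked off and joined later ("joining self.process").
    proc = mp.Process(target=_write_worker, args=(path, cpu_state))
    proc.start()
    return proc
```

The caller keeps the returned handle and joins it when the checkpoint has to be finalized, which appears to be what the "Async process join finished after N s from forking" lines further down measure.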
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 1.3828277587890625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.029468059539794922 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214740992, before: 1668866048, after: 1883607040 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214441984, before: 1651179520, after: 1865621504 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216547328, before: 1657335808, after: 1873883136 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216571904, before: 1651179520, after: 1867751424 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216682496, before: 1668866048, after: 1885548544 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214446080, before: 1648058368, after: 1862504448 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216469504, before: 1644466176, after: 1860935680 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 215994368, before: 1704005632, after: 1920000000 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216424448, before: 1674133504, after: 1890557952 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214392832, before: 1644466176, after: 1858859008 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216600576, before: 1648058368, after: 1864658944 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214372352, before: 1657335808, after: 1871708160 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540864.0917497, rank: 7, write(sync,parallel): 0.9690725803375244 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214437888, before: 1704005632, after: 1918443520 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214409216, before: 1674133504, after: 1888542720 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540864.1104023, rank: 4, write(sync,parallel): 0.9898681640625 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540864.11714, rank: 5, write(sync,parallel): 0.9922361373901367 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.04s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540864.1339746, rank: 2, write(sync,parallel): 1.0032286643981934 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540864.1372392, rank: 6, write(sync,parallel): 0.9979920387268066 
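The per-worker lines of the form "N consumed: X, before: Y, after: Z" look like memory readings taken around each write; in every case the "consumed" figure is exactly after minus before. A quick re-check of that arithmetic on two of the triples above (values copied from the log):

```python
# "consumed" is just the difference of the two byte counts sampled before/after the write.
samples = [
    (214740992, 1668866048, 1883607040),   # worker 0 on one rank
    (216547328, 1657335808, 1873883136),   # worker 1 on another rank
]
for consumed, before, after in samples:
    assert after - before == consumed
    print(f"{consumed} bytes -> {consumed / 2**20:.1f} MiB consumed during the write")
```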
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540864.1381836, rank: 3, write(sync,parallel): 1.0166881084442139 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.06s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540864.1483567, rank: 1, write(sync,parallel): 1.014906406402588 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.06s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.07s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.08s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.09s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.08s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 240558080, before: 1912029184, after: 2152587264 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 240439296, before: 1912045568, after: 2152484864 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540864.285279, rank: 0, write(sync,parallel): 1.0592100620269775 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.14s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.3336205, 1, gather: 0.1470801830291748 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.3336802, 2, gather: 0.16177701950073242 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.3338838, 3, gather: 0.16014623641967773 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.3341277, 4, gather: 0.18654561042785645 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.3341916, 7, gather: 0.2018890380859375 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.3341799, 5, gather: 0.18288803100585938 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.334417, 6, gather: 0.15107989311218262 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.3364, 0, gather: 0.004857540130615234 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540864.3470955, metadata_write: 0.010549545288085938 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0186s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1628s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1979s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1773s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2016s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1756s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2172s 
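After the parallel writes, the per-rank "gather" timings, the rank-0 "metadata_write" and the "finalize took" lines describe the coordination step: every rank reports its local write results to rank 0, which persists the checkpoint metadata once, and then everyone finalizes. A simplified sketch of that step, assuming an initialized torch.distributed process group and using a JSON file purely for illustration (the real strategy writes its own metadata format):

```python
# Sketch of the gather / metadata_write / finalize phase of a distributed checkpoint save.
import json
import time

import torch.distributed as dist


def finalize_save(local_results: dict, metadata_path: str) -> None:
    start = time.time()
    rank = dist.get_rank()
    gathered = [None] * dist.get_world_size() if rank == 0 else None
    dist.gather_object(local_results, gathered, dst=0)    # the per-rank "gather" timing
    if rank == 0:
        with open(metadata_path, "w") as f:                # the rank-0 "metadata_write" timing
            json.dump(gathered, f)
    dist.barrier()
    print(f"finalize took {time.time() - start:.4f}s")
```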
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1663s + successfully saved checkpoint from iteration 10 to gpt-checkpoint [ t 1/8, p 1/1 ] +DEBUG:megatron.training.checkpointing:rank: 1, takes 0.003470897674560547 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 7, takes 0.003464937210083008 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 2, takes 0.0034744739532470703 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 0, takes 0.0034568309783935547 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 6, takes 0.0033609867095947266 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 5, takes 0.003448009490966797 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 3, takes 0.0034356117248535156 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 4, takes 0.0034117698669433594 to finalize ckpt save +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +Evaluating on 1 samples +Evaluating iter 1/1 +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor 
after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+Start exporting trace 10
+Done exporting trace 10
+(min, max) time across ranks (ms):
+ evaluate .......................................: (1660.00, 1660.22)
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+----------------------------------------------------------------------------------------------------------------
+ validation loss at iteration 10 on validation set | lm loss value: 1.138065E+01 | lm loss PPL: 8.761018E+04 |
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+----------------------------------------------------------------------------------------------------------------
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+Evaluating on 1 samples
+Evaluating iter 1/1
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch
tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: 
position_ids torch.Size([8, 16384]) +Start exporting trace 11 +Done exporting trace 11 +(min, max) time across ranks (ms): + evaluate .......................................: (127.21, 127.32) +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +---------------------------------------------------------------------------------------------------------- + validation loss at iteration 10 on test set | lm loss value: 1.138065E+01 | lm loss PPL: 8.761018E+04 | +---------------------------------------------------------------------------------------------------------- +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +Running ctx_length=4096, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 4096 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. 
True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 
0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 4096 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 
0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. 
False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 4096 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... 
None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ 
False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 
1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 4096 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. 
False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. 
Compilation time: 0.045 seconds +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.591 seconds +time to initialize megatron (seconds): 7.154 +[after megatron is initialized] datetime: 2025-06-21 21:21:43 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 87094784 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 87094784 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 87094784 + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 87094784 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 87094784 + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 87094784 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 87094784 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 87094784 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (87094784 elements, 87094784 padded size): + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.embedding.word_embeddings.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + 
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (5.04, 5.08) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:21:43 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
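The split_matrix reported above is just the cumulative normalization of the requested split weights ("1,1,1", i.e. even thirds). A minimal sketch of that arithmetic in Python; the helper name split_to_matrix is chosen here for illustration and is not Megatron's API:

def split_to_matrix(weights):
    # Normalize split weights into cumulative (start, end) fractions of the corpus.
    total = sum(weights)
    matrix, cum = [], 0
    for w in weights:
        start = cum / total
        cum += w
        matrix.append((start, cum / total))
    return matrix

print(split_to_matrix([1, 1, 1]))
# [(0.0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]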
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=4096, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005098 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 16648 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.002039 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 16640 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.002042 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 16671 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:21:43 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (345.12, 348.88) + train/valid/test-data-iterators-setup ..........: (27.95, 109.44) +training ... +Setting rerun_state_machine.current_iteration to 0... 
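In the iteration logs that follow, every step reports "number of skipped iterations: 1" and a loss scale that halves each iteration (4294967296.0, 2147483648.0, 1073741824.0, ...). That is the expected behaviour of dynamic fp16 loss scaling given the OptimizerConfig above (initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000). The sketch below illustrates the mechanism under those assumptions; it is not Megatron's grad-scaler code, and the real implementation's hysteresis handling is omitted:

class SimpleLossScaler:
    # Back off on gradient overflow (the step is skipped), grow again after a
    # window of overflow-free steps. Defaults mirror the config logged above.
    def __init__(self, initial_scale=4294967296.0, min_scale=1.0,
                 growth_interval=1000, backoff_factor=0.5, growth_factor=2.0):
        self.scale = initial_scale
        self.min_scale = min_scale
        self.growth_interval = growth_interval  # cf. loss_scale_window
        self.backoff_factor = backoff_factor
        self.growth_factor = growth_factor
        self._good_steps = 0

    def update(self, found_inf: bool) -> None:
        if found_inf:
            # Overflow: the optimizer step is skipped and the scale is halved.
            self.scale = max(self.scale * self.backoff_factor, self.min_scale)
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                self.scale *= self.growth_factor
                self._good_steps = 0

scaler = SimpleLossScaler()
for _ in range(3):              # three overflowing (skipped) iterations
    scaler.update(found_inf=True)
print(scaler.scale)             # 536870912.0, consistent with iterations 1-4 below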
+[before the start of training step] datetime: 2025-06-21 21:21:43 +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 
32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+Start exporting trace 0
+Done exporting trace 0
+ [2025-06-21 21:21:52] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 9129.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+Number of parameters in transformer block in billions: 0.35
+Number of parameters in embedding layers in billions: 0.21
+Total number of parameters in billions: 0.56
+Number of parameters in most loaded shard in billions: 0.0703
+Theoretical memory footprints: weight and optimizer=1206.09 MB
+[Rank 3] (after 1 iterations) memory (MB) | allocated: 9333.72607421875 | max allocated: 49540.98681640625 | reserved: 54356.0 | max reserved: 54356.0
+[Rank 1] (after 1 iterations) memory (MB) | allocated: 9333.72607421875 | max allocated: 49540.98681640625 | reserved: 54356.0 | max reserved: 54356.0
+[Rank 7] (after 1 iterations) memory (MB) | allocated: 9333.72607421875 | max allocated: 49540.98681640625 | reserved: 54356.0 | max reserved: 54356.0
+[Rank 4] (after 1 iterations) memory (MB) | allocated: 9333.72607421875 | max allocated: 49540.98681640625 | reserved: 54356.0 | max reserved: 54356.0
+[Rank 6] (after 1 iterations) memory (MB) | allocated: 9333.72607421875 | max allocated: 49540.98681640625 | reserved: 54356.0 | max reserved: 54356.0
+[Rank 5] (after 1 iterations) memory (MB) | allocated: 9333.72607421875 | max allocated: 49540.98681640625 | reserved: 54356.0 | max reserved: 54356.0
+[Rank 0] (after 1 iterations) memory (MB) | allocated: 9333.72607421875 | max allocated: 49540.98681640625 | reserved: 54356.0 | max reserved: 54356.0
+[Rank 2] (after 1 iterations) memory (MB) | allocated: 9333.72607421875 | max allocated: 49540.98681640625 | reserved: 54356.0 | max reserved: 54356.0
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+Start
exporting trace 1 +Done exporting trace 1 + [2025-06-21 21:21:53] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 1052.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: 
position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +Start exporting trace 2 +Done exporting trace 2 + [2025-06-21 21:21:54] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 1027.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 
32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +Start exporting trace 3 +Done exporting trace 3 + [2025-06-21 21:21:55] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 1024.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after 
cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +Start exporting trace 4 +Done exporting trace 4 + [2025-06-21 21:21:56] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 
1018.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: 
loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +Start exporting trace 5 +Done exporting trace 5 + [2025-06-21 21:21:57] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 1034.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels 
torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +Start exporting trace 6 +Done exporting trace 6 + [2025-06-21 21:21:58] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 1017.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens 
torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 32768]) +batch tensor after cp: loss_mask torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor after cp: position_ids torch.Size([8, 32768]) +Start exporting trace 7 +Done exporting trace 7 + [2025-06-21 21:21:59] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 1021.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan 
iterations: 0 |
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+Start exporting trace 8
+Done exporting trace 8
+ [2025-06-21 21:22:00] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 1012.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+Start exporting trace 9
+Done exporting trace 9
+ [2025-06-21 21:22:01] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 1014.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+[after training is done] datetime: 2025-06-21 21:22:01
+saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
+DEBUG:megatron.training.checkpointing:rank: 7, takes 0.029221534729003906 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 6, takes 0.029253005981445312 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 5, takes 0.029274702072143555 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 0, takes 0.029543638229370117 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 3, takes 0.030131101608276367 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 1, takes 0.03185248374938965 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 2, takes 0.03436684608459473 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 4, takes 0.04229593276977539 to prepare state dict for ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
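The two iteration summaries above show the fp16 loss scale halving from 16777216.0 to 8388608.0 while each interval reports one skipped iteration, the signature of dynamic loss scaling backing off after a gradient overflow. A minimal sketch of that backoff/growth policy, illustrative only and not Megatron's actual scaler; the constants simply mirror the values seen in the log:

class ToyLossScaler:
    """Illustrative dynamic fp16 loss scaler: halve the scale and skip the
    optimizer step on overflow, grow it back after a long run of clean steps."""

    def __init__(self, init_scale=2 ** 24, growth_interval=1000):
        self.scale = float(init_scale)        # 2**24 == 16777216.0, as in the log
        self.growth_interval = growth_interval
        self._clean_steps = 0

    def update(self, found_inf: bool) -> bool:
        """Return True when the iteration should be counted as skipped."""
        if found_inf:
            self.scale /= 2.0                 # 16777216.0 -> 8388608.0
            self._clean_steps = 0
            return True
        self._clean_steps += 1
        if self._clean_steps == self.growth_interval:
            self.scale *= 2.0                 # cautiously grow the scale back
            self._clean_steps = 0
        return False


scaler = ToyLossScaler()
print(scaler.update(found_inf=True), scaler.scale)   # True 8388608.0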
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.1595733165740967 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.1596040725708008 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.159599781036377 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.1596400737762451 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.1590709686279297 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.159682035446167 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.1602625846862793 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.0072650909423828125 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save 
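The repeated "Apply save parallelization" / "parallel save sharding" lines come from the fully parallel save path: the sharded state dict is partitioned across the 8 ranks so that every rank persists a disjoint subset of the checkpoint instead of a single rank writing everything. A toy round-robin version of such an assignment, with hypothetical shard names; the real strategy also balances shard sizes and deduplicates replicated tensors:

def assign_shards(shard_names, world_size=8):
    """Toy 'fully parallel save' plan: deal shards out round-robin so each
    rank writes a disjoint subset of the checkpoint."""
    plan = {rank: [] for rank in range(world_size)}
    for i, name in enumerate(sorted(shard_names)):
        plan[i % world_size].append(name)
    return plan


# Hypothetical shard names, just to exercise the helper.
shards = [f"layers.{layer}.{name}" for layer in range(2)
          for name in ("qkv.weight", "proj.weight", "fc1.weight", "fc2.weight")]
print(assign_shards(shards)[0])   # shards rank 0 would write under this toy plan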
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.007737159729003906 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.007760524749755859 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.007696628570556641 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.007694244384765625 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540924.079593 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540924.079595 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540924.0795972 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540924.0795977 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.006170988082885742 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.004765748977661133 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540924.0796657 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.322166442871094e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.083747863769531e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.608268737792969e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.608268737792969e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540924.0796945 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.0027420520782470703 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00010776519775390625 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540924.0797813 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00011539459228515625 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00010895729064941406 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.008323907852172852 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540924.084511 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00011134147644042969 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.048734188079833984 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540924.128807 rank: 5, write(async) time: 0.0492098331451416 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04911088943481445 
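The "D2H and push" timings followed by "write(async)" above reflect the usual asynchronous-checkpoint pattern: tensors are first copied device-to-host (brief and blocking), and the slow file write then proceeds in the background so the main process can move on. A generic sketch of that pattern with a thread and torch.save, illustrative only; Megatron's FileSystemWriterAsync uses separate worker processes and the torch_dist sharded format:

import threading

import torch


def async_save(state_dict, path):
    """Stage tensors on the host first (the 'D2H and push' step), then let a
    background thread do the slow write (the 'write(async)' step)."""
    cpu_state = {
        key: value.detach().cpu() if torch.is_tensor(value) else value
        for key, value in state_dict.items()
    }
    writer = threading.Thread(target=torch.save, args=(cpu_state, path))
    writer.start()
    return writer          # .join() later, like the 'joining self.process' lines


handle = async_save(torch.nn.Linear(8, 8).state_dict(), "/tmp/ckpt_sketch.pt")
handle.join()              # finalize: wait for the background write to finish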
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.049249887466430664 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540924.1292224 rank: 7, write(async) time: 0.04962658882141113 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04927635192871094 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540924.1293721 rank: 6, write(async) time: 0.04977297782897949 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540924.129427 rank: 1, write(async) time: 0.04976034164428711 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05021023750305176 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540924.1303358 rank: 3, write(async) time: 0.050742149353027344 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.046769142150878906 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540924.1317284 rank: 0, write(async) time: 0.04721879959106445 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.054204702377319336 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540924.1343873 rank: 2, write(async) time: 0.05469012260437012 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06274294853210449 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540924.1430116 rank: 4, write(async) time: 0.0632326602935791 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 2.3365020751953125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 1.8835067749023438e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 2.4318695068359375e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 1.6689300537109375e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 1.6450881958007812e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 1.6689300537109375e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 1.6689300537109375e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.02905106544494629 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.027660846710205078 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.02837538719177246 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.03602290153503418 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 0.029557466506958008 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.037657737731933594 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.03875470161437988 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 
started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 1.3828277587890625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.0300595760345459 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214433792, before: 1793847296, after: 2008281088 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214503424, before: 1681575936, after: 1896079360 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214405120, before: 1693741056, after: 1908146176 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214355968, before: 1692864512, after: 1907220480 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214499328, before: 1743351808, after: 1957851136 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214482944, before: 1673068544, after: 1887551488 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 214675456, before: 1707081728, after: 1921757184 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216547328, before: 1692774400, after: 1909321728 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216653824, before: 1681575936, after: 1898229760 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216621056, before: 1793847296, after: 2010468352 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216702976, before: 1707081728, after: 1923784704 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216596480, before: 1693741056, after: 1910337536 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216641536, before: 1673068544, after: 1889710080 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540924.9723327, rank: 3, write(sync,parallel): 0.6097226142883301 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 216317952, before: 1743351808, after: 1959669760 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540924.9832647, rank: 1, write(sync,parallel): 0.6233716011047363 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540924.991234, rank: 7, write(sync,parallel): 0.6258518695831299 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540925.001489, rank: 4, write(sync,parallel): 0.6298003196716309 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540925.0047367, rank: 2, write(sync,parallel): 0.6366307735443115 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.68s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully 
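The per-worker "consumed / before / after" numbers above appear to be byte counts of the writer worker's memory, and "consumed" is simply the after-minus-before difference, roughly 205-255 MiB staged per worker here. A quick check of that arithmetic against the first worker line in this group (values copied from the log):

before, after, consumed = 1793847296, 2008281088, 214433792
assert after - before == consumed
print(f"worker staged ~{consumed / 2**20:.1f} MiB for its shard")   # ~204.5 MiB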
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540925.0130847, rank: 5, write(sync,parallel): 0.646996259689331 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540925.01614, rank: 6, write(sync,parallel): 0.6528716087341309 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.69s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.71s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.71s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.72s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.72s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.72s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 267005952, before: 1963257856, after: 2230263808 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 265531392, before: 1963257856, after: 2228789248 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540925.1980476, rank: 0, write(sync,parallel): 0.7040996551513672 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.77s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.2344456, 1, gather: 0.21375417709350586 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.2345288, 3, gather: 0.22632217407226562 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.2345815, 2, gather: 0.18529224395751953 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.234655, 5, gather: 0.18621563911437988 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.2347636, 4, gather: 0.1845099925994873 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.2350206, 7, gather: 0.19874954223632812 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.2349427, 6, gather: 0.18164634704589844 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.2370396, 0, gather: 0.004956960678100586 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540925.246701, metadata_write: 0.009536266326904297 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0174s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2405s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1996s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2002s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2282s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1987s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 
0.1956s
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.2128s
+ successfully saved checkpoint from iteration 10 to gpt-checkpoint [ t 1/8, p 1/1 ]
+DEBUG:megatron.training.checkpointing:rank: 3, takes 0.0024614334106445312 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 5, takes 0.0024924278259277344 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 6, takes 0.0024733543395996094 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 0, takes 0.0025196075439453125 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 4, takes 0.0024607181549072266 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 1, takes 0.0025119781494140625 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 7, takes 0.0024199485778808594 to finalize ckpt save
+DEBUG:megatron.training.checkpointing:rank: 2, takes 0.002729177474975586 to finalize ckpt save
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+Evaluating on 1 samples
+Evaluating iter 1/1
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor after cp: position_ids torch.Size([8, 32768])
+Start exporting trace 10
+Done exporting trace 10
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+(min, max) time across ranks (ms):
+ evaluate .......................................: (2061.17, 2061.55)
+----------------------------------------------------------------------------------------------------------------
+ validation loss at iteration 10 on validation set | lm loss value: 1.248712E+01 | lm loss PPL: 2.649039E+05 |
+----------------------------------------------------------------------------------------------------------------
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+Evaluating on 1 samples
+Evaluating iter 1/1
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 32768])
+batch tensor after cp: labels torch.Size([8, 32768])
+batch tensor after cp: loss_mask torch.Size([8, 32768])
+batch tensor after cp: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor
after cp: position_ids torch.Size([8, 32768]) +Start exporting trace 11 +Done exporting trace 11 +(min, max) time across ranks (ms): + evaluate .......................................: (367.12, 367.67) +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +---------------------------------------------------------------------------------------------------------- +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED + validation loss at iteration 10 on test set | lm loss value: 1.248712E+01 | lm loss PPL: 2.649039E+05 | +---------------------------------------------------------------------------------------------------------- +Running ctx_length=8192, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 8192 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. 
True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 
0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 8192 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 
0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. 
False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 8192 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... 
None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ 
False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 
1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 8192 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. 
False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +INFO:megatron.training.initialize:Setting logging level to 0 +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.046 seconds +> compiling and loading fused kernels ... 
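The padded-vocab line above (50257 -> 51200, i.e. 943 dummy tokens) is just the raw GPT-2 vocabulary rounded up to a multiple of make_vocab_size_divisible_by * tensor_model_parallel_size (128 * 8 = 1024). A minimal sketch of that arithmetic in plain Python, not Megatron's own helper:

import math

def pad_vocab_size(orig_vocab_size: int, divisible_by: int, tp_size: int) -> int:
    # Round the vocabulary up so every tensor-parallel shard gets an equal,
    # aligned slice; mirrors the "padded vocab" line in this log.
    multiple = divisible_by * tp_size            # 128 * 8 = 1024
    return math.ceil(orig_vocab_size / multiple) * multiple

padded = pad_vocab_size(50257, divisible_by=128, tp_size=8)
print(padded, padded - 50257)                    # 51200 943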
+INFO:megatron.training.initialize:Setting logging level to 0 +>>> done with compiling and loading fused kernels. Compilation time: 2.550 seconds +time to initialize megatron (seconds): 7.305 +[after megatron is initialized] datetime: 2025-06-21 21:22:44 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 103872000 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 103872000 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 103872000 + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 103872000 +>>> embedding>>> embedding + +>>> decoder>>> decoder + +>>> output_layer>>> output_layer + + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 103872000 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 103872000 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 103872000 + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 103872000 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (103872000 elements, 103872000 padded size): + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.embedding.position_embeddings.weight + module.embedding.word_embeddings.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.final_layernorm.weight + module.decoder.layers.1.self_attention.linear_proj.bias 
+ module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (239.30, 240.68) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:22:45 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
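The split_matrix reported above is the '1,1,1' split weights normalized into cumulative (start, end) fractions of the dataset. A toy re-derivation in plain Python (not the megatron.core.datasets code itself):

def build_split_matrix(weights):
    # Normalize the split weights and turn them into half-open (start, end)
    # intervals, e.g. [1, 1, 1] -> [(0, 1/3), (1/3, 2/3), (2/3, 1.0)].
    total = float(sum(weights))
    bounds = [0.0]
    for w in weights:
        bounds.append(bounds[-1] + w / total)
    return list(zip(bounds[:-1], bounds[1:]))

print(build_split_matrix([1, 1, 1]))  # three equal train/valid/test buckets, as in the log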
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=8192, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005160 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8324 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001900 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8320 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001765 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8335 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:22:45 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (763.03, 765.29) + train/valid/test-data-iterators-setup ..........: (30.78, 124.99) +training ... +Setting rerun_state_machine.current_iteration to 0... 
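The sizes=(10, 1, 1) requested from the dataset builder follow from this run's schedule arguments (train_iters=10, global_batch_size=1, eval_interval=16, eval_iters=1). A back-of-envelope reconstruction of that bookkeeping, assuming the usual Megatron accounting of one validation pass per eval_interval plus a final one:

def target_sample_counts(train_iters, global_batch_size, eval_interval, eval_iters):
    # Minimum number of samples each split must provide for this run.
    train = train_iters * global_batch_size
    valid = (train_iters // eval_interval + 1) * eval_iters * global_batch_size
    test = eval_iters * global_batch_size
    return train, valid, test

print(target_sample_counts(10, 1, 16, 1))  # (10, 1, 1) -> the sizes passed to the builder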
+[before the start of training step] datetime: 2025-06-21 21:22:45 +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 
65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +Start exporting trace 0 +Done exporting trace 0 + [2025-06-21 21:22:56] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 10759.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +Number of parameters in transformer block in billions: 0.35 +Number of parameters in embedding layers in billions: 0.21 +Total number of parameters in billions: 0.56 +Number of parameters in most loaded shard in billions: 0.0703 +Theoretical memory footprints: weight and optimizer=1206.09 MB +[Rank 6] (after 1 iterations) memory (MB) | allocated: 34108.72607421875 | max allocated: 114865.48681640625 | reserved: 127878.0 | max reserved: 127878.0 +[Rank 5] (after 1 iterations) memory (MB) | allocated: 34108.72607421875 | max allocated: 114865.48681640625 | reserved: 125830.0 | max reserved: 125830.0[Rank 7] (after 1 iterations) memory (MB) | allocated: 34108.72607421875 | max allocated: 114865.48681640625 | reserved: 125830.0 | max reserved: 125830.0 + +[Rank 1] (after 1 iterations) memory (MB) | allocated: 34108.72607421875 | max allocated: 114865.48681640625 | reserved: 125830.0 | max reserved: 125830.0 +[Rank 4] (after 1 iterations) memory (MB) | allocated: 34108.72607421875 | max allocated: 114865.48681640625 | reserved: 127878.0 | max reserved: 127878.0 +[Rank 3] (after 1 iterations) memory (MB) | allocated: 34108.72607421875 | max allocated: 114865.48681640625 | reserved: 127878.0 | max reserved: 127878.0 +[Rank 2] (after 1 iterations) memory (MB) | allocated: 34108.72607421875 | max allocated: 114865.48681640625 | reserved: 127878.0 | max reserved: 127878.0 +[Rank 0] (after 1 iterations) memory (MB) | allocated: 34108.72607421875 | max allocated: 114865.48681640625 | reserved: 125830.0 | max reserved: 125830.0 +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch 
tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: 
position_ids torch.Size([8, 65536]) +Start exporting trace 1 +Done exporting trace 1 + [2025-06-21 21:23:00] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 4337.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 
65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +Start exporting trace 2 +Done exporting trace 2 + [2025-06-21 21:23:04] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 4039.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor 
after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +Start exporting trace 3 +Done exporting trace 3 + [2025-06-21 21:23:08] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 3845.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask 
torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 65536]) +batch tensor after cp: labels torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 65536]) +Running ctx_length=12288, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: 
gpt-checkpoint +-------------------------------- +CTX_LENGTH: 12288 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... 
torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... 
True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 12288 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... 
False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. 
None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 12288 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... 
None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... 
learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 12288 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 
0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ 
False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.045 seconds +> compiling and loading fused kernels ... +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +>>> done with compiling and loading fused kernels. Compilation time: 2.505 seconds +time to initialize megatron (seconds): 7.225 +[after megatron is initialized] datetime: 2025-06-21 21:23:47 +building GPT model ... 
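The padded-vocab line above (50257 tokens padded with 943 dummies to 51200) follows from make_vocab_size_divisible_by=128 and tensor_model_parallel_size=8: the vocabulary is rounded up to a multiple of 128*8 so every tensor-parallel shard holds an equal, aligned slice. A minimal sketch of that rounding, using a hypothetical helper rather than Megatron's internal function:

import math

def pad_vocab_size(orig_vocab_size: int, divisible_by: int = 128, tp_size: int = 8) -> int:
    # round the vocabulary up so each tensor-parallel shard gets an equal
    # slice whose size is a multiple of make_vocab_size_divisible_by
    multiple = divisible_by * tp_size                    # 128 * 8 = 1024
    return math.ceil(orig_vocab_size / multiple) * multiple

padded = pad_vocab_size(50257)
print(padded, padded - 50257)                            # 51200 943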
+>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 120649216 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 120649216 + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 120649216 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 120649216 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 120649216 + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 120649216 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 120649216 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 120649216 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (120649216 elements, 120649216 padded size): + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight +INFO:megatron.core.optimizer:Setting up optimizer with 
config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.90, 2.94) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:23:48 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
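The per-rank parameter count printed above (120,649,216 for this seq_length=12288 run; the seq_length=16384 run later in the log reports 137,426,432) can be reproduced from the argument dump. A back-of-the-envelope sketch, assuming the word-embedding table is sharded over the 8 TP ranks while the learned absolute position-embedding table, biases of row-parallel layers, and layernorms are replicated, and the output layer is tied to the word embeddings:

def params_per_tp_rank(seq_len, h=4096, ffn=16384, layers=2,
                       heads=64, groups=16, kv=64, vocab=51200, tp=8):
    emb = (vocab // tp) * h + seq_len * h              # sharded word table + replicated position table
    qkv_out = (heads * kv + 2 * groups * kv) // tp     # fused Q + K + V output dim per rank (GQA)
    per_layer = (
        h * qkv_out + qkv_out            # linear_qkv weight + bias
        + 2 * h                          # linear_qkv input layernorm (weight + bias)
        + (h // tp) * h + h              # linear_proj weight + replicated bias
        + 2 * h                          # linear_fc1 layernorm (weight + bias)
        + h * (ffn // tp) + ffn // tp    # linear_fc1 weight + bias
        + (ffn // tp) * h + h            # linear_fc2 weight + replicated bias
    )
    return emb + layers * per_layer + 2 * h            # + final layernorm; output layer is tied

print(params_per_tp_rank(12288))   # 120649216
print(params_per_tp_rank(16384))   # 137426432

The 16,777,216-parameter difference between the two runs is exactly the 4096 extra position-embedding rows (4096 * 4096), consistent with that table being replicated on every rank.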
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=12288, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005290 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 5549 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001869 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 5546 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001730 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 5557 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:23:48 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (665.62, 679.14) + train/valid/test-data-iterators-setup ..........: (27.79, 118.56) +training ... +Setting rerun_state_machine.current_iteration to 0... 
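The first training step below prints per-rank batch tensors of shape [8, 98304] and an attention mask of logical shape [8, 1, 98304, 98304] (create_attention_mask_in_dataloader is True). Purely as a scale check, a quick arithmetic sketch of what that mask would occupy if it were materialized densely as torch.bool:

b, s = 8, 98304                          # shapes taken from the batch prints below
mask_elems = b * 1 * s * s               # [8, 1, 98304, 98304]
mask_bytes = mask_elems                  # torch.bool stores 1 byte per element
print(f"{mask_elems:,} elements, {mask_bytes / 2**30:.1f} GiB")
# 77,309,411,328 elements, 72.0 GiB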
+[before the start of training step] datetime: 2025-06-21 21:23:48 +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 98304]) +batch tensor after cp: labels torch.Size([8, 98304]) +batch tensor after cp: loss_mask torch.Size([8, 98304]) +batch tensor after cp: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor after cp: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 98304]) +batch tensor after cp: labels torch.Size([8, 98304]) +batch tensor after cp: loss_mask torch.Size([8, 98304]) +batch tensor after cp: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor after cp: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 98304]) +batch tensor after cp: labels torch.Size([8, 98304]) +batch tensor after cp: loss_mask torch.Size([8, 98304]) +batch tensor after cp: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor after cp: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 98304]) +batch tensor after cp: labels torch.Size([8, 98304]) +batch tensor after cp: loss_mask torch.Size([8, 98304]) +batch tensor after cp: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor after cp: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 98304]) +batch tensor after cp: labels torch.Size([8, 98304]) +batch tensor after cp: loss_mask torch.Size([8, 98304]) +batch tensor after cp: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor after cp: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 98304]) +batch tensor after cp: labels torch.Size([8, 98304]) +batch tensor after cp: loss_mask torch.Size([8, 98304]) +batch tensor after cp: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor after cp: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 
98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 98304]) +batch tensor after cp: labels torch.Size([8, 98304]) +batch tensor after cp: loss_mask torch.Size([8, 98304]) +batch tensor after cp: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor after cp: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 98304]) +batch tensor after cp: labels torch.Size([8, 98304]) +batch tensor after cp: loss_mask torch.Size([8, 98304]) +batch tensor after cp: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor after cp: position_ids torch.Size([8, 98304]) +Running ctx_length=16384, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 16384 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ 
False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 
96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 16384 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 
1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. 
False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 16384 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ 
None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... 
False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 
1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 16384 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... 
True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.045 seconds +> compiling and loading fused kernels ... 
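The "setting number of microbatches to constant 1" line above follows from the parallelism arguments in the dump; a minimal sketch, assuming the usual global_batch = micro_batch * data_parallel * num_microbatches bookkeeping:

world_size = 8
tp, pp, cp = 8, 1, 1                               # tensor / pipeline / context parallel sizes
dp = world_size // (tp * pp * cp)                  # data-parallel size = 1
global_batch, micro_batch = 1, 1
num_microbatches = global_batch // (micro_batch * dp)
print(dp, num_microbatches)                        # 1 1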
+INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +>>> done with compiling and loading fused kernels. Compilation time: 3.454 seconds +time to initialize megatron (seconds): 8.586 +[after megatron is initialized] datetime: 2025-06-21 21:24:34 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 137426432 + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 137426432 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 137426432 + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 137426432 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 137426432 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 137426432 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 137426432 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 137426432 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (137426432 elements, 137426432 padded size): + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.weight + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.embedding.word_embeddings.weight + 
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.final_layernorm.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.bias +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.90, 3.06) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:24:35 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
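The split_matrix above is just the cumulative form of the split weights: with mock data the builder falls back to split "1,1,1", so train/valid/test each cover one third of the sample space. A small sketch of that normalization, assuming the weights-string format shown in the log (illustrative, not the exact megatron.core.datasets code):

def split_matrix(weights_str: str):
    weights = [float(w) for w in weights_str.split(",")]
    total = sum(weights)
    bounds, start = [], 0.0
    for i, w in enumerate(weights):
        # Pin the final boundary to exactly 1.0, matching the printed split_matrix.
        end = 1.0 if i == len(weights) - 1 else start + w / total
        bounds.append((start, end))
        start = end
    return bounds

print(split_matrix("1,1,1"))
# [(0.0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]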
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=16384, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005946 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 4162 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001729 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 4160 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001450 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 4167 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:24:35 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (830.31, 857.93) + train/valid/test-data-iterators-setup ..........: (24.92, 125.58) +training ... +Setting rerun_state_machine.current_iteration to 0... 
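The "batch tensor" prints that follow report an attention_mask of shape [micro_batch, 1, seq, seq], which is what create_attention_mask_in_dataloader=True produces. A toy sketch of that layout at a tiny sequence length (assumption: a standard causal mask where True marks positions a token may not attend to; illustrative, not the dataloader's exact code):

import torch

def causal_attention_mask(batch: int, seq: int) -> torch.Tensor:
    # Upper-triangular True entries mask out future positions.
    masked = torch.triu(torch.ones(seq, seq, dtype=torch.bool), diagonal=1)
    return masked.unsqueeze(0).unsqueeze(0).expand(batch, 1, seq, seq)

print(causal_attention_mask(8, 4).shape)  # torch.Size([8, 1, 4, 4])

At seq_length=131072 this mask alone holds 131072*131072 entries per sample, which is why it dominates the per-step tensor sizes printed below.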
+[before the start of training step] datetime: 2025-06-21 21:24:35 +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor after cp: tokens torch.Size([8, 131072]) +batch tensor after cp: labels torch.Size([8, 131072]) +batch tensor after cp: loss_mask torch.Size([8, 131072]) +batch tensor after cp: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor after cp: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor after cp: tokens torch.Size([8, 131072]) +batch tensor after cp: labels torch.Size([8, 131072]) +batch tensor after cp: loss_mask torch.Size([8, 131072]) +batch tensor after cp: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor after cp: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor after cp: tokens torch.Size([8, 131072]) +batch tensor after cp: labels torch.Size([8, 131072]) +batch tensor after cp: loss_mask torch.Size([8, 131072]) +batch tensor after cp: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor after cp: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor after cp: tokens torch.Size([8, 131072]) +batch tensor after cp: labels torch.Size([8, 131072]) +batch tensor after cp: loss_mask torch.Size([8, 131072]) +batch tensor after cp: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor after cp: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor after cp: tokens torch.Size([8, 131072]) +batch tensor after cp: labels torch.Size([8, 131072]) +batch tensor after cp: loss_mask torch.Size([8, 131072]) +batch tensor after cp: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor after cp: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor after cp: tokens torch.Size([8, 131072]) +batch tensor after cp: labels torch.Size([8, 131072]) +batch tensor after cp: loss_mask torch.Size([8, 131072]) +batch tensor after cp: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor after cp: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask 
torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor after cp: tokens torch.Size([8, 131072]) +batch tensor after cp: labels torch.Size([8, 131072]) +batch tensor after cp: loss_mask torch.Size([8, 131072]) +batch tensor after cp: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor after cp: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor after cp: tokens torch.Size([8, 131072]) +batch tensor after cp: labels torch.Size([8, 131072]) +batch tensor after cp: loss_mask torch.Size([8, 131072]) +batch tensor after cp: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor after cp: position_ids torch.Size([8, 131072]) +Running ctx_length=24576, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 24576 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ 
True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 
256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 24576 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. 
False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 
1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 24576 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ 
False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... 
False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 
8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 24576 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. 
False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... 
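The per-rank parameter counts reported just below (170,980,864 for this seq_length=24576 run, versus 137,426,432 for the seq_length=16384 run earlier in the log) differ only by the learned absolute position-embedding table, which holds one hidden_size-wide vector per position; the size of the gap is consistent with that table being replicated rather than split across tensor-parallel ranks. A quick check using only numbers printed in this log:

hidden_size = 4096
params_seq_16384 = 137_426_432  # reported above for the 16384-token run
params_seq_24576 = 170_980_864  # reported below for the 24576-token run

delta = params_seq_24576 - params_seq_16384
assert delta == (24576 - 16384) * hidden_size  # 33,554,432 extra position-embedding parameters
print(delta)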
+make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.041 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 4.600 seconds +time to initialize megatron (seconds): 9.573 +[after megatron is initialized] datetime: 2025-06-21 21:25:20 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 170980864 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 170980864 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 170980864 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 170980864 + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 170980864 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 170980864 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 170980864 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 170980864 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (170980864 elements, 170980864 padded size): + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.final_layernorm.weight + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + 
module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine + loading distributed checkpoint from gpt-checkpoint at iteration 10 +Running ctx_length=32768, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 32768 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... 
True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. 
single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 32768 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... 
False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 
1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 32768 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 
1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 
1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ 
False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 32768 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. 
True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... 
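Note: the two bookkeeping messages around this point, the constant microbatch count just above and the padded vocabulary reported just below, follow directly from the printed arguments. A minimal sketch of that arithmetic, assuming Megatron's usual rule of padding the vocab to a multiple of make_vocab_size_divisible_by times the tensor-parallel size; the helper names are illustrative, not the library API:

```python
# Illustrative helpers (not the Megatron-LM API) for two derived quantities
# echoed in this log, computed from the argument dump above.

def padded_vocab_size(vocab_size: int, divisible_by: int, tp_size: int) -> int:
    """Round the tokenizer vocab up so every tensor-parallel shard gets an
    equal, aligned slice: next multiple of divisible_by * tp_size."""
    multiple = divisible_by * tp_size
    return ((vocab_size + multiple - 1) // multiple) * multiple

def constant_num_microbatches(global_batch: int, micro_batch: int, dp_size: int) -> int:
    """With no batch-size ramp-up, the microbatch count is the global batch
    split over (micro batch x data-parallel replicas)."""
    assert global_batch % (micro_batch * dp_size) == 0
    return global_batch // (micro_batch * dp_size)

# Values from the printed arguments: GPT2BPETokenizer vocab 50257,
# make_vocab_size_divisible_by=128, tensor_model_parallel_size=8,
# global_batch_size=1, micro_batch_size=1, data_parallel_size=1.
print(padded_vocab_size(50257, 128, 8))    # 51200 -> 943 dummy tokens, as logged
print(constant_num_microbatches(1, 1, 1))  # 1, matching "constant 1" above
```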
+INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.047 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +>>> done with compiling and loading fused kernels. Compilation time: 2.354 seconds +time to initialize megatron (seconds): 7.350 +[after megatron is initialized] datetime: 2025-06-21 21:25:58 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 204535296 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 204535296 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 204535296 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 204535296 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 204535296 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 204535296 + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 204535296 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 204535296 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) 
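Note: the per-rank count of 204535296 parameters reported above can be reconstructed from the printed arguments (hidden_size 4096, ffn_hidden_size 16384, 2 layers, 64 heads with 16 query groups of kv_channels 64, learned absolute position embeddings over 32768 positions, tied output weights, TP=8). A rough sketch of where that number comes from; the replicated-versus-sharded assumptions in the comments mirror the bucket listing below and should be read as an illustration, not a statement of Megatron internals:

```python
# Approximate per-tensor-parallel-rank parameter count for the model above.
TP = 8
hidden, ffn = 4096, 16384
heads, groups, head_dim = 64, 16, 64           # GQA: 64 query heads, 16 KV groups
padded_vocab, max_pos, layers = 51200, 32768, 2

qkv_out = heads * head_dim + 2 * groups * head_dim   # 4096 + 2048 = 6144

per_layer = (
    2 * hidden                                  # qkv layer_norm weight + bias (replicated)
    + qkv_out * hidden // TP + qkv_out // TP    # column-parallel QKV weight + bias
    + hidden * hidden // TP + hidden            # row-parallel proj weight + replicated bias
    + 2 * hidden                                # fc1 layer_norm weight + bias
    + ffn * hidden // TP + ffn // TP            # column-parallel fc1 weight + bias
    + hidden * ffn // TP + hidden               # row-parallel fc2 weight + replicated bias
)

total = (
    padded_vocab * hidden // TP                 # sharded word embeddings (output layer tied)
    + max_pos * hidden                          # learned position embeddings (replicated)
    + layers * per_layer
    + 2 * hidden                                # final LayerNorm weight + bias
)
print(total)  # 204_535_296, matching the per-rank count in the log
```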
+INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (204535296 elements, 204535296 padded size): + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.embedding.position_embeddings.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (3.21, 3.84) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:26:00 +> building train, validation, and test datasets ... 
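Note: once training starts below, every rank fails while materializing the dataloader-side attention mask (each traceback stops at `attention_mask = torch.ones(`). The exact shape and dtype used by pretrain_gpt_profile.py are not visible in this log, so the sketch below only illustrates the quadratic cost of a dense [batch, 1, seq, seq] mask at seq_length 32768; the batch size of 8 is taken from the profiling sweep's BATCH_SIZE=8 banner and is an assumption for this step. The PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True hint in the error text targets fragmentation, which is not the limiting factor when a single 512.00 GiB request already exceeds the 139.81 GiB card.

```python
# Back-of-the-envelope memory for a dense attention mask, to put the
# out-of-memory failures reported below in context (shape/dtype assumed).
import torch

def dense_mask_bytes(batch: int, seq: int, dtype: torch.dtype = torch.bool) -> int:
    # A [b, 1, s, s] mask like the one implied by
    # create_attention_mask_in_dataloader=True (shape assumed here).
    return batch * 1 * seq * seq * torch.tensor([], dtype=dtype).element_size()

seq = 32768
for dtype in (torch.bool, torch.float32):
    gib = dense_mask_bytes(8, seq, dtype) / 2**30
    print(f"{dtype}: {gib:.0f} GiB")  # bool: 8 GiB, float32: 32 GiB for batch 8
# The cost grows with seq**2, so a fully materialized mask at 32k-40k context
# quickly outgrows a single GPU; attention kernels that apply causal masking
# implicitly avoid allocating this tensor at all.
```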
+ > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... +INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=32768, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004606 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2081 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001621 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2080 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001421 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2083 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:26:00 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (1539.95, 1551.37) + train/valid/test-data-iterators-setup ..........: (61.26, 160.17) +training ... +Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 21:26:00 +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 136.62 GiB is free. Including non-PyTorch memory, this process has 3.19 GiB memory in use. Of the allocated memory 1.56 GiB is allocated by PyTorch, and 171.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=40960, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 40960 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. 
None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 
0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 40960 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 
128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 
0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 40960 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... 
False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 
1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 40960 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... 
None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. 
False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.040 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +>>> done with compiling and loading fused kernels. Compilation time: 2.861 seconds +time to initialize megatron (seconds): 7.467 +[after megatron is initialized] datetime: 2025-06-21 21:26:35 +building GPT model ... 
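The padded-vocab line above comes from rounding the GPT2BPETokenizer's 50257 entries up to a multiple of make_vocab_size_divisible_by (128) times the tensor-model-parallel size (8), so each of the 8 tensor-parallel ranks holds an equal, aligned slice of the embedding table. A minimal sketch of that rounding (the helper name pad_vocab_size is illustrative, not Megatron's actual function):

def pad_vocab_size(orig_vocab_size: int, divisible_by: int = 128, tp_size: int = 8) -> int:
    # Round the vocabulary up to the next multiple of divisible_by * tp_size.
    multiple = divisible_by * tp_size  # 128 * 8 = 1024
    return ((orig_vocab_size + multiple - 1) // multiple) * multiple

# 50257 -> 51200, i.e. 943 dummy tokens, matching the "padded vocab" line in the log.
assert pad_vocab_size(50257) == 51200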
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 238089728
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 238089728
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 238089728
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 238089728
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 238089728
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 238089728
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 238089728
+INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
+Params for bucket 1 (238089728 elements, 238089728 padded size):
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.bias
+ module.decoder.layers.0.mlp.linear_fc2.bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.bias
+ module.embedding.position_embeddings.weight
+ module.decoder.layers.1.mlp.linear_fc1.weight
+ module.decoder.layers.0.mlp.linear_fc1.weight
+ module.decoder.final_layernorm.bias
+ module.decoder.layers.1.mlp.linear_fc2.bias
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
+ module.embedding.word_embeddings.weight
+ module.decoder.layers.1.mlp.linear_fc1.bias
+ module.decoder.final_layernorm.weight
+ module.decoder.layers.0.mlp.linear_fc1.bias
+ module.decoder.layers.1.self_attention.linear_qkv.weight
+ module.decoder.layers.1.self_attention.linear_proj.weight
+ module.decoder.layers.0.self_attention.linear_qkv.weight
+ module.decoder.layers.0.self_attention.linear_proj.weight
+ module.decoder.layers.1.mlp.linear_fc2.weight
+ module.decoder.layers.1.self_attention.linear_proj.bias
+ module.decoder.layers.0.self_attention.linear_proj.bias
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.mlp.linear_fc2.weight
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
+INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='')
+INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 238089728
+WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
+    will not load any checkpoints and will start from random
+(min, max) time across ranks (ms):
+    load-checkpoint ................................: (4.51, 4.72)
+[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:26:37
+> building train, validation, and test datasets ...
+ > datasets target sizes (minimum size):
+    train: 10
+    validation: 1
+    test: 1
+INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
+INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
+INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
+> building train, validation, and test datasets for GPT ...
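The split_matrix logged just above is the cumulative, normalized form of split = 1,1,1: each of the train/validation/test splits gets an equal [start, end) fraction of the mock dataset. A small sketch of that conversion (the helper name split_to_matrix is illustrative, not the library's API):

def split_to_matrix(weights):
    # Normalize split weights (e.g. [1, 1, 1]) into cumulative (start, end) fractions.
    total = sum(weights)
    cumulative = [sum(weights[: i + 1]) / total for i in range(len(weights))]
    return list(zip([0.0] + cumulative[:-1], cumulative))

# [1, 1, 1] -> [(0.0, 0.333...), (0.333..., 0.666...), (0.666..., 1.0)], as in the log line above.
print(split_to_matrix([1, 1, 1]))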
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=40960, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
+INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
+DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004626 seconds
+INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1664
+INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
+DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001662 seconds
+INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1664
+INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
+DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001370 seconds
+INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1667
+INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+> finished creating GPT datasets ...
+[after dataloaders are built] datetime: 2025-06-21 21:26:38
+done with setup ...
+(min, max) time across ranks (ms):
+    model-and-optimizer-setup ......................: (2340.91, 2356.86)
+    train/valid/test-data-iterators-setup ..........: (32.34, 122.35)
+training ...
+Setting rerun_state_machine.current_iteration to 0...
+[before the start of training step] datetime: 2025-06-21 21:26:38
+WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 136.34 GiB is free. Including non-PyTorch memory, this process has 3.46 GiB memory in use. Of the allocated memory 1.82 GiB is allocated by PyTorch, and 189.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=49152, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 49152 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ 
True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 
0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 49152 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ 
None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... 
False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 49152 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. 
False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. 
False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... 
False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 49152 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... 
False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.041 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.467 seconds +time to initialize megatron (seconds): 7.360 +[after megatron is initialized] datetime: 2025-06-21 21:27:12 +building GPT model ... 
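The padded-vocab line above (50257 -> 51200, i.e. 943 dummy tokens) follows from rounding the tokenizer vocab up to a multiple of make_vocab_size_divisible_by (128) times tensor_model_parallel_size (8), both reported in the argument dump. A minimal sketch of that arithmetic, assuming the usual round-up-to-multiple rule; the helper name pad_vocab_size is illustrative, not Megatron's API:

def pad_vocab_size(orig_vocab_size: int,
                   make_vocab_size_divisible_by: int = 128,
                   tensor_model_parallel_size: int = 8) -> int:
    # Round the vocab up to a multiple of (divisible_by * TP size) so every
    # tensor-parallel rank gets an equally sized slice of the embedding table.
    multiple = make_vocab_size_divisible_by * tensor_model_parallel_size
    return ((orig_vocab_size + multiple - 1) // multiple) * multiple

padded = pad_vocab_size(50257)      # GPT2BPETokenizer vocab size from the log
print(padded, padded - 50257)       # 51200 and 943 dummy tokens, matching the log line above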
+>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 271644160 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 271644160 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 271644160 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 271644160 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 271644160 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 271644160 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 271644160 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (271644160 elements, 271644160 padded size): + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.embedding.word_embeddings.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.embedding.position_embeddings.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, 
bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 271644160 +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.93, 3.69) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:27:14 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
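The out-of-memory failures reported below are raised while setup_batches materializes a dense attention mask with torch.ones (the dataset config below has create_attention_mask=True and the argument dump shows use_flash_attn False), and such a mask grows quadratically with seq_length. A rough footprint estimate, as a sketch only: the [batch, 1, seq, seq] shape and fp32 dtype are assumptions for illustration, and the exact 1152.00 GiB figure in the warnings depends on details of the profiling script not shown in this log.

def dense_mask_gib(batch: int, seq_len: int, bytes_per_elem: int = 4) -> float:
    # Memory, in GiB, for a dense [batch, 1, seq_len, seq_len] attention mask.
    return batch * seq_len * seq_len * bytes_per_elem / 2**30

print(dense_mask_gib(1, 49152))   # ~9.0 GiB for a single fp32 mask at seq_length=49152
print(dense_mask_gib(8, 49152))   # ~72 GiB for a batch of 8, already a large fraction of one device

Note that PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True, suggested in the warnings, only mitigates fragmentation; it cannot help when a single allocation exceeds the 139.81 GiB device capacity, so the usual remedies are an implicit causal mask inside the attention kernel (e.g. flash attention) or not building the full mask in the dataloader at all.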
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=49152, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005276 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1387 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001670 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1386 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001411 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1389 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:27:15 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (2315.75, 2321.74) + train/valid/test-data-iterators-setup ..........: (26.39, 111.03) +training ... +Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 21:27:15 +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1152.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.97 GiB is free. Including non-PyTorch memory, this process has 3.84 GiB memory in use. Of the allocated memory 2.08 GiB is allocated by PyTorch, and 307.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=65536, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 65536 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. 
None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 
0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 65536 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 
128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 
0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 65536 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... 
False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 
1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 65536 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... 
None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. 
False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +INFO:megatron.training.initialize:Setting logging level to 0 +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.046 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +>>> done with compiling and loading fused kernels. Compilation time: 2.653 seconds +time to initialize megatron (seconds): 7.650 +[after megatron is initialized] datetime: 2025-06-21 21:27:51 +building GPT model ... 
+>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 338753024 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 338753024 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 338753024 + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 338753024 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 338753024 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 338753024 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 338753024 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (338753024 elements, 338753024 padded size): + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.embedding.word_embeddings.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.embedding.position_embeddings.weight + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, 
bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 338753024 +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.71, 3.86) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:27:55 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=65536, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004865 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1040 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001679 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1040 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001474 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1041 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:27:55 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (3780.12, 3780.52) + train/valid/test-data-iterators-setup ..........: (27.25, 106.40) +training ... +Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 21:27:55 +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2048.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.22 GiB is free. Including non-PyTorch memory, this process has 4.59 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 543.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=81920, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 81920 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... 
None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 
10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 81920 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 
224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. 
cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 81920 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... 
allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... 
True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 81920 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... 
False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. 
False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +INFO:megatron.training.initialize:Setting logging level to 0 +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.046 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.623 seconds +time to initialize megatron (seconds): 7.514 +[after megatron is initialized] datetime: 2025-06-21 21:28:31 +building GPT model ... 
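Annotation on the padded-vocab line above (50257 -> 51200 with 943 dummy tokens): the new size follows from rounding the tokenizer vocab up to a multiple of make_vocab_size_divisible_by (128) times tensor_model_parallel_size (8). The helper below is a standalone re-derivation written for illustration, not Megatron's own padding function:

# Hypothetical re-derivation of the "padded vocab" arithmetic logged above.
# Assumption: the vocab is rounded up to a multiple of
# make_vocab_size_divisible_by * tensor_model_parallel_size.
def padded_vocab_size(orig_vocab_size: int,
                      make_vocab_size_divisible_by: int = 128,
                      tensor_model_parallel_size: int = 8) -> int:
    multiple = make_vocab_size_divisible_by * tensor_model_parallel_size
    return ((orig_vocab_size + multiple - 1) // multiple) * multiple

assert padded_vocab_size(50257) == 51200          # "new size" in the log
assert padded_vocab_size(50257) - 50257 == 943    # "dummy tokens" in the log

Padding to 51200 also keeps the word-embedding rows evenly divisible across the 8 tensor-parallel ranks (6400 rows each).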
+>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 405861888 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 405861888 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 405861888 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 405861888 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 405861888 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 405861888 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 405861888 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 405861888 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (405861888 elements, 405861888 padded size): + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.bias + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.bias +INFO:megatron.core.optimizer:Setting up optimizer with 
config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.97, 3.21) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:28:34 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
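Annotation on the split_matrix reported above: it is simply the normalized cumulative form of split = '1,1,1'. The parsing sketch below is hypothetical and only meant to reproduce the logged fractions, not the actual megatron.core.datasets code path:

# Hypothetical sketch: map a "1,1,1"-style split string to the
# [(start, end), ...] fractions that the dataset builder logs as split_matrix.
def split_matrix(split: str):
    weights = [float(w) for w in split.split(",")]
    total = sum(weights)
    bounds, start = [], 0.0
    for w in weights:
        end = start + w / total
        bounds.append((start, end))
        start = end
    return bounds

print(split_matrix("1,1,1"))
# [(0.0, 0.3333...), (0.3333..., 0.6666...), (0.6666..., 1.0)]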
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=81920, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004925 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 832 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001641 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 832 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001411 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 833 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 21:28:34 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (3613.50, 3613.62) + train/valid/test-data-iterators-setup ..........: (28.47, 121.18) +training ... +Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 21:28:34 +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3200.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.09 GiB is free. Including non-PyTorch memory, this process has 4.71 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 139.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=98304, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 98304 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. 
None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 
0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 98304 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 
128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 
0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 98304 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... 
False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 
1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 98304 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... 
None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. 
False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.042 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.616 seconds +time to initialize megatron (seconds): 7.960 +[after megatron is initialized] datetime: 2025-06-21 21:29:11 +building GPT model ... 
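The padded-vocab line above (50257 -> 51200, i.e. 943 dummy tokens) is consistent with the usual Megatron rule of padding the vocabulary to a multiple of make_vocab_size_divisible_by (128) times the tensor-model-parallel size (8). A minimal sketch of that arithmetic, assuming that padding rule:

    # Reproduce the padded vocab size reported in the log, assuming Megatron
    # pads to the next multiple of make_vocab_size_divisible_by * TP size.
    def padded_vocab_size(orig_vocab_size: int, divisible_by: int, tp_size: int) -> int:
        multiple = divisible_by * tp_size            # 128 * 8 = 1024
        return ((orig_vocab_size + multiple - 1) // multiple) * multiple

    print(padded_vocab_size(50257, 128, 8))          # 51200 = 50257 + 943 dummy tokens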
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 472970752
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 472970752
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 472970752
+>>> embedding
+>>> decoder
+>>> output_layer
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 472970752
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 472970752
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 472970752
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 472970752
+INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
+Params for bucket 1 (472970752 elements, 472970752 padded size):
+ module.decoder.layers.1.mlp.linear_fc1.bias
+ module.decoder.layers.0.mlp.linear_fc1.bias
+ module.decoder.final_layernorm.bias
+ module.decoder.layers.1.self_attention.linear_qkv.weight
+ module.decoder.layers.1.self_attention.linear_proj.weight
+ module.decoder.layers.0.self_attention.linear_qkv.weight
+ module.decoder.layers.1.mlp.linear_fc2.weight
+ module.decoder.layers.1.self_attention.linear_proj.bias
+ module.decoder.final_layernorm.weight
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.mlp.linear_fc2.weight
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.bias
+ module.decoder.layers.0.mlp.linear_fc2.bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.bias
+ module.decoder.layers.1.mlp.linear_fc1.weight
+ module.decoder.layers.0.mlp.linear_fc1.weight
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
+ module.embedding.position_embeddings.weight
+ module.decoder.layers.1.mlp.linear_fc2.bias
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_proj.weight
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_proj.bias
+ module.embedding.word_embeddings.weight
+INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='')
+INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 472970752
+WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
+ will not load any checkpoints and will start from random
+(min, max) time across ranks (ms):
+    load-checkpoint ................................: (3.76, 3.99)
+[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:29:15
+> building train, validation, and test datasets ...
+ > datasets target sizes (minimum size):
+    train:      10
+    validation: 1
+    test:       1
+INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
+INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
+INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
+> building train, validation, and test datasets for GPT ...
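The 472970752 parameters per (tensor, pipeline) rank reported above can be reproduced from the arguments dump (hidden_size 4096, ffn_hidden_size 16384, num_layers 2, num_attention_heads 64, num_query_groups 16, kv_channels 64, padded vocab 51200, TP size 8). The sketch below is a back-of-envelope check, not Megatron's own accounting; it assumes vocab-parallel word embeddings, a learned_absolute position-embedding table replicated per rank with seq_length (98304) rows, and the bias/LayerNorm tensors listed in the gradient bucket above.

    # Rough per-TP-rank parameter count for this run (TP=8).
    tp, hidden, ffn, layers = 8, 4096, 16384, 2
    heads, groups, head_dim = 64, 16, 64
    vocab_padded, positions = 51200, 98304

    word_emb = vocab_padded * hidden // tp                      # vocab-parallel shard: 26,214,400
    pos_emb = positions * hidden                                # replicated table:    402,653,184
    qkv = hidden * (heads + 2 * groups) * head_dim // tp        # fused QKV weight shard
    qkv_bias = (heads + 2 * groups) * head_dim // tp
    proj = (heads * head_dim // tp) * hidden + hidden           # attention output proj + bias
    fc1 = hidden * ffn // tp + ffn // tp                        # MLP fc1 weight + bias shard
    fc2 = (ffn // tp) * hidden + hidden                         # MLP fc2 weight + bias
    norms = 2 * (2 * hidden)                                    # qkv and fc1 layer norms (weight + bias)
    per_layer = qkv + qkv_bias + proj + fc1 + fc2 + norms
    total = word_emb + pos_emb + layers * per_layer + 2 * hidden  # + final layernorm
    print(total)                                                # 472970752, matching the log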
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=98304, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
+INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
+DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004812 seconds
+INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 693
+INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
+DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001667 seconds
+INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 693
+INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
+DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001369 seconds
+INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 694
+INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+> finished creating GPT datasets ...
+[after dataloaders are built] datetime: 2025-06-21 21:29:16
+done with setup ...
+(min, max) time across ranks (ms):
+    model-and-optimizer-setup ......................: (4368.45, 4373.20)
+    train/valid/test-data-iterators-setup ..........: (15.58, 103.30)
+training ...
+Setting rerun_state_machine.current_iteration to 0...
+[before the start of training step] datetime: 2025-06-21 21:29:16
+WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4608.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 134.47 GiB is free. Including non-PyTorch memory, this process has 5.34 GiB memory in use. Of the allocated memory 3.64 GiB is allocated by PyTorch, and 247.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=131072, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 131072 +TP_SIZE: 8 +CP_SIZE: 1 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... 
None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 1 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 
10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 131072 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 8 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 
224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 8 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. 
cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 131072 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... 
allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... 
True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 131072 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... 
False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 8 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. 
False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 8 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +> initialized tensor model parallel with size 8 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.049 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +>>> done with compiling and loading fused kernels. Compilation time: 2.813 seconds +time to initialize megatron (seconds): 7.692 +[after megatron is initialized] datetime: 2025-06-21 21:29:52 +building GPT model ... 
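The failed training step earlier in this log tried to allocate 4608.00 GiB for an attention mask created by setup_batches in pretrain_gpt_profile.py (the call itself is not shown here), against GPUs with 139.81 GiB of capacity. Whatever the exact tensor shape, a dense seq x seq mask grows quadratically with context length: 4608 GiB works out to 512 bytes per (query, key) position pair at seq_length 98304, which could come from, for example, 8 samples x 16 channels in float32 or 8 samples x 64 channels in bool, though those decompositions are only guesses. A rough sizing sketch under assumed shapes and dtypes, including the 131072 run configured above:

    # Dense attention-mask memory grows quadratically in sequence length.
    GIB = 1024 ** 3

    def mask_gib(batch, channels, seq, dtype_size):
        # bytes = batch * channels * seq * seq * dtype_size
        return batch * channels * seq * seq * dtype_size / GIB

    print(mask_gib(8, 1, 98304, 1))    # ~72 GiB   (bool, one channel per sample)
    print(mask_gib(8, 1, 98304, 4))    # ~288 GiB  (float32)
    print(mask_gib(8, 1, 131072, 4))   # ~512 GiB  (float32 at the next context length)

Even the smallest of these exceeds the 139.81 GiB reported per GPU, so any fix presumably has to avoid materializing the dense mask in setup_batches rather than tune the allocator.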
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
+INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
+Params for bucket 1 (607188480 elements, 607188480 padded size):
+ module.decoder.layers.1.mlp.linear_fc1.bias
+ module.decoder.layers.0.mlp.linear_fc2.weight
+ module.decoder.layers.0.mlp.linear_fc1.bias
+ module.decoder.final_layernorm.bias
+ module.decoder.layers.1.self_attention.linear_qkv.weight
+ module.decoder.layers.1.self_attention.linear_proj.weight
+ module.decoder.layers.0.self_attention.linear_qkv.bias
+ module.decoder.layers.1.mlp.linear_fc2.weight
+ module.decoder.layers.1.self_attention.linear_proj.bias
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.final_layernorm.weight
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_proj.weight
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.bias
+ module.decoder.layers.0.mlp.linear_fc2.bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_proj.bias
+ module.embedding.word_embeddings.weight
+ module.decoder.layers.1.mlp.linear_fc1.weight
+ module.decoder.layers.0.mlp.linear_fc1.weight
+ module.decoder.layers.1.mlp.linear_fc2.bias
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.weight
+ module.embedding.position_embeddings.weight
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
+INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='')
+INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
+(TP, PP, encoder TP, encoder PP) mismatch after resume ((8, 1, 0, 0) vs (4, 1, 0, 0) from checkpoint): RNG state will be ignored
+(TP, PP, encoder TP, encoder PP) mismatch after resume ((8, 1, 0, 0) vs (4, 1, 0, 0) from checkpoint): Rerun state will be ignored
+ loading distributed checkpoint from gpt-checkpoint at iteration 10
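The per-rank parameter count grows from 472970752 in the seq_length 98304 run to 607188480 in the 131072 run. The difference is exactly the extra rows of the learned_absolute position-embedding table, which (as the gradient-bucket listings above suggest) appears to be replicated on every tensor-parallel rank rather than sharded; a quick check of that difference:

    # The per-rank parameter delta between the two runs equals the extra
    # rows of the replicated position-embedding table (assumption: the rest
    # of the model is unchanged between runs).
    hidden = 4096
    extra_positions = 131072 - 98304          # 32768 additional positions
    print(extra_positions * hidden)           # 134217728
    print(607188480 - 472970752)              # 134217728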