diff --git "a/attnserver.run_attnserver.slurm.sh.343240.out.log" "b/attnserver.run_attnserver.slurm.sh.343240.out.log" --- "a/attnserver.run_attnserver.slurm.sh.343240.out.log" +++ "b/attnserver.run_attnserver.slurm.sh.343240.out.log" @@ -856,3 +856,15149 @@ batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192]) batch tensor after cp: position_ids torch.Size([8, 1024]) Start exporting trace 0 Done exporting trace 0 +Number of parameters in transformer block in billions: 0.35 +Number of parameters in embedding layers in billions: 0.21 +Total number of parameters in billions: 0.56 +Number of parameters in most loaded shard in billions: 0.2795 +Theoretical memory footprints: weight and optimizer=4797.35 MB +[Rank 3] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5990.0 | max reserved: 5990.0 +[Rank 0] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5962.0 | max reserved: 5962.0 + [2025-06-21 21:59:36] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 18957.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +[Rank 2] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5926.0 | max reserved: 5926.0 +[Rank 6] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5928.0 | max reserved: 5928.0 +[Rank 7] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5992.0 | max reserved: 5992.0 +[Rank 15] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5950.0 | max reserved: 5950.0 +[Rank 1] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5898.0 | max reserved: 5898.0 +[Rank 8] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5978.0 | max reserved: 5978.0 +[Rank 5] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5928.0 | max reserved: 5928.0 +[Rank 12] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 6026.0 | max reserved: 6026.0 +[Rank 4] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5928.0 | max reserved: 5928.0 +[Rank 11] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 5832.0 | max reserved: 5832.0 +[Rank 9] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 6042.0 | max reserved: 6042.0 +[Rank 13] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 6090.0 | max reserved: 6090.0 +[Rank 10] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 6024.0 | max reserved: 6024.0 +[Rank 14] (after 1 iterations) memory (MB) | allocated: 4090.73681640625 | max allocated: 5465.11181640625 | reserved: 6014.0 | max reserved: 6014.0 +batch tensor: tokens torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 8192]) +batch tensor: loss_mask torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
+Start exporting trace 1
+Done exporting trace 1
+ [2025-06-21 21:59:36] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 125.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
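The shape dumps above show what the context-parallel (cp) split does: with cp size 8, each rank keeps 8192 / 8 = 1024 sequence positions of tokens, labels, loss_mask, and position_ids, while attention_mask is sliced only along the query dimension, going from [8, 1, 8192, 8192] to [8, 1, 1024, 8192]. A minimal sketch of that slicing, assuming a plain contiguous per-rank chunk (Megatron-LM's actual CP split interleaves chunks per rank for causal load balancing, so this is illustrative only):

import torch

def split_batch_for_cp(batch, cp_size, cp_rank):
    # Illustrative contiguous split along the sequence dimension.
    out = {}
    for name, t in batch.items():
        if name == "attention_mask":
            chunk = t.shape[2] // cp_size          # slice queries, keep all keys
            out[name] = t[:, :, cp_rank * chunk:(cp_rank + 1) * chunk, :]
        else:
            chunk = t.shape[1] // cp_size
            out[name] = t[:, cp_rank * chunk:(cp_rank + 1) * chunk]
    return out

batch = {
    "tokens": torch.zeros(8, 8192, dtype=torch.long),
    "attention_mask": torch.ones(8, 1, 8192, 8192, dtype=torch.bool),
}
for name, t in split_batch_for_cp(batch, cp_size=8, cp_rank=0).items():
    print("batch tensor after cp:", name, t.shape)
# tokens -> torch.Size([8, 1024]); attention_mask -> torch.Size([8, 1, 1024, 8192])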
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
+Start exporting trace 2
+Done exporting trace 2
+ [2025-06-21 21:59:36] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 88.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
+Start exporting trace 3
+Done exporting trace 3
+ [2025-06-21 21:59:37] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 88.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
+Start exporting trace 4
+Done exporting trace 4
+ [2025-06-21 21:59:37] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 90.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
+Start exporting trace 5
+Done exporting trace 5
+ [2025-06-21 21:59:37] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 90.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
+Start exporting trace 6
+Done exporting trace 6
+ [2025-06-21 21:59:37] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 88.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
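Every iteration so far reports one skipped iteration, and the loss scale halves each time: 4294967296.0 (2^32) at iteration 1 down to 67108864.0 (2^26) here. That is the usual dynamic loss-scaling backoff when fp16 gradients overflow; a minimal sketch under that assumption (factor-2 backoff, initial scale 2^32 as in the log), not Megatron-LM's exact scaler implementation:

# Hedged sketch of dynamic loss-scale backoff matching the logged values.
scale = 2.0 ** 32                 # initial scale reported at iteration 1
for it in range(1, 9):
    print(f"iteration {it}: loss scale {scale:.1f}")
    grads_overflowed = True       # every logged iteration reports 1 skipped iteration
    if grads_overflowed:
        scale /= 2                # skip the optimizer step and back off the scale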
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
[the same "batch tensor" / "batch tensor after cp" blocks from the remaining ranks were printed here interleaved and are omitted; the shapes are identical on every rank]
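The repeated "batch tensor" / "batch tensor after cp" prints show each rank loading the full [8, 8192] batch and then keeping only its 1024-token slice of the sequence dimension (context-parallel size 8); the attention mask keeps all 8192 key positions for its local 1024 queries. Below is a minimal sketch of that slicing, assuming a simple contiguous split per CP rank (Megatron-LM's actual chunk assignment may be load-balanced differently):

```python
import torch

def slice_batch_for_cp(batch, cp_rank, cp_size):
    """Keep only this context-parallel rank's slice of the sequence dimension.

    A contiguous split is assumed here; Megatron-LM's real context-parallel
    implementation may assign chunks differently (e.g. load-balanced halves).
    """
    seq_len = batch["tokens"].size(1)      # 8192 in this log
    chunk = seq_len // cp_size             # 1024 with cp_size = 8
    s, e = cp_rank * chunk, (cp_rank + 1) * chunk

    sliced = {}
    for name, t in batch.items():
        if name == "attention_mask":
            # queries are sliced, keys stay full:
            # [b, 1, 8192, 8192] -> [b, 1, 1024, 8192]
            sliced[name] = t[:, :, s:e, :]
        else:
            # tokens / labels / loss_mask / position_ids: [b, 8192] -> [b, 1024]
            sliced[name] = t[:, s:e]
    return sliced

if __name__ == "__main__":
    b, s = 1, 8192                         # the log uses b=8; b=1 keeps the demo small
    batch = {
        "tokens": torch.zeros(b, s, dtype=torch.long),
        "labels": torch.zeros(b, s, dtype=torch.long),
        "loss_mask": torch.ones(b, s),
        "attention_mask": torch.ones(b, 1, s, s, dtype=torch.bool),
        "position_ids": torch.arange(s).unsqueeze(0).expand(b, -1),
    }
    for name, t in slice_batch_for_cp(batch, cp_rank=0, cp_size=8).items():
        print("batch tensor after cp:", name, t.shape)
```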
+Start exporting trace 7
+Done exporting trace 7
+ [2025-06-21 21:59:37] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 90.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
[further interleaved "batch tensor" / "batch tensor after cp" prints from the remaining ranks are omitted; the shapes are identical to the block above]
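Every iteration line reports learning rate 0, one skipped iteration, and a loss scale that halves from step to step (33554432 -> 16777216 -> 8388608). That is the usual fp16 dynamic loss-scaling behaviour: when the scaled gradients overflow, the optimizer step is skipped and the scale is backed off. A toy sketch of that policy, with illustrative constants rather than Megatron-LM's exact defaults:

```python
class DynamicLossScaler:
    """Toy dynamic loss scaler: back off on overflow (and skip the step),
    grow again after a window of overflow-free steps. Constants are
    illustrative, not Megatron-LM's exact defaults."""

    def __init__(self, init_scale=2.0 ** 32, backoff_factor=0.5,
                 growth_factor=2.0, growth_interval=1000):
        self.scale = init_scale
        self.backoff_factor = backoff_factor
        self.growth_factor = growth_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, found_overflow):
        """Return True if this iteration should be counted as skipped."""
        if found_overflow:
            self.scale *= self.backoff_factor   # 33554432 -> 16777216 -> 8388608 ...
            self._good_steps = 0
            return True
        self._good_steps += 1
        if self._good_steps % self.growth_interval == 0:
            self.scale *= self.growth_factor
        return False


scaler = DynamicLossScaler()
skipped = sum(scaler.update(found_overflow=True) for _ in range(10))
print(skipped, scaler.scale)   # 10 skipped steps, scale 2**32 / 2**10 = 4194304.0
```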
+Start exporting trace 8
+Done exporting trace 8
+ [2025-06-21 21:59:37] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 89.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
[the remaining ranks' interleaved "batch tensor" / "batch tensor after cp" prints for iteration 10 are omitted; the shapes are identical on every rank]
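The "Start exporting trace N" / "Done exporting trace N" pairs bracket a per-iteration trace dump. The script's profiler hooks are not visible in this log, but the pattern matches a per-step torch.profiler capture followed by a Chrome-trace export, roughly like the sketch below (the dummy train step and file names are hypothetical):

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(16, 16)

def train_step(step):
    # stand-in for one forward/backward of the real training loop
    model(torch.randn(4, 16)).sum().backward()

for it in range(3):
    with profile(activities=[ProfilerActivity.CPU]) as prof:
        train_step(it)
    print(f"Start exporting trace {it}")
    prof.export_chrome_trace(f"trace_{it}.json")  # viewable in chrome://tracing / Perfetto
    print(f"Done exporting trace {it}")
```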
+Start exporting trace 9
+Done exporting trace 9
+ [2025-06-21 21:59:37] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 89.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+[after training is done] datetime: 2025-06-21 21:59:37
+saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
+DEBUG:megatron.training.checkpointing:rank: 13, takes 0.023271799087524414 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 15, takes 0.0232241153717041 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 8, takes 0.023357152938842773 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 9, takes 0.024330854415893555 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 12, takes 0.024313688278198242 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 10, takes 0.024380207061767578 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 11, takes 0.025735855102539062 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 14, takes 0.026353836059570312 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 3, takes 0.03228759765625 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 6, takes 0.03271365165710449 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 2, takes 0.03276491165161133 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 4, takes 0.03284168243408203 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 5, takes 0.03342103958129883 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 7, takes 0.033429622650146484 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 0, takes 0.03574109077453613 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 1, takes 0.25612735748291016 to prepare state dict for ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save 
parallelization +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(184549376), 4), (np.int64(176322560), 5), (np.int64(176322560), 6), (np.int64(176316416), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(184549376), 4), (np.int64(176322560), 5), (np.int64(176322560), 6), (np.int64(176316416), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(184549376), 4), (np.int64(176322560), 5), (np.int64(176322560), 6), (np.int64(176316416), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(184549376), 4), (np.int64(176322560), 5), (np.int64(176322560), 6), (np.int64(176316416), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(184549376), 4), (np.int64(176322560), 5), (np.int64(176322560), 6), (np.int64(176316416), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(184549376), 4), (np.int64(176322560), 5), (np.int64(176322560), 6), (np.int64(176316416), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(184549376), 4), (np.int64(176322560), 5), (np.int64(176322560), 6), (np.int64(176316416), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(184549376), 4), (np.int64(176322560), 5), (np.int64(176322560), 6), (np.int64(176316416), 7)] +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.320328712463379 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3203139305114746 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3306617736816406 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.327301025390625 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3282692432403564 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3207573890686035 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3208184242248535 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3283421993255615 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3275701999664307 
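Each distribute_shards_to_ranks line lists (bytes, rank) pairs: the checkpoint shards of one replica group are spread over its 8 ranks so that no single rank writes much more than the others (about 413 MB on rank 0, roughly 168-207 MB elsewhere). A greedy least-loaded assignment reproduces that kind of split; the sketch below uses made-up shard sizes and ignores the placement constraints the real megatron.core.dist_checkpointing.exchange_utils code has to respect:

```python
import heapq

def distribute_shards_to_ranks(shard_sizes, num_ranks):
    """Greedy largest-first assignment of shard byte sizes to ranks.

    Illustrative only: the real exchange_utils implementation also has to
    respect which ranks actually hold each shard.
    """
    heap = [(0, rank) for rank in range(num_ranks)]   # (assigned_bytes, rank)
    heapq.heapify(heap)
    assignment = {rank: [] for rank in range(num_ranks)}
    for size in sorted(shard_sizes, reverse=True):
        assigned, rank = heapq.heappop(heap)          # currently least-loaded rank
        assignment[rank].append(size)
        heapq.heappush(heap, (assigned + size, rank))
    return assignment

# made-up shard sizes (bytes), not the ones from this run
sizes = [256 << 20, 192 << 20, 128 << 20, 128 << 20,
         96 << 20, 64 << 20, 64 << 20, 32 << 20]
for rank, shards in distribute_shards_to_ranks(sizes, num_ranks=8).items():
    print(f"rank {rank}: {sum(shards)} bytes in {len(shards)} shard(s)")
```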
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3208250999450684 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3208494186401367 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.349289894104004 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.17546796798706055 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3222103118896484 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of 
global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.599280834197998 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.6639323234558105 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, plan time: 0.0024149417877197266 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6066558 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.775161743164062e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.3056025505065918 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, plan time: 0.3061237335205078 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6068091 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6069102 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.3059825897216797 
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, plan time: 0.06834888458251953 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.700920104980469e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6068866 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.487701416015625e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.390975952148438e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6070035 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.3062562942504883 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.131431579589844e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.607053 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.3062117099761963 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, plan time: 0.3061189651489258 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6070848 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.607668 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 4.935264587402344e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, plan time: 0.30551958084106445 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.3044624328613281 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.53131103515625e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6077135 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.653236389160156e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6071675 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, plan time: 0.3061554431915283 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.30651140213012695 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00011086463928222656 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.3031790256500244 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6078386 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.250640869140625e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.510185241699219e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6072633 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, plan time: 0.3064897060394287 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6072898 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.125999450683594e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.607987 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.747245788574219e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, plan time: 0.30599069595336914 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.30677008628845215 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.6080415 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543179.610695 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.655952453613281e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.221366882324219e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.0001163482666015625 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05309796333312988 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6606731 rank: 15, write(async) time: 0.05366921424865723 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.053915977478027344 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6622093 rank: 13, write(async) time: 0.05436849594116211 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05554986000061035 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.054414987564086914 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05459475517272949 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6628773 rank: 6, write(async) time: 0.05598783493041992 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.662975 rank: 12, write(async) time: 0.05493426322937012 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6630921 rank: 9, write(async) time: 0.055103302001953125 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05654764175415039 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.056066036224365234 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6642404 rank: 10, write(async) time: 0.056528568267822266 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05677318572998047 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6643422 rank: 1, write(async) time: 0.057050228118896484 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6644542 rank: 5, write(async) time: 0.057286739349365234 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.057936906814575195 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05762338638305664 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6650276 rank: 14, write(async) time: 0.058373212814331055 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6648986 rank: 7, write(async) time: 0.058088064193725586 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05966377258300781 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6670218 rank: 8, write(async) time: 
0.06011080741882324 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.059771060943603516 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6672406 rank: 3, write(async) time: 0.06018543243408203 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06017112731933594 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6682892 rank: 11, write(async) time: 0.060622215270996094 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.061412811279296875 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6689196 rank: 4, write(async) time: 0.061830997467041016 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06159806251525879 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6692865 rank: 2, write(async) time: 0.06202507019042969 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06383800506591797 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543179.6749601 rank: 0, write(async) time: 0.06426644325256348 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 1.7642974853515625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 2.1457672119140625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 2.5033950805664062e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 1.7404556274414062e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 1.7404556274414062e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 2.002716064453125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 1.8358230590820312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 1.6689300537109375e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 3.266334533691406e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 1.8835067749023438e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 1.9073486328125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 1.8596649169921875e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 0.0369563102722168 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 0.033365726470947266 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 0.03654360771179199 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 0.041193485260009766 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 0.03406476974487305 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.03497624397277832 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 0.03619670867919922 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 
0.03723955154418945 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 0.03961825370788574 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.037691354751586914 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 0.04391789436340332 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 2.4318695068359375e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 2.09808349609375e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.03865361213684082 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.0386815071105957 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.03980112075805664 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
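The checkpoint write is asynchronous: each rank first copies its shard to host memory ("finish D2H"), then hands the write off to forked workers ("schedule async ckpt", "0/1 started"), and only later joins them ("joining self.process"). A minimal sketch of that pattern with multiprocessing and torch.save, assuming a plain one-file-per-rank layout rather than the torch_dist format used here:

```python
import multiprocessing as mp
import time
import torch

def _write_worker(cpu_state, path):
    # runs in the forked process: serialize the host-side copy to disk
    torch.save(cpu_state, path)

def schedule_async_save(state_dict, path):
    t0 = time.time()
    # D2H: detach and copy every tensor off the accelerator before forking
    cpu_state = {k: (v.detach().cpu() if torch.is_tensor(v) else v)
                 for k, v in state_dict.items()}
    print(f"takes {time.time() - t0:.6f} to finish D2H")
    proc = mp.Process(target=_write_worker, args=(cpu_state, path))
    proc.start()
    print(f"takes {time.time() - t0:.6f} to schedule async ckpt")
    return proc

if __name__ == "__main__":
    model = torch.nn.Linear(8, 8)
    writer = schedule_async_save(model.state_dict(), "ckpt_rank0.pt")
    # ... training could continue here while the worker writes ...
    writer.join()   # the "joining self.process" step in the log
```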
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 1.9788742065429688e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.03731107711791992 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 1.9311904907226562e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 38912000, before: 1707008000, after: 1745920000 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 47108096, before: 1728753664, after: 1775861760 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 38924288, before: 1716527104, after: 1755451392 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 360448, before: 1720184832, after: 1720545280 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.04810357093811035 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55439360, before: 1708015616, after: 1763454976 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55263232, before: 1726062592, after: 1781325824 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55508992, before: 1698770944, after: 1754279936 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 54947840, before: 1760755712, after: 1815703552 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55652352, before: 1704599552, after: 1760251904 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97427456, before: 1701572608, after: 1799000064 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 331776, before: 2022256640, after: 2022588416 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97275904, before: 1723715584, after: 1820991488 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 89042944, before: 1701564416, after: 1790607360 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97386496, before: 1702658048, after: 1800044544 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 89022464, before: 1702658048, after: 1791680512 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 88330240, before: 1703583744, after: 1791913984 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.094605, rank: 13, write(sync,parallel): 0.3220207691192627 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139341824, before: 1726062592, after: 1865404416 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139182080, before: 1708093440, after: 1847275520 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139186176, before: 1728753664, after: 1867939840 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97517568, before: 1703575552, after: 1801093120 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139300864, before: 1698762752, after: 1838063616 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 89083904, before: 1723715584, after: 1812799488 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.1135445, rank: 12, write(sync,parallel): 0.3306441307067871 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139489280, before: 1716498432, after: 1855987712 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.41s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.1437097, rank: 10, write(sync,parallel): 0.37374448776245117 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.1453407, rank: 15, write(sync,parallel): 0.3557713031768799 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139419648, before: 1704599552, after: 1844019200 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.1547217, rank: 7, write(sync,parallel): 0.36817336082458496 
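The leading float on the write(sync,parallel) lines (e.g. 1750543180.094605) looks like a Unix epoch timestamp as produced by time.time(); assuming that is what it is, it decodes to the same date as this run:

    from datetime import datetime, timezone

    ts = 1750543180.094605   # leading float on the rank-13 write(sync,parallel) line
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # 2025-06-21T21:59:40.094605+00:00 (UTC; the bracketed log timestamps may be local time)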
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.1556787, rank: 6, write(sync,parallel): 0.3718397617340088 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139444224, before: 1707008000, after: 1846452224 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.1618023, rank: 5, write(sync,parallel): 0.374830961227417 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.42s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139374592, before: 1760755712, after: 1900130304 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.1661906, rank: 14, write(sync,parallel): 0.38446593284606934 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.1700318, rank: 9, write(sync,parallel): 0.3951709270477295 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.45s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.45s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.205049, rank: 4, write(sync,parallel): 0.4078245162963867 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.46s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.47s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.2085748, rank: 11, write(sync,parallel): 0.4335043430328369 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.47s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.47s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.2171366, rank: 8, write(sync,parallel): 0.4330437183380127 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.49s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.52s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 211628032, before: 1701736448, after: 1913364480 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 211804160, before: 1706016768, after: 1917820928 
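The per-rank write(sync,parallel) durations reported so far can be summarized the same way the trainer later reports "(min, max) time across ranks". A small sketch over values transcribed (and rounded) from the DEBUG lines above:

    # Per-rank write(sync,parallel) times in seconds, rounded from the lines above.
    write_times = {
        13: 0.322, 12: 0.331, 10: 0.374, 15: 0.356,
        7: 0.368, 6: 0.372, 5: 0.375, 14: 0.384,
        9: 0.395, 4: 0.408, 11: 0.434, 8: 0.433,
    }

    vals = list(write_times.values())
    print(f"min={min(vals):.3f}s  max={max(vals):.3f}s  mean={sum(vals) / len(vals):.3f}s")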
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.51s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.53s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.2979019, rank: 3, write(sync,parallel): 0.4721968173980713 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.2984269, rank: 2, write(sync,parallel): 0.46985673904418945 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.56s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.56s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 417828864, before: 1720184832, after: 2138013696 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.7256267, rank: 1, write(sync,parallel): 0.8263847827911377 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 417726464, before: 2022256640, after: 2439983104 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.91s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543180.795968, rank: 0, write(sync,parallel): 0.8255836963653564 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.92s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8428204, 7, gather: 0.6336498260498047 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8429205, 6, gather: 0.6350600719451904 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.842968, 5, gather: 0.628260612487793 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8436444, 9, gather: 0.6214404106140137 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8430374, 1, gather: 0.0762026309967041 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8437371, 12, gather: 0.6797616481781006 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8430424, 4, gather: 0.5853049755096436 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8437443, 10, gather: 0.6621177196502686 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.843306, 2, gather: 0.5029470920562744 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.843862, 14, gather: 0.6272735595703125 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8432958, 3, gather: 0.5033347606658936 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8438947, 15, gather: 
0.6512620449066162 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8456612, 0, gather: 0.005326271057128906 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8439662, 8, gather: 0.5742619037628174 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8439586, 13, gather: 0.702329158782959 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.8441777, 11, gather: 0.5915863513946533 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543180.903901, metadata_write: 0.058098793029785156 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0669s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1403s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.5671s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.5675s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6915s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7438s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6856s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6557s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6379s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7149s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7258s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7661s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6500s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6932s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7001s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6989s + successfully saved checkpoint from iteration 10 to gpt-checkpoint [ t 1/2, p 1/1 ] +DEBUG:megatron.training.checkpointing:rank: 0, takes 0.003509998321533203 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 1, takes 0.0035822391510009766 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 11, takes 0.003325939178466797 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 10, takes 0.003387928009033203 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 13, takes 0.0031714439392089844 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 8, takes 0.003497600555419922 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 15, takes 0.0031762123107910156 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 12, takes 0.00321197509765625 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 14, takes 0.0032160282135009766 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 9, takes 0.0034334659576416016 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 2, takes 0.003625154495239258 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 3, takes 0.0037457942962646484 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 4, takes 0.0038895606994628906 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 5, takes 0.003989696502685547 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 6, takes 0.00411224365234375 to finalize ckpt save 
+DEBUG:megatron.training.checkpointing:rank: 7, takes 0.004085063934326172 to finalize ckpt save
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+Evaluating on 1 samples
+Evaluating iter 1/1
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
[... the same batch tensor shape printouts repeat, interleaved, for the remaining ranks ...]
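The "batch tensor" vs. "batch tensor after cp" printouts show each context-parallel rank keeping seq_len / cp_size of the sequence dimension (8192 -> 1024 here) while the attention mask keeps its full key length. A minimal sketch of that kind of slicing, assuming a plain contiguous split per rank; Megatron's actual context-parallel sharding interleaves chunks for load balancing, so the indices differ, but the resulting shapes are the same:

    import torch

    cp_size, cp_rank = 8, 0                 # illustrative values
    b, s = 8, 8192
    chunk = s // cp_size                    # 1024 sequence positions per CP rank

    tokens = torch.randint(0, 50432, (b, s))
    # Full attention mask as printed above; note this alone is ~512 MB of bools.
    attention_mask = torch.ones(b, 1, s, s, dtype=torch.bool)

    rows = slice(cp_rank * chunk, (cp_rank + 1) * chunk)
    tokens_cp = tokens[:, rows]                          # torch.Size([8, 1024])
    attention_mask_cp = attention_mask[:, :, rows, :]    # torch.Size([8, 1, 1024, 8192])
    print(tokens_cp.shape, attention_mask_cp.shape)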
+Start exporting trace 10
+Done exporting trace 10
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+(min, max) time across ranks (ms):
+ evaluate .......................................: (2973.01, 2975.26)
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+----------------------------------------------------------------------------------------------------------------
+ validation loss at iteration 10 on validation set | lm loss value: 1.135926E+01 | lm loss PPL: 8.575593E+04 |
+----------------------------------------------------------------------------------------------------------------
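The reported "lm loss PPL" is consistent with perplexity being the exponential of the language-model loss, up to display rounding:

    import math

    lm_loss = 1.135926e+01
    print(f"{math.exp(lm_loss):.6e}")   # ~8.5755e+04, matching the reported lm loss PPL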
+WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+Evaluating on 1 samples
+Evaluating iter 1/1
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask torch.Size([8, 8192])
+batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
+batch tensor: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 1024])
+batch tensor after cp: labels torch.Size([8, 1024])
+batch tensor after cp: loss_mask torch.Size([8, 1024])
+batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
+batch tensor after cp: position_ids torch.Size([8, 1024])
[... the same batch tensor shape printouts repeat, interleaved, for the remaining ranks ...]
+batch tensor: tokens torch.Size([8, 8192])
+batch tensor: labels torch.Size([8, 8192])
+batch tensor: loss_mask
torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 8192, 8192]) +batch tensor: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 1024]) +batch tensor after cp: labels torch.Size([8, 1024]) +batch tensor after cp: loss_mask torch.Size([8, 1024]) +batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192]) +batch tensor after cp: position_ids torch.Size([8, 1024]) +Start exporting trace 11 +Done exporting trace 11 +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +(min, max) time across ranks (ms): + evaluate .......................................: (41.27, 42.23) +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +---------------------------------------------------------------------------------------------------------- + validation loss at iteration 10 on test set | lm loss value: 1.135926E+01 | lm loss PPL: 8.575593E+04 | +---------------------------------------------------------------------------------------------------------- +Running ctx_length=2048, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 2048 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 2048 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... 
None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 8 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. 
None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 2048 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 2 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 
1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 16 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . 
False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 2048 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... 
None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ 
None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 
0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 2048 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 2 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 
10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 16 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... 
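The 175 dummy tokens above come from rounding the GPT2BPETokenizer vocab (50257) up to a multiple of make-vocab-size-divisible-by times the tensor-parallel size, so that the embedding table splits evenly across TP ranks. A minimal sketch of that rule, assuming the default make-vocab-size-divisible-by of 128 together with the tensor_model_parallel_size of 2 listed in the arguments:

# Sketch of the vocab-padding rule (assumed defaults), not the Megatron source.
def pad_vocab(orig_size: int = 50257,
              make_vocab_size_divisible_by: int = 128,
              tensor_model_parallel_size: int = 2):
    multiple = make_vocab_size_divisible_by * tensor_model_parallel_size  # 256
    padded = ((orig_size + multiple - 1) // multiple) * multiple
    return padded, padded - orig_size

print(pad_vocab())  # (50432, 175) -- matches "padded vocab ... (new size: 50432)"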
+INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 2 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.055 seconds +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 5.237 seconds +time to initialize megatron (seconds): 14.344 +[after megatron is initialized] datetime: 2025-06-21 22:00:28 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 287913984 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 287913984 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 287913984 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 287913984 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 287913984 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 287913984 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 287913984 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 287913984 +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 287913984 +>>> embedding + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 287913984 +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 287913984 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 287913984 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 287913984 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 287913984 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 287913984 
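Every rank reports the same per-shard count (287913984) because pipeline parallelism is 1 and tensor parallelism (size 2) splits each layer evenly. Below is a rough arithmetic cross-check against the totals printed at the first iteration log; it is only a sketch using the logged numbers, and the per-rank count comes out slightly larger than total/TP because layer norms, biases, and position embeddings are replicated on both TP ranks rather than sharded:

# Arithmetic sketch only; the inputs are the numbers printed in this log.
per_rank_params = 287_913_984             # per (tensor, pipeline) rank
tp = 2                                    # tensor_model_parallel_size
embedding_b, transformer_b = 0.21, 0.35   # billions, from the iteration-1 report

print(per_rank_params * tp / 1e9)          # ~0.576 vs. reported total 0.56
                                           # (replicated params counted twice)
print((embedding_b + transformer_b) / tp)  # 0.28 ~ "most loaded shard" 0.2795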
+INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +>>> embedding +>>> decoder +>>> output_layer +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (287913984 elements, 287913984 padded size): + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.bias + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 287913984 + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.embedding.word_embeddings.weight + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.final_layernorm.weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') 
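With fp16=True, Adam, and use_distributed_optimizer=False as in the OptimizerConfig above, mixed-precision training keeps roughly 18 bytes of weight and optimizer state per parameter, which lines up with the "Theoretical memory footprints: weight and optimizer=4797.35 MB" figure reported at the first iteration. A back-of-the-envelope sketch; the exact per-component byte breakdown is an assumption:

# Rough cross-check, not Megatron's own accounting code.
params_most_loaded_shard = 0.2795e9   # "Number of parameters in most loaded shard"
bytes_per_param = 2 + 4 + 4 + 4 + 4   # fp16 weight + fp32 main weight + fp32 main grad
                                      # + Adam exp_avg + exp_avg_sq (assumed split)
print(params_most_loaded_shard * bytes_per_param / 1024**2)  # ~4798 MB vs. 4797.35 MB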
+INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (137.70, 137.96) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:00:29 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... +INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=2048, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005506 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 33296 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.002444 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 33281 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.002495 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 33343 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 22:00:29 +done with setup ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (405.86, 424.83) + train/valid/test-data-iterators-setup ..........: (18.04, 178.41) +training ... 
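The "batch tensor" / "batch tensor after cp" pairs printed below show each rank building the full 16384-token sequence and then keeping only a 2048-token slice of the query dimension, while the attention mask retains the full 16384-token key dimension. A minimal sketch of how those shapes can arise, assuming a context-parallel size of 8 (16384 / 2048); the contiguous slice is for illustration only, since the actual split may interleave chunks across ranks:

# Shape sketch only; not the training script's own batch-slicing code.
import torch

b, s, cp_size, cp_rank = 8, 16384, 8, 0
chunk = s // cp_size                                # 2048 tokens kept per cp rank
sl = slice(cp_rank * chunk, (cp_rank + 1) * chunk)

tokens = torch.zeros(b, s, dtype=torch.long)
position_ids = torch.arange(s).unsqueeze(0).expand(b, s)

tokens_cp = tokens[:, sl]               # torch.Size([8, 2048])
position_ids_cp = position_ids[:, sl]   # torch.Size([8, 2048])
# The [8, 1, 16384, 16384] attention_mask is sliced on the query dim only,
# giving torch.Size([8, 1, 2048, 16384]) as printed after cp.
print(tokens_cp.shape, position_ids_cp.shape)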
+Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 22:00:29 +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: 
attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens 
torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+Start exporting trace 0
+Done exporting trace 0
+Number of parameters in transformer block in billions: 0.35
+Number of parameters in embedding layers in billions: 0.21
+Total number of parameters in billions: 0.56
+Number of parameters in most loaded shard in billions: 0.2795
+Theoretical memory footprints: weight and optimizer=4797.35 MB
+ [2025-06-21 22:00:43] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 14046.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+[Rank 5] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10732.0 | max reserved: 10732.0
+[Rank 0] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10642.0 | max reserved: 10642.0
+[Rank 13] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 11100.0 | max reserved: 11100.0
+[Rank 3] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10720.0 | max reserved: 10720.0
+[Rank 8] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10878.0 | max reserved: 10878.0
+[Rank 2] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10720.0 | max reserved: 10720.0
+[Rank 4] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10796.0 | max reserved: 10796.0
+[Rank 9] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 11006.0 | max reserved: 11006.0
+[Rank 1] (after 1 iterations) memory
(MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10642.0 | max reserved: 10642.0 +[Rank 14] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10816.0 | max reserved: 10816.0 +[Rank 6] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10752.0 | max reserved: 10752.0 +[Rank 15] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10816.0 | max reserved: 10816.0 +[Rank 7] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10816.0 | max reserved: 10816.0 +[Rank 12] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10972.0 | max reserved: 10972.0 +[Rank 10] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 10880.0 | max reserved: 10880.0 +[Rank 11] (after 1 iterations) memory (MB) | allocated: 5868.70556640625 | max allocated: 9883.75244140625 | reserved: 11008.0 | max reserved: 11008.0 +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) 
+batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch 
tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 1 +Done exporting trace 1 + [2025-06-21 22:00:43] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 268.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) 
+batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: 
tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: 
position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 2 +Done exporting trace 2 + [2025-06-21 22:00:43] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 209.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor 
after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask 
torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 3 +Done exporting trace 3 + [2025-06-21 22:00:44] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 206.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: 
position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch 
tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 4 +Done exporting trace 4 + [2025-06-21 22:00:44] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 207.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 
2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8,
16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 5 +Done exporting trace 5 + [2025-06-21 22:00:44] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 203.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan 
iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids 
torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: 
position_ids torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+Start exporting trace 6
+Done exporting trace 6
+ [2025-06-21 22:00:44] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 208.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: loss_mask torch.Size([8,
2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 7 +Done exporting trace 7 + 
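The iteration lines above report "number of skipped iterations: 1" every step, with the loss scale halving each time (268435456 -> 134217728 -> 67108864 -> ...): each step hits an fp16 gradient overflow, so the optimizer step is skipped and the grad scaler backs off. Below is a minimal sketch of that halve-on-overflow behaviour, assuming the common grow-after-N-clean-steps scheme; the class name and constants are illustrative, not Megatron's exact implementation.

```python
# Minimal sketch of dynamic fp16 loss scaling consistent with the iteration lines
# above: every overflow marks the step as "skipped" and halves the scale. The
# 2**32 starting value and growth_interval are assumptions for illustration.
class DynamicLossScaler:
    def __init__(self, init_scale: float = 2.0**32, growth_interval: int = 1000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, found_overflow: bool) -> bool:
        """Return True if the optimizer step should be skipped."""
        if found_overflow:
            self.scale = max(self.scale / 2.0, 1.0)
            self._good_steps = 0
            return True
        self._good_steps += 1
        if self._good_steps % self.growth_interval == 0:
            self.scale *= 2.0  # grow back after a run of clean steps
        return False

scaler = DynamicLossScaler()
for it in range(1, 11):
    print(f"iteration {it}: loss scale {scaler.scale:.1f}")
    scaler.update(found_overflow=True)  # the log reports an overflow every iteration
```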
[2025-06-21 22:00:44] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 206.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 
16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: loss_mask 
torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 8 +Done exporting trace 8 + [2025-06-21 22:00:45] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 207.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids 
torch.Size([8, 2048])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels
torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) 
+batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 9 +Done exporting trace 9 + [2025-06-21 22:00:45] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 209.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +[after training is done] datetime: 2025-06-21 22:00:45 +saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format +DEBUG:megatron.training.checkpointing:rank: 5, takes 0.029292583465576172 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 1, takes 0.029306650161743164 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 3, takes 0.029332637786865234 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 7, takes 0.029362201690673828 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 2, takes 0.029848814010620117 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 0, takes 0.029911041259765625 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 6, takes 0.029929399490356445 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 4, takes 0.029963254928588867 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 9, takes 0.03573942184448242 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 11, takes 0.03575611114501953 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 13, takes 0.035782814025878906 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 15, takes 0.035851478576660156 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 8, takes 0.03634047508239746 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 12, takes 0.0363619327545166 to prepare state dict for ckpt +WARNING:megatron.core.dist_checkpointing.serialization:Overwriting old incomplete / corrupted checkpoint... 
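The "torch_dist format" mentioned above is the distributed checkpoint layout in which every rank writes its own shards cooperatively instead of funneling the full state dict through rank 0. A hedged sketch of the underlying PyTorch call follows, assuming PyTorch >= 2.2 with torch.distributed.checkpoint; it is not Megatron's megatron.core.dist_checkpointing wrapper (which layers the fully parallel and async strategies seen in the DEBUG lines below on top), and the path and state-dict keys are illustrative.

```python
# Hedged sketch of the PyTorch layer underneath a "torch_dist" checkpoint: all
# ranks call torch.distributed.checkpoint.save() collectively and the shards
# land in one checkpoint directory. Path and keys are invented for illustration.
import torch.distributed as dist
import torch.distributed.checkpoint as dcp

def save_torch_dist(model, optimizer, path: str = "gpt-checkpoint/iter_0000010"):
    state_dict = {
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }
    dcp.save(state_dict, checkpoint_id=path)  # collective call across all ranks

# Usage, inside an already-initialized torch.distributed job:
#   dist.init_process_group("nccl")
#   save_torch_dist(model, optimizer)
```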
+DEBUG:megatron.training.checkpointing:rank: 14, takes 0.03645443916320801 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 10, takes 0.03639078140258789 to prepare state dict for ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: 
[(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184631296), 2), (np.int64(184631296), 3), (np.int64(184627200), 4), (np.int64(184627200), 5), (np.int64(184629248), 6), (np.int64(184629248), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184631296), 2), (np.int64(184631296), 3), (np.int64(184627200), 4), (np.int64(184627200), 5), (np.int64(184629248), 6), (np.int64(184629248), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184631296), 2), (np.int64(184631296), 3), (np.int64(184627200), 4), (np.int64(184627200), 5), (np.int64(184629248), 6), (np.int64(184629248), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184631296), 2), (np.int64(184631296), 3), (np.int64(184627200), 4), (np.int64(184627200), 5), (np.int64(184629248), 6), (np.int64(184629248), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184631296), 2), (np.int64(184631296), 3), (np.int64(184627200), 4), (np.int64(184627200), 5), (np.int64(184629248), 6), (np.int64(184629248), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184631296), 2), (np.int64(184631296), 3), (np.int64(184627200), 4), (np.int64(184627200), 5), (np.int64(184629248), 6), (np.int64(184629248), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184631296), 2), (np.int64(184631296), 3), (np.int64(184627200), 4), (np.int64(184627200), 5), (np.int64(184629248), 6), (np.int64(184629248), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184631296), 2), (np.int64(184631296), 3), (np.int64(184627200), 4), (np.int64(184627200), 5), (np.int64(184629248), 6), (np.int64(184629248), 7)] +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3087272644042969 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3119885921478271 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.308880090713501 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3121426105499268 
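The distribute_shards_to_ranks lines report, per rank, how many bytes of checkpoint data it was assigned to write for the fully parallel save (for example 413138944 bytes to rank 0 and 206569472 to rank 1). The sketch below only illustrates the general greedy load-balancing idea under that name; it is a stand-in, not Megatron's actual implementation.

```python
# Illustrative greedy balancing in the spirit of distribute_shards_to_ranks:
# each shard goes to the rank with the fewest bytes assigned so far, so the
# per-rank byte totals stay as even as the shard granularity allows.
import heapq

def distribute_shards_to_ranks(shard_sizes, num_ranks):
    heap = [(0, rank) for rank in range(num_ranks)]  # (assigned bytes, rank)
    heapq.heapify(heap)
    assignment = {rank: [] for rank in range(num_ranks)}
    # Largest shards first gives the usual greedy (LPT) approximation.
    for shard_id, size in sorted(enumerate(shard_sizes), key=lambda s: -s[1]):
        load, rank = heapq.heappop(heap)
        assignment[rank].append(shard_id)
        heapq.heappush(heap, (load + size, rank))
    return assignment

# Hypothetical shard sizes in bytes; with 8 ranks each ends up with one shard.
sizes = [413138944, 206569472, 184549376, 184549376,
         167839744, 167839744, 176160768, 176160768]
print(distribute_shards_to_ranks(sizes, num_ranks=8))
```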
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3088090419769287 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.312162160873413 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.308791160583496 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3123221397399902 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.312394618988037 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3124923706054688 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3093278408050537 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3128104209899902 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3095858097076416 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3126869201660156 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.18627452850341797 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.3110616207122803 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed 
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.0056645870208740234 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.006020545959472656 
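The "verifying reuse of global metadata" / "no loaded plans passed" lines record a check on whether a previously loaded global save plan can be reused instead of rebuilt. As a hedged sketch of that kind of collective check only (can_reuse_cached_plan is an invented helper, not the Megatron or PyTorch API), each rank compares its current local plan with the cached one and the decision is agreed on across ranks:

import torch.distributed as dist

def can_reuse_cached_plan(local_plan, cached_local_plan):
    # Reuse only if every rank still produces the same local plan that the
    # cached global metadata was built from; otherwise rebuild the plan
    # (the "no loaded plans passed" outcome seen in the log).
    local_ok = cached_local_plan is not None and local_plan == cached_local_plan
    verdicts = [None] * dist.get_world_size()
    dist.all_gather_object(verdicts, local_ok)
    return all(verdicts)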
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.005293130874633789 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, plan time: 0.005640745162963867 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, plan time: 0.005769014358520508 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, plan time: 0.006407260894775391 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, plan time: 0.005183219909667969 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, plan time: 0.005203723907470703 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.005547761917114258 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.0034461021423339844 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9634542 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.005461215972900391 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.96347 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9634721 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.963594 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.963597 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9635985 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, plan time: 0.005716800689697266 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, plan time: 0.005635738372802734 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9636052 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.0017712116241455078 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9636073 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, plan time: 0.00662994384765625 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9634814 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.963481 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9634862 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9636295 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9636323 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9635067 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.318092346191406e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.1021575927734375e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9636402 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.508827209472656e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.984306335449219e-05 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.222724914550781e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.365776062011719e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.723403930664062e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.699562072753906e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.651878356933594e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.794929504394531e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.532669067382812e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.937980651855469e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.175041198730469e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.581710815429688e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.793571472167969e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.006831645965576172 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543246.9666731 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 4.649162292480469e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.052857160568237305 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0168622 rank: 13, write(async) time: 0.05326437950134277 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.055380821228027344 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.019243 rank: 6, write(async) time: 0.055771827697753906 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05576920509338379 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.019815 rank: 15, write(async) time: 0.05617547035217285 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.056203365325927734 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.020212 rank: 12, write(async) time: 0.056606292724609375 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05661273002624512 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.020476 rank: 5, write(async) time: 0.057022809982299805 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05683541297912598 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0208488 rank: 10, write(async) time: 0.05723905563354492 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05697822570800781 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0208774 rank: 3, write(async) time: 0.05739879608154297 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05768394470214844 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0215201 rank: 2, write(async) time: 0.058051109313964844 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and 
push, time: 0.058055877685546875 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0221314 rank: 14, write(async) time: 0.05849814414978027 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.058312177658081055 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0223386 rank: 11, write(async) time: 0.058745384216308594 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05850672721862793 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0225062 rank: 9, write(async) time: 0.05890989303588867 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.0590670108795166 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.059205055236816406 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0231423 rank: 1, write(async) time: 0.05963540077209473 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0231628 rank: 8, write(async) time: 0.05953407287597656 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06069445610046387 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0246122 rank: 4, write(async) time: 0.06112241744995117 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05896806716918945 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0260422 rank: 0, write(async) time: 0.05936622619628906 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06379270553588867 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543247.0277064 rank: 7, write(async) time: 0.06422615051269531 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 1.7642974853515625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 1.8358230590820312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 1.8835067749023438e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 1.8358230590820312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 0.0347132682800293 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 0.0347898006439209 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 2.384185791015625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 2.3365020751953125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 2.2649765014648438e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 2.1457672119140625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 0.03910470008850098 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 0.03821372985839844 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 1.7881393432617188e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 1.8358230590820312e-05 to finish D2H 
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 1.811981201171875e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 1.811981201171875e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.03606009483337402 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 0.0373377799987793 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.03748488426208496 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 2.6464462280273438e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.04275918006896973 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 2.0265579223632812e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 0.03681349754333496 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 0.03954148292541504 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 0.03605031967163086 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 0.03954887390136719 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.03512740135192871 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.038175106048583984 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
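The repeated "0 started" / "1 started" lines come from the per-rank write workers (thread_count: 2 earlier in the log), each draining its own bucket of items, after which the parent collects their results. A minimal sketch of that fan-out, assuming a caller-supplied write_bucket function (an illustrative name, not the real API):

from concurrent.futures import ThreadPoolExecutor

def write_buckets(buckets, write_bucket):
    # One worker per bucket; each worker's start corresponds to a
    # "<worker id> started" line, and gathering the futures corresponds to
    # "FileSystemWriterAsync: collecting worker results...".
    with ThreadPoolExecutor(max_workers=max(1, len(buckets))) as pool:
        futures = [pool.submit(write_bucket, worker_id, bucket)
                   for worker_id, bucket in enumerate(buckets)]
        return [f.result() for f in futures]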
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 2.384185791015625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.03924226760864258 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 2.2172927856445312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55627776, before: 1731215360, after: 1786843136 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55660544, before: 1746673664, after: 1802334208 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 38842368, before: 1739235328, after: 1778077696 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 38961152, before: 1744007168, after: 1782968320 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.04497981071472168 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97505280, before: 1752989696, after: 1850494976 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97296384, before: 1736687616, after: 1833984000 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
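The async_utils lines trace a three-step pattern on each rank: copy the shard to host memory ("finish D2H"), fork a writer so the file I/O happens off the training path ("schedule async ckpt"), and join that writer later ("joining self.process", "Async process join finished"). A minimal sketch of the same pattern, assuming PyTorch tensors; save_async and _writer are names invented here for illustration, not Megatron's API:

import multiprocessing as mp
import torch

def _writer(path, host_state):
    # Runs in the forked process: the blocking file write happens here.
    torch.save(host_state, path)

def save_async(path, state_dict):
    # Step 1: device-to-host copy, so GPU buffers are free to keep training.
    host_state = {k: (v.detach().cpu() if torch.is_tensor(v) else v)
                  for k, v in state_dict.items()}
    # Step 2: hand the host copy to a forked worker process.
    proc = mp.get_context("fork").Process(target=_writer, args=(path, host_state))
    proc.start()
    # Step 3: the caller keeps the handle and joins it once the write is done.
    return proc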
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 24576, before: 1741086720, after: 1741111296 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55574528, before: 1750487040, after: 1806061568 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55422976, before: 1791754240, after: 1847177216 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 89108480, before: 1752989696, after: 1842098176 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97521664, before: 1744879616, after: 1842401280 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 88907776, before: 1736687616, after: 1825595392 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 105795584, before: 1726124032, after: 1831919616 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 89145344, before: 1726124032, after: 1815269376 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55623680, before: 1744486400, after: 1800110080 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 335872, before: 2024341504, after: 2024677376 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55771136, before: 1742909440, after: 1798680576 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.4106684, rank: 15, write(sync,parallel): 0.33020687103271484 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 97456128, before: 1744879616, after: 1842335744 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.4209502, rank: 13, write(sync,parallel): 0.3433492183685303 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139382784, before: 1746673664, after: 1886056448 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.437757, rank: 12, write(sync,parallel): 0.34685420989990234 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139415552, before: 1731215360, after: 1870630912 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.41s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139370496, before: 1739235328, after: 1878605824 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139268096, before: 1750487040, after: 1889755136 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139300864, before: 1791754240, after: 1931055104 
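The "consumed: X, before: Y, after: Z" lines report how much additional memory a writer worker held while serializing its bucket. A hedged sketch of that bookkeeping, using psutil purely for illustration (the actual code may sample memory differently):

import os
import psutil

def write_with_memory_accounting(write_fn):
    proc = psutil.Process(os.getpid())
    before = proc.memory_info().rss   # resident set size before the write
    write_fn()                        # serialize and write this worker's bucket
    after = proc.memory_info().rss
    print(f"consumed: {after - before}, before: {before}, after: {after}")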
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.459708, rank: 14, write(sync,parallel): 0.3685178756713867 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.42s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139493376, before: 1744007168, after: 1883500544 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.479834, rank: 7, write(sync,parallel): 0.3510019779205322 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.485139, rank: 5, write(sync,parallel): 0.3625509738922119 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.45s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.4910283, rank: 11, write(sync,parallel): 0.36156773567199707 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.4917488, rank: 10, write(sync,parallel): 0.355999231338501 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.4966602, rank: 8, write(sync,parallel): 0.360581636428833 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.45s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.5155056, rank: 9, write(sync,parallel): 0.3830604553222656 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.44s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.44s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.43s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.44s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.45s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.47s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139460608, before: 1742909440, after: 1882370048 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139350016, before: 1744515072, after: 1883865088 
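Once the parallel writes return, the per-rank "gather" timings and the single "metadata_write" on rank 0 reported just below reflect the final coordination step: every rank ships its local write results to rank 0, which persists the global metadata for the checkpoint. A hedged sketch of that step (finalize_save and the metadata layout are illustrative, not the actual on-disk format):

import torch
import torch.distributed as dist

def finalize_save(local_write_results, metadata_path):
    results = [None] * dist.get_world_size() if dist.get_rank() == 0 else None
    dist.gather_object(local_write_results, results, dst=0)  # the "gather" step
    if dist.get_rank() == 0:
        # Rank 0 alone writes the global index of what every rank saved
        # (the "metadata_write" timing in the log).
        torch.save(results, metadata_path)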
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 211800064, before: 1747537920, after: 1959337984 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.6178424, rank: 4, write(sync,parallel): 0.4934074878692627 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.653745, rank: 6, write(sync,parallel): 0.5350959300994873 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.6571653, rank: 3, write(sync,parallel): 0.4911782741546631 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.58s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.62s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.57s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 211890176, before: 1748303872, after: 1960194048 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543247.9356923, rank: 2, write(sync,parallel): 0.7636432647705078 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.86s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 417853440, before: 1741086720, after: 2158940160 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543248.0833642, rank: 1, write(sync,parallel): 0.8175089359283447 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.90s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 417759232, before: 2024341504, after: 2442100736 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543248.620747, rank: 0, write(sync,parallel): 1.2981131076812744 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.39s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.666395, 5, gather: 1.1409938335418701 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6664782, 6, gather: 0.9699306488037109 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6665492, 4, gather: 1.0075721740722656 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6666355, 7, gather: 1.1435182094573975 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6666882, 3, gather: 0.9672336578369141 
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6668053, 2, gather: 0.6794402599334717 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6668363, 1, gather: 0.5446176528930664 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6672513, 13, gather: 1.2032418251037598 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.667259, 9, gather: 1.1087298393249512 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6672807, 11, gather: 1.1361987590789795 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6673036, 10, gather: 1.1373951435089111 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6674335, 15, gather: 1.2165486812591553 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6674354, 8, gather: 1.1277878284454346 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6673722, 12, gather: 1.1765327453613281 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6675072, 14, gather: 1.1678078174591064 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.668908, 0, gather: 0.0052394866943359375 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543248.6781368, metadata_write: 0.009081602096557617 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0165s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.5579s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6935s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.9808s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.0214s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.2297s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.1219s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.2163s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.1807s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.1492s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.1898s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.1409s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.1503s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.1551s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 1.1573s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.9840s + successfully saved checkpoint from iteration 10 to gpt-checkpoint [ t 1/2, p 1/1 ] +DEBUG:megatron.training.checkpointing:rank: 2, takes 0.002037525177001953 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 0, takes 0.002065420150756836 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 1, takes 0.0020287036895751953 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 3, takes 0.0020639896392822266 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 12, takes 0.0019202232360839844 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 14, takes 0.0019757747650146484 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 15, takes 0.0019800662994384766 to finalize 
ckpt save +DEBUG:megatron.training.checkpointing:rank: 11, takes 0.002010822296142578 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 13, takes 0.002000570297241211 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 4, takes 0.002165079116821289 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 10, takes 0.002056598663330078 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 9, takes 0.002098560333251953 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 5, takes 0.0022766590118408203 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 8, takes 0.0020933151245117188 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 7, takes 0.002248048782348633 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 6, takes 0.0022170543670654297 to finalize ckpt save +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +Evaluating on 1 samples +Evaluating iter 1/1 +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch 
tensor: labels torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: 
position_ids torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 10 +Done exporting trace 10 +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +(min, max) time across ranks (ms): + evaluate .......................................: (2986.95, 2987.49) +---------------------------------------------------------------------------------------------------------------- +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED + validation loss at iteration 10 on validation set | lm loss value: 9.577630E+00 | lm loss PPL: 1.443816E+04 | +---------------------------------------------------------------------------------------------------------------- +Evaluating on 1 samples +Evaluating iter 1/1 +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch 
tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch tensor: loss_mask torch.Size([8, 16384])
+batch tensor: attention_mask torch.Size([8, 1, 16384, 16384])
+batch tensor: position_ids torch.Size([8, 16384])
+batch tensor after cp: tokens torch.Size([8, 2048])
+batch tensor after cp: labels torch.Size([8, 2048])
+batch tensor after cp: loss_mask torch.Size([8, 2048])
+batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384])
+batch tensor after cp: position_ids torch.Size([8, 2048])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: tokens torch.Size([8, 16384])
+batch tensor: labels torch.Size([8, 16384])
+batch
tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor: tokens torch.Size([8, 16384]) +batch tensor: labels torch.Size([8, 16384]) +batch tensor: loss_mask torch.Size([8, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor: attention_mask torch.Size([8, 1, 16384, 16384]) +batch tensor: position_ids torch.Size([8, 16384]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +batch tensor after cp: tokens torch.Size([8, 2048]) +batch tensor after cp: labels torch.Size([8, 2048]) +batch tensor after cp: loss_mask torch.Size([8, 2048]) +batch tensor after cp: attention_mask torch.Size([8, 1, 2048, 16384]) +batch tensor after cp: position_ids torch.Size([8, 2048]) +Start exporting trace 11 +Done exporting trace 11 +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +(min, max) time across ranks (ms): + evaluate .......................................: (103.02, 103.31) +---------------------------------------------------------------------------------------------------------- + validation loss at iteration 10 on test set | lm loss value: 9.577630E+00 | lm loss PPL: 1.443816E+04 | +---------------------------------------------------------------------------------------------------------- +Running ctx_length=4096, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 4096 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +-------------------------------- +CTX_LENGTH: 4096 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. 
False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 8 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 
3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 4096 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. 
False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 2 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 16 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... 
False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 
4096 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. 
fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... 
False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 4096 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 2 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... 
minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 16 + yaml_cfg ........................................ 
None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 2 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.043 seconds +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.562 seconds +time to initialize megatron (seconds): 8.485 +[after megatron is initialized] datetime: 2025-06-21 22:01:30 +building GPT model ... 
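The per-rank parameter counts printed during the model build below can be cross-checked against the arguments echoed above. The following is a back-of-the-envelope sketch, not the code Megatron runs: it assumes standard tensor-parallel sharding (QKV/fc1 column-parallel and proj/fc2 row-parallel across the 2 TP ranks; layernorms, position embeddings and row-parallel biases replicated; output layer tied to the vocab-sharded word embedding), and it uses the common ~18-bytes-per-parameter accounting for the "Theoretical memory footprints" line printed at iteration 1.

```python
# Back-of-the-envelope check of the model-build printouts below, using the
# argument values echoed above. The sharding/replication assumptions in the
# comments are mine, not taken from the log.

tp          = 2        # tensor_model_parallel_size
hidden      = 4096     # hidden_size
ffn         = 16384    # ffn_hidden_size
layers      = 2        # num_layers
heads       = 64       # num_attention_heads
groups      = 16       # num_query_groups (grouped-query attention)
kv_channels = 64       # per-head dimension
vocab       = 50432    # padded vocab (50257 + 175 dummy tokens)
positions   = 4096     # max_position_embeddings (learned_absolute)

qkv_out = (heads + 2 * groups) * kv_channels        # 6144 output rows for Q, K, V

per_layer = (
    2 * hidden                        # input layernorm weight + bias (replicated)
    + hidden * qkv_out // tp          # column-parallel QKV weight shard
    + qkv_out // tp                   # QKV bias shard
    + hidden * hidden // tp           # row-parallel attention projection shard
    + hidden                          # projection bias (replicated)
    + 2 * hidden                      # pre-MLP layernorm weight + bias
    + hidden * ffn // tp + ffn // tp  # column-parallel fc1 weight + bias shards
    + ffn * hidden // tp              # row-parallel fc2 weight shard
    + hidden                          # fc2 bias (replicated)
)

embeddings      = vocab // tp * hidden + positions * hidden   # sharded word + replicated position
params_per_rank = layers * per_layer + 2 * hidden + embeddings  # + final layernorm

print(params_per_rank)              # 296302592, as printed per (tensor, pipeline) rank below

cp, pp, dp = 8, 1, 1
print(tp * cp * pp * dp)            # 16, matching "using world size: 16" above

# Rough memory cross-check: ~18 bytes/param for fp16 weight + fp32 main param,
# fp32 main grad and two fp32 Adam moments (no distributed optimizer).
print(round(0.2795e9 * 18 / 2**20, 2))   # ~4797.9 MB vs. the reported 4797.35 MB
```

Under these assumptions the count lands exactly on the 296302592 reported for every (tensor, pipeline) rank; the memory figure is only a rough cross-check.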
+>>> embedding
+>>> decoder
+>>> output_layer
+>>> embedding
+>>> decoder
+>>> output_layer
+>>> embedding
+>>> decoder
+>>> output_layer
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
+>>> embedding
+>>> embedding
+>>> decoder
+>>> decoder
+>>> output_layer
+>>> output_layer
+>>> embedding
+>>> decoder
+>>> output_layer
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
+>>> embedding
+>>> embedding
+>>> decoder
+>>> decoder
+>>> output_layer
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
+INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
+Params for bucket 1 (296302592 elements, 296302592 padded size):
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.final_layernorm.bias
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.bias
+ module.decoder.layers.0.mlp.linear_fc2.bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
+
module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.embedding.position_embeddings.weight + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.embedding.word_embeddings.weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.79, 3.34) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:01:30 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
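The OptimizerConfig above enables dynamic fp16 loss scaling with initial_loss_scale=4294967296, which is why the first training iterations further below are reported as skipped while the "loss scale" column halves (4294967296.0 at iteration 1, 2147483648.0 at iteration 2). The sketch below is a deliberately simplified model of that behaviour, not Megatron's actual DynamicGradScaler, which additionally honours the hysteresis=2 and loss_scale_window=1000 settings shown above.

```python
# Toy model of dynamic fp16 loss scaling: overflowing iterations are skipped
# and the scale is halved; a long run of clean iterations doubles it again.
# Simplification of Megatron's scaler (hysteresis is omitted here).

class ToyLossScaler:
    def __init__(self, initial_scale=4294967296.0, growth_interval=1000, min_scale=1.0):
        self.scale = initial_scale
        self.growth_interval = growth_interval
        self.min_scale = min_scale
        self._good_steps = 0

    def step(self, found_inf: bool) -> bool:
        """Return True if the optimizer step is applied, False if it is skipped."""
        if found_inf:                                   # inf/NaN in the fp16 gradients
            self.scale = max(self.scale / 2.0, self.min_scale)
            self._good_steps = 0
            return False
        self._good_steps += 1
        if self._good_steps == self.growth_interval:    # grow back after a clean window
            self.scale *= 2.0
            self._good_steps = 0
        return True

scaler = ToyLossScaler()
for it in (1, 2):
    print(f"iteration {it}: loss scale {scaler.scale}")  # 4294967296.0, then 2147483648.0
    scaler.step(found_inf=True)                          # early iterations overflow -> skipped
```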
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=4096, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.006842 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 16648 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.002721 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 16640 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.002633 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 16671 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 22:01:30 +done with setup ... +training ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (584.40, 601.52) + train/valid/test-data-iterators-setup ..........: (20.78, 152.58) +Setting rerun_state_machine.current_iteration to 0... 
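Each rank below first logs the full batch ("batch tensor", sequence length 32768) and then its local context-parallel shard ("batch tensor after cp", 32768 / CP_SIZE = 4096), with the attention mask keeping the full key dimension. A shape-level sketch of that split follows; it assumes a plain contiguous slice along the sequence axis (Megatron's real context-parallel layout interleaves chunks for load balancing) and uses hypothetical names, with tensors on the "meta" device so nothing is actually allocated.

```python
import torch

# Shape-level illustration of the "batch tensor" -> "batch tensor after cp"
# logs below (CP_SIZE=8). Contiguous split is an assumption; "meta" tensors
# carry shapes only, so no memory is allocated for the 32768x32768 mask.

cp_size, cp_rank = 8, 0
batch, seq_len = 8, 32768
chunk = seq_len // cp_size                                   # 4096 tokens per CP rank

tokens         = torch.empty(batch, seq_len, dtype=torch.long, device="meta")
attention_mask = torch.empty(batch, 1, seq_len, seq_len, dtype=torch.bool, device="meta")

rows = slice(cp_rank * chunk, (cp_rank + 1) * chunk)
tokens_cp = tokens[:, rows]                                  # torch.Size([8, 4096])
mask_cp   = attention_mask[:, :, rows, :]                    # torch.Size([8, 1, 4096, 32768])

print("batch tensor:          tokens", tuple(tokens.shape))
print("batch tensor after cp: tokens", tuple(tokens_cp.shape))
print("batch tensor after cp: attention_mask", tuple(mask_cp.shape))
```

The query rows become local to the rank while the key columns stay global, which is why the mask shrinks only in its third dimension.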
+[before the start of training step] datetime: 2025-06-21 22:01:30 +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch 
tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) 
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+Start exporting trace 0
+Done exporting trace 0
+Number of parameters in transformer block in billions: 0.35
+Number of parameters in embedding layers in billions: 0.21
+Total number of parameters in billions: 0.56
+Number of parameters in most loaded shard in billions: 0.2795
+Theoretical memory footprints: weight and optimizer=4797.35 MB
+[Rank 5] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 23880.0 | max reserved: 23880.0
+[Rank 2] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 23944.0 | max reserved: 23944.0
+[Rank 3] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 23944.0 | max reserved: 23944.0
+[Rank 0] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 24136.0 | max reserved: 24136.0
+ [2025-06-21 22:01:48] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 17402.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+[Rank 6] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 23944.0 | max reserved: 23944.0
+[Rank 15] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 24328.0 | max reserved: 24328.0
+[Rank 4] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 24264.0 | max reserved: 24264.0
+[Rank 14] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 24584.0 | max reserved: 24584.0
+[Rank 7] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 23944.0 | max reserved: 23944.0
+[Rank 13] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 24540.0 | max reserved: 24540.0
+[Rank 9] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 23880.0 | max reserved: 23880.0
+[Rank 12] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 24284.0 | max reserved: 24284.0
+[Rank 8] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 24392.0 | max reserved: 24392.0
+[Rank 10] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 24200.0 | max reserved: 24200.0
+[Rank 11] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 23944.0 | max reserved: 23944.0
+[Rank 1] (after 1 iterations) memory (MB) | allocated: 12880.64306640625 | max allocated: 22177.03369140625 | reserved: 23880.0 | max reserved: 23880.0
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor:
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+Start exporting trace 1
+Done exporting trace 1
+ [2025-06-21 22:01:49] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 671.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
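Note: the "after cp" shapes are consistent with a context-parallel group of size 8 slicing the query/sequence dimension, 32768 / 8 = 4096, while the attention mask keeps its full 32768-wide key dimension. A minimal sketch of that slicing, assuming a contiguous per-rank chunk and hypothetical cp_rank/cp_size arguments (real context-parallel implementations typically use a load-balanced head/tail split instead):

    import torch

    def slice_batch_for_cp(batch: dict, cp_rank: int, cp_size: int) -> dict:
        # Each rank keeps 1/cp_size of the sequence dimension:
        # tokens/labels/loss_mask/position_ids go [8, 32768] -> [8, 4096],
        # attention_mask goes [8, 1, 32768, 32768] -> [8, 1, 4096, 32768].
        seq_len = batch["tokens"].size(1)
        chunk = seq_len // cp_size
        sl = slice(cp_rank * chunk, (cp_rank + 1) * chunk)
        out = {}
        for key, t in batch.items():
            if key == "attention_mask":
                out[key] = t[:, :, sl, :]   # shard query rows only
            else:
                out[key] = t[:, sl]
        return out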
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+Start exporting trace 2
+Done exporting trace 2
+ [2025-06-21 22:01:49] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 606.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+Start exporting trace 3
+Done exporting trace 3
+ [2025-06-21 22:01:50] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 623.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+Start exporting trace 4
+Done exporting trace 4
+ [2025-06-21 22:01:50] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 614.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+Start exporting trace 5
+Done exporting trace 5
+ [2025-06-21 22:01:51] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 609.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+Start exporting trace 6
+Done exporting trace 6
+ [2025-06-21 22:01:52] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 594.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+Start exporting trace 7
+Done exporting trace 7
+ [2025-06-21 22:01:52] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 604.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +Start exporting trace 8 +Done exporting trace 8 + [2025-06-21 22:01:53] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 605.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) 
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768])
+batch tensor: loss_mask torch.Size([8, 32768])
+batch tensor: attention_mask torch.Size([8, 1, 32768, 32768])
+batch tensor: position_ids torch.Size([8, 32768])
+batch tensor after cp: tokens torch.Size([8, 4096])
+batch tensor after cp: labels torch.Size([8, 4096])
+batch tensor after cp: loss_mask torch.Size([8, 4096])
+batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768])
+batch tensor after cp: position_ids torch.Size([8, 4096])
+batch tensor: tokens torch.Size([8, 32768])
+batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +batch tensor: tokens torch.Size([8, 32768]) +batch tensor: labels torch.Size([8, 32768]) +batch tensor: loss_mask torch.Size([8, 32768]) +batch tensor: attention_mask torch.Size([8, 1, 32768, 32768]) +batch tensor: position_ids torch.Size([8, 32768]) +batch tensor after cp: tokens torch.Size([8, 4096]) +batch tensor after cp: labels 
torch.Size([8, 4096]) +batch tensor after cp: loss_mask torch.Size([8, 4096]) +batch tensor after cp: attention_mask torch.Size([8, 1, 4096, 32768]) +batch tensor after cp: position_ids torch.Size([8, 4096]) +Start exporting trace 9 +Done exporting trace 9 + [2025-06-21 22:01:53] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 609.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +[after training is done] datetime: 2025-06-21 22:01:53 +saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format +DEBUG:megatron.training.checkpointing:rank: 15, takes 0.0309295654296875 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 11, takes 0.0309293270111084 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 9, takes 0.031063556671142578 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 12, takes 0.03137922286987305 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 14, takes 0.03140592575073242 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 8, takes 0.03148627281188965 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 13, takes 0.0323185920715332 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 10, takes 0.0342862606048584 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 5, takes 0.03462982177734375 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 3, takes 0.03471064567565918 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 1, takes 0.03485703468322754 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 7, takes 0.03487563133239746 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 2, takes 0.03527951240539551 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 6, takes 0.035300254821777344 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 0, takes 0.0354158878326416 to prepare state dict for ckpt +DEBUG:megatron.training.checkpointing:rank: 4, takes 0.03538680076599121 to prepare state dict for ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization 
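
Across iterations 8-10 the reported loss scale halves each step (33554432.0 -> 16777216.0 -> 8388608.0) while every line also reports "number of skipped iterations: 1" and a learning rate of 0: each step hit an fp16 overflow, so the dynamic grad scaler backed off instead of applying an update. The sketch below shows that back-off rule in isolation; it is a generic dynamic loss scaler, not Megatron's exact implementation, and the growth interval and back-off factor are assumptions.

class DynamicLossScaler:
    """Toy dynamic loss scaler: halve on overflow, grow after a streak of clean steps."""

    def __init__(self, init_scale=2.0**32, growth_interval=1000, backoff_factor=0.5):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self.backoff_factor = backoff_factor
        self._good_steps = 0

    def update(self, found_overflow: bool) -> bool:
        """Return True if the optimizer step should be skipped."""
        if found_overflow:
            self.scale *= self.backoff_factor  # back off, skip the update
            self._good_steps = 0
            return True
        self._good_steps += 1
        if self._good_steps % self.growth_interval == 0:
            self.scale *= 2.0                  # cautiously grow back
        return False

scaler = DynamicLossScaler(init_scale=67108864.0)  # scale entering iteration 8
for _ in range(3):                                 # mimics iterations 8, 9, 10 above
    skipped = scaler.update(found_overflow=True)
    print(skipped, scaler.scale)                   # True 33554432.0 / 16777216.0 / 8388608.0
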
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)] +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(201326592), 2), (np.int64(201326592), 3), (np.int64(184788992), 4), (np.int64(192937984), 5), (np.int64(192937984), 6), (np.int64(184788992), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(201326592), 2), (np.int64(201326592), 3), (np.int64(184788992), 4), (np.int64(192937984), 5), (np.int64(192937984), 6), (np.int64(184788992), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(201326592), 2), (np.int64(201326592), 3), (np.int64(184788992), 
4), (np.int64(192937984), 5), (np.int64(192937984), 6), (np.int64(184788992), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(201326592), 2), (np.int64(201326592), 3), (np.int64(184788992), 4), (np.int64(192937984), 5), (np.int64(192937984), 6), (np.int64(184788992), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(201326592), 2), (np.int64(201326592), 3), (np.int64(184788992), 4), (np.int64(192937984), 5), (np.int64(192937984), 6), (np.int64(184788992), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(201326592), 2), (np.int64(201326592), 3), (np.int64(184788992), 4), (np.int64(192937984), 5), (np.int64(192937984), 6), (np.int64(184788992), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(201326592), 2), (np.int64(201326592), 3), (np.int64(184788992), 4), (np.int64(192937984), 5), (np.int64(192937984), 6), (np.int64(184788992), 7)] +DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(201326592), 2), (np.int64(201326592), 3), (np.int64(184788992), 4), (np.int64(192937984), 5), (np.int64(192937984), 6), (np.int64(184788992), 7)] +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.454664707183838 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.452364206314087 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4549157619476318 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.454819917678833 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4524621963500977 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.45487380027771 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4526312351226807 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4550108909606934 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4528107643127441 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.455461025238037 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4528584480285645 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.455641269683838 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.18788695335388184 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.455489158630371 +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4531559944152832 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save 
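
The distribute_shards_to_ranks lines above list (shard_size_in_bytes, assigned_rank) pairs: the fully parallel save strategy spreads the checkpoint shards over the 8 ranks of each save group so that the total bytes written per rank stay roughly balanced. A greedy largest-first assignment, sketched below, reproduces that kind of distribution; it illustrates the idea only and is not the exact Megatron routine, and the example shard sizes are just sample values.

import heapq

def distribute_shards_to_ranks(shard_sizes, num_ranks):
    """Greedy balance: give each shard (largest first) to the currently lightest rank."""
    heap = [(0, rank) for rank in range(num_ranks)]  # (assigned_bytes, rank)
    heapq.heapify(heap)
    assignment = []
    for size in sorted(shard_sizes, reverse=True):
        load, rank = heapq.heappop(heap)
        assignment.append((size, rank))
        heapq.heappush(heap, (load + size, rank))
    return assignment

# Sample sizes in bytes; in the real run they come from the sharded state dict.
print(distribute_shards_to_ranks([413138944, 206569472, 184549376, 167839744], num_ranks=4))
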
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4548659324645996 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying 
reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.006014823913574219 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.005791425704956055 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, plan time: 0.006264686584472656 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.006005764007568359 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, plan time: 0.006192207336425781 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, plan time: 0.004954099655151367 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.0032417774200439453 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5273778 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, plan time: 0.004613161087036133 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, plan time: 0.004907369613647461 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.004759311676025391 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5273986 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5274022 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5272803 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5274107 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5274136 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5272863 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5272937 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.006079912185668945 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.527298 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, plan time: 0.003785371780395508 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, plan time: 0.0045397281646728516 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5273118 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5273242 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.222724914550781e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.4849853515625e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.151199340820312e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.626678466796875e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5274496 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, plan time: 0.004777193069458008 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5274491 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.8650970458984375e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.29425048828125e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.985664367675781e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.5274806 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.127357482910156e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.772445678710938e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.20159912109375e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.984306335449219e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.081031799316406e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.367134094238281e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.0019867420196533203 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.650520324707031e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.527418 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.91278076171875e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.006269931793212891 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543315.530357 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 4.4345855712890625e-05 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05960822105407715 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.587372 rank: 7, write(async) time: 0.0600886344909668 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06038641929626465 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05988025665283203 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5882256 rank: 12, write(async) time: 0.06082415580749512 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5876157 rank: 5, write(async) time: 0.06032729148864746 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06092023849487305 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.060587406158447266 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5887454 rank: 11, write(async) time: 0.0613408088684082 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5884292 rank: 3, write(async) time: 0.0611271858215332 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.060829877853393555 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06206464767456055 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.588939 rank: 15, write(async) time: 0.06155872344970703 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5898263 rank: 6, write(async) time: 0.06253385543823242 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.061655282974243164 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05988574028015137 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5895584 rank: 9, write(async) time: 0.062105655670166016 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5906353 rank: 0, write(async) time: 0.06027793884277344 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06380820274353027 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06325078010559082 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5916991 rank: 13, write(async) time: 0.06425046920776367 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5910149 rank: 4, write(async) time: 0.06368780136108398 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06400322914123535 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06357097625732422 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5919437 rank: 14, write(async) time: 0.0645303726196289 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5914829 rank: 1, write(async) time: 0.06406259536743164 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06659293174743652 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.594303 rank: 2, write(async) time: 0.06699228286743164 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06708288192749023 
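
The "D2H and push" and "write(async)" timings above reflect the asynchronous save pattern: each rank first copies its shard tensors from GPU to CPU (device-to-host), then hands the CPU copy to a forked writer process so training can continue while the files are written; the writer is joined later ("joining self.process"). Below is a stripped-down sketch of that pattern using plain multiprocessing rather than Megatron's FileSystemWriterAsync/TemporalAsyncCaller; the file path and state dict are placeholders.

import multiprocessing as mp
import torch

def _write_worker(path, cpu_state):
    # Runs in a separate process; the GPU is never touched here.
    torch.save(cpu_state, path)

def schedule_async_save(state_dict, path):
    # D2H: detach and copy every tensor to CPU before forking the writer.
    cpu_state = {k: v.detach().to("cpu", copy=True) if torch.is_tensor(v) else v
                 for k, v in state_dict.items()}
    proc = mp.get_context("fork").Process(target=_write_worker, args=(path, cpu_state))
    proc.start()
    return proc  # the caller joins this much later, after more training steps

if __name__ == "__main__":
    p = schedule_async_save({"weight": torch.randn(4, 4)}, "/tmp/shard_example.pt")
    p.join()
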
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5949817 rank: 10, write(async) time: 0.06749987602233887 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06724762916564941 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543315.5951235 rank: 8, write(async) time: 0.06771039962768555 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 2.384185791015625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 1.9788742065429688e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 1.9073486328125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 2.0742416381835938e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 1.9788742065429688e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 1.8596649169921875e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 1.430511474609375e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 1.8835067749023438e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 0.03488969802856445 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.03761148452758789 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 1.8358230590820312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 1.7642974853515625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 0.03995656967163086 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 1.9073486328125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 0.054186344146728516 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 2.1219253540039062e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.03865814208984375 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 0.043144941329956055 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.04639005661010742 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 0.050977468490600586 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 131072, before: 1846235136, after: 1846366208 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 3.123283386230469e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 2.002716064453125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 0.03584623336791992 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 290816, before: 1851138048, after: 1851428864 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 524288, before: 1832476672, after: 1833000960 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 0.03765869140625 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 835584, before: 1846235136, after: 1847070720 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__11_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 0.03644895553588867 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7004986, rank: 11, write(sync,parallel): 0.05841493606567383 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 0.04291868209838867 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 589824, before: 1905012736, after: 1905602560 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 528384, before: 1895428096, after: 1895956480 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 131072, before: 1835913216, after: 1836044288 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 417792, before: 1865347072, after: 1865764864 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 589824, before: 1832476672, after: 1833066496 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__5_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7209623, rank: 5, write(sync,parallel): 0.058429718017578125 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 745472, before: 1851138048, after: 1851883520 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__7_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 831488, before: 1936379904, after: 1937211392 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.723141, rank: 7, write(sync,parallel): 0.06383776664733887 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.03888726234436035 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.03791928291320801 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 327680, before: 1905012736, after: 1905340416 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__6_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7348797, rank: 6, write(sync,parallel): 0.06679248809814453 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 651264, before: 1909960704, after: 1910611968 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 524288, before: 1895428096, after: 1895952384 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__9_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 528384, before: 1935347712, after: 1935876096 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__10_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 823296, before: 1835913216, after: 1836736512 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7428148, rank: 9, write(sync,parallel): 0.0790708065032959 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.743755, rank: 10, write(sync,parallel): 0.0731813907623291 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 528384, before: 1786920960, after: 1787449344 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 638976, before: 1912107008, after: 1912745984 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 421888, before: 1865347072, after: 1865768960 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__4_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7486765, rank: 4, 
write(sync,parallel): 0.07811474800109863 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 716800, before: 1936379904, after: 1937096704 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__8_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7582998, rank: 8, write(sync,parallel): 0.07896113395690918 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 536576, before: 1935335424, after: 1935872000 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__12_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.770827, rank: 12, write(sync,parallel): 0.06608891487121582 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 651264, before: 1909960704, after: 1910611968 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 663552, before: 1935859712, after: 1936523264 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__15_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__3_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7738025, rank: 15, write(sync,parallel): 0.07548904418945312 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7738874, rank: 3, write(sync,parallel): 0.03938698768615723 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 835584, before: 1882894336, after: 1883729920 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__2_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.7764869, rank: 2, write(sync,parallel): 0.039842844009399414 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 634880, before: 1786920960, after: 1787555840 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
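
The repeated ERROR lines in this save all fail the same way: ENOENT for gpt-checkpoint/iter_0000010/__<rank>_0.distcp, meaning the per-iteration checkpoint directory was missing (or had been removed) by the time the forked writer processes tried to create their .distcp files, so no shard was actually persisted even though the async save was scheduled. One defensive guard, purely illustrative and not the fix Megatron itself applies, is to (re)create the iteration directory on every rank before scheduling the async write:

import os

def ensure_iter_dir(checkpoint_path: str, iteration: int) -> str:
    # e.g. gpt-checkpoint/iter_0000010 for iteration 10; exist_ok makes this safe
    # when several ranks on the same node race to create it.
    iter_dir = os.path.join(checkpoint_path, f"iter_{iteration:07d}")
    os.makedirs(iter_dir, exist_ok=True)
    return iter_dir

print(ensure_iter_dir("gpt-checkpoint", 10))  # gpt-checkpoint/iter_0000010
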
+ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__13_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.781496, rank: 13, write(sync,parallel): 0.07473945617675781 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 2.2172927856445312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 528384, before: 1912107008, after: 1912635392 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__14_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.787678, rank: 14, write(sync,parallel): 0.08615589141845703 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.04177665710449219 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 2.0742416381835938e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 827392, before: 1938272256, after: 1939099648 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.043576717376708984 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.18s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.27s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.25s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.27s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.26s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: 
Async process join finished after 0.21s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.22s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.22s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0011s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.19s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0011s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0010s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.26s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.26s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.26s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0010s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.25s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0015s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0011s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0017s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0013s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0012s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0011s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0015s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0014s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.23s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0014s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0012s +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
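Note on the [Errno 2] failures above and below: each forked FileSystemWriterAsync worker aborts because its per-rank shard file (e.g. 'gpt-checkpoint/iter_0000010/__13_0.distcp') cannot be opened, i.e. the per-iteration directory under the save path is missing at write time; the checkpoint directory is also deleted between runs ("Cleaning up checkpoint directory" below), so a cleaned-up or never-created path produces exactly this error. A minimal, hypothetical guard (not Megatron's own checkpointing code; ensure_iter_dir is an illustrative helper) that pre-creates the per-iteration directory before an async save is scheduled:

import os


def ensure_iter_dir(save_root: str, iteration: int) -> str:
    """Illustrative only: pre-create <save_root>/iter_XXXXXXX so that forked
    checkpoint writers can open their per-rank '__<rank>_0.distcp' files."""
    iter_dir = os.path.join(save_root, f"iter_{iteration:07d}")
    os.makedirs(iter_dir, exist_ok=True)  # no-op if the directory already exists
    return iter_dir


# e.g. ensure_iter_dir("gpt-checkpoint", 10) -> "gpt-checkpoint/iter_0000010"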
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 827392, before: 1938272256, after: 1939099648 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__1_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.9013028, rank: 1, write(sync,parallel): 0.0671839714050293 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 724992, before: 2102136832, after: 2102861824 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 failed +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 811008, before: 2102136832, after: 2102947840 +ERROR:megatron.core.dist_checkpointing.strategies.filesystem_async:Local process 0 encountered an error: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__0_0.distcp' +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543315.97042, rank: 0, write(sync,parallel): 0.08865237236022949 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.21s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0011s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.25s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0016s +Running ctx_length=8192, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 8192 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +-------------------------------- +CTX_LENGTH: 8192 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages 
per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False +INFO:megatron.training.initialize:Setting logging level to 0 + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 
0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 8 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False +INFO:megatron.training.initialize:Setting logging level to 0 + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 8192 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... 
False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 2 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 
2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 16 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False +INFO:megatron.training.initialize:Setting logging level to 0 + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... 
torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True +INFO:megatron.training.initialize:Setting logging level to 0 + max_position_embeddings ......................... 8192 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... 
None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... 
learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 8192 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 
0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 2 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ 
False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 16 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... + > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 2 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.045 seconds +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.907 seconds +time to initialize megatron (seconds): 8.151 +[after megatron is initialized] datetime: 2025-06-21 22:02:38 +building GPT model ... 
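Two quick arithmetic checks on the configuration echoed above, written as a plain-Python sketch (not Megatron code); the padding rule "round the vocabulary up to a multiple of make_vocab_size_divisible_by x TP" is assumed because it reproduces the reported numbers:

import math

# Parallel layout: world_size = TP x CP x PP x DP
world_size, tp, cp, pp = 16, 2, 8, 1
dp = world_size // (tp * cp * pp)
assert tp * cp * pp * dp == world_size    # 2 * 8 * 1 * 1 == 16
print(dp)                                 # 1 -> "data-parallel size: 1"

# Tokenizer padding: 50257 tokens rounded up to a multiple of 128 * TP = 256
orig_vocab, divisible_by = 50257, 128
multiple = divisible_by * tp
padded = math.ceil(orig_vocab / multiple) * multiple
print(padded, padded - orig_vocab)        # 50432 175 -> "padded vocab ... with 175 dummy tokens (new size: 50432)"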
+>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808 > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808 + +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (313079808 elements, 313079808 padded size): + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + 
module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.embedding.position_embeddings.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.embedding.word_embeddings.weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (3.19, 3.30) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:02:39 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
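The repeated per-rank count of 313079808 parameters (and the single 313079808-element gradient bucket above) can be reconstructed from the argument dump, assuming the usual layout: column-parallel QKV/fc1 and row-parallel proj/fc2 sharded over TP=2, replicated position embeddings and layer norms, and an output layer tied to the word embeddings. A back-of-the-envelope sketch, not Megatron's own accounting:

# hidden 4096, ffn 16384, 2 layers, 64 heads / 16 query groups, kv_channels 64,
# padded vocab 50432, max_position_embeddings 8192, TP = 2, tied output layer
h, ffn, layers, tp = 4096, 16384, 2, 2
heads, groups, kv = 64, 16, 64
vocab, pos = 50432, 8192

qkv_out = heads * kv + 2 * groups * kv      # 6144 output channels for grouped-query QKV
per_layer = (
    2 * h                                    # input layernorm weight + bias
    + h * qkv_out // tp + qkv_out // tp      # QKV weight + bias (column-parallel)
    + (h // tp) * h + h                      # attention output proj + bias (row-parallel)
    + 2 * h                                  # pre-MLP layernorm
    + h * ffn // tp + ffn // tp              # fc1 weight + bias (column-parallel)
    + (ffn // tp) * h + h                    # fc2 weight + bias (row-parallel)
)
total = (vocab // tp) * h + pos * h + layers * per_layer + 2 * h  # + final layernorm
print(total)  # 313079808, matching the per-rank count reported above

With fp16 weights plus fp32 main parameters, fp32 gradients and two fp32 Adam moments (no distributed optimizer), roughly 18 bytes per parameter on the most loaded shard (0.2795 B parameters) gives about 0.2795e9 * 18 / 2**20 ≈ 4.8 GB, consistent with the "Theoretical memory footprints: weight and optimizer=4797.35 MB" line reported after the first iteration.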
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=8192, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004855 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8324 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001904 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8320 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001751 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8335 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 22:02:39 +done with setup ... +training ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (767.19, 791.28) + train/valid/test-data-iterators-setup ..........: (16.03, 174.89) +Setting rerun_state_machine.current_iteration to 0... 
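The "batch tensor" / "batch tensor after cp" pairs that follow show context parallelism at work: every rank first receives the full batch, then keeps only its 1/CP_SIZE slice of the sequence dimension (65536 / 8 = 8192 here), while the attention mask keeps its full key length ([8, 1, 8192, 65536]). A simplified slicing sketch with small stand-in sizes (Megatron's actual helper additionally splits into 2*CP chunks and pairs them to balance causal-attention work, which is omitted here):

import torch

def split_for_cp(batch: dict, cp_size: int, cp_rank: int) -> dict:
    """Simplified sketch: each context-parallel rank keeps 1/cp_size of the
    sequence (query) dimension; the key dimension of the mask is left intact."""
    out = {}
    for key, t in batch.items():
        dim = 2 if key == "attention_mask" else 1   # mask: split query dim only
        out[key] = t.chunk(cp_size, dim=dim)[cp_rank]
    return out

# Stand-in sizes; in the log above b=8, seq=65536, cp=8,
# so [8, 65536] -> [8, 8192] and [8, 1, 65536, 65536] -> [8, 1, 8192, 65536].
b, seq, cp = 2, 64, 8
batch = {
    "tokens": torch.zeros(b, seq, dtype=torch.long),
    "attention_mask": torch.ones(b, 1, seq, seq, dtype=torch.bool),
}
shard = split_for_cp(batch, cp, cp_rank=0)
print(shard["tokens"].shape)          # torch.Size([2, 8])
print(shard["attention_mask"].shape)  # torch.Size([2, 1, 8, 64])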
+[before the start of training step] datetime: 2025-06-21 22:02:39
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 0
+Done exporting trace 0
+Number of parameters in transformer block in billions: 0.35
+Number of parameters in embedding layers in billions: 0.21
+Total number of parameters in billions: 0.56
+Number of parameters in most loaded shard in billions: 0.2795
+Theoretical memory footprints: weight and optimizer=4797.35 MB
+ [2025-06-21 22:02:57] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 18340.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+[Rank 5] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64150.0 | max reserved: 64150.0
+[Rank 13] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64810.0 | max reserved: 64810.0
+[Rank 9] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64406.0 | max reserved: 64406.0
+[Rank 14] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64918.0 | max reserved: 64918.0
+[Rank 10] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64534.0 | max reserved: 64534.0
+[Rank 11] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 65046.0 | max reserved: 65046.0
+[Rank 7] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64278.0 | max reserved: 64278.0
+[Rank 12] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64810.0 | max reserved: 64810.0
+[Rank 8] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 63894.0 | max reserved: 63894.0
+[Rank 15] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 65302.0 | max reserved: 65302.0
+[Rank 2] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64022.0 | max reserved: 64022.0
+[Rank 6] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64278.0 | max reserved: 64278.0
+[Rank 4] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64150.0 | max reserved: 64150.0
+[Rank 0] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64386.0 | max reserved: 64386.0
+[Rank 3] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 64534.0 | max reserved: 64534.0
+[Rank 1] (after 1 iterations) memory (MB) | allocated: 40728.51806640625 | max allocated: 60587.59619140625 | reserved: 63874.0 | max reserved: 63874.0
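Annotation (not part of the log): the parameter counts and the "Theoretical memory footprints" line above are mutually consistent at roughly 18 bytes per parameter on the most loaded shard (0.2795 B params x 18 B is about 4797 MB), which matches the usual mixed-precision accounting of a 2-byte weight plus fp32 gradient, fp32 master weight and two fp32 Adam moments, although the log does not print that breakdown. The per-rank "[Rank N] ... memory (MB)" lines report the standard CUDA allocator counters. A minimal sketch under those assumptions, not the job's actual reporting code:

import torch

# Assumed 18-byte-per-parameter split; the log only gives the total.
params_most_loaded_shard = 0.2795e9
bytes_per_param = 2 + 4 + 4 + 4 + 4          # weight, grad, master, Adam m, Adam v
print(params_most_loaded_shard * bytes_per_param / 2**20)   # ~4797 MB

# Stand-in for a per-rank memory report built from torch.cuda counters.
def report_memory(rank: int, iteration: int) -> str:
    mb = 1 << 20
    return (f"[Rank {rank}] (after {iteration} iterations) memory (MB) | "
            f"allocated: {torch.cuda.memory_allocated() / mb} | "
            f"max allocated: {torch.cuda.max_memory_allocated() / mb} | "
            f"reserved: {torch.cuda.memory_reserved() / mb} | "
            f"max reserved: {torch.cuda.max_memory_reserved() / mb}")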
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 1
+Done exporting trace 1
+ [2025-06-21 22:03:00] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 2550.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
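Annotation (not part of the log): every iteration so far reports "number of skipped iterations: 1", and the loss scale is halved each step (4294967296 to 2147483648 to 1073741824 and so on), i.e. the fp16 dynamic loss scaler keeps backing off because gradients overflow at the current scale. A minimal sketch of that backoff logic, assuming the usual halve-on-overflow, grow-after-N-good-steps scheme rather than the exact Megatron-LM scaler:

class DynamicLossScaler:
    def __init__(self, initial_scale=2.0 ** 32, backoff_factor=0.5,
                 growth_factor=2.0, growth_interval=1000):
        self.scale = initial_scale
        self.backoff_factor = backoff_factor
        self.growth_factor = growth_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, found_inf):
        if found_inf:
            # Overflow: the optimizer step is skipped and the scale shrinks.
            self.scale *= self.backoff_factor
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps % self.growth_interval == 0:
                self.scale *= self.growth_factor

scaler = DynamicLossScaler()
for it in range(1, 5):
    print(f"iteration {it}: loss scale {scaler.scale:.1f}")
    scaler.update(found_inf=True)   # every logged iteration is skipped so far
# prints 4294967296.0, 2147483648.0, 1073741824.0, 536870912.0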
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 2
+Done exporting trace 2
+ [2025-06-21 22:03:02] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 2491.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
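Annotation (not part of the log): the "batch tensor" / "batch tensor after cp" shape pairs repeated throughout this log show what context parallelism does to each batch: the full 65536-token sequence is split across cp_size = 8 ranks, so tokens, labels, loss_mask and position_ids shrink to 8192 per rank while the attention mask keeps the full 65536 key dimension ([8, 1, 8192, 65536]). The sketch below only reproduces that shape transformation; the real Megatron-LM helper (get_batch_on_this_cp_rank, if that is what this job uses) additionally splits each sequence into 2 * cp_size chunks for load balancing, which is not modelled here.

import torch

# Shape-only illustration: take one contiguous 1/cp_size slice for this rank.
def slice_batch_for_cp(batch, cp_rank, cp_size):
    seq_len = batch["tokens"].size(1)
    chunk = seq_len // cp_size
    sl = slice(cp_rank * chunk, (cp_rank + 1) * chunk)
    out = {}
    for name, t in batch.items():
        if name == "attention_mask":
            out[name] = t[:, :, sl, :]   # queries sharded, keys kept global
        else:
            out[name] = t[:, sl]
    return out

# Toy sizes so the example actually runs (a bool [8, 1, 65536, 65536] mask
# would need roughly 34 GB); the logged run used b=8, s=65536, cp_size=8,
# which gives the 8192-per-rank shapes printed above.
b, s, cp = 8, 64, 8
batch = {
    "tokens": torch.zeros(b, s, dtype=torch.long),
    "labels": torch.zeros(b, s, dtype=torch.long),
    "loss_mask": torch.ones(b, s),
    "attention_mask": torch.ones(b, 1, s, s, dtype=torch.bool),
    "position_ids": torch.arange(s).repeat(b, 1),
}
for name, t in slice_batch_for_cp(batch, cp_rank=0, cp_size=cp).items():
    print("batch tensor after cp:", name, t.shape)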
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 3
+Done exporting trace 3
+ [2025-06-21 22:03:05] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 2474.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 4
+Done exporting trace 4
+ [2025-06-21 22:03:07] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 2498.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 5
+Done exporting trace 5
+ [2025-06-21 22:03:10] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 2497.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
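Annotation (not part of the log): the "Start exporting trace N" / "Done exporting trace N" pairs that bracket each iteration look like per-step profiler trace dumps. A hedged sketch of how such messages could be produced with torch.profiler; the file name, the wrapper prints and the dummy step below are assumptions, not taken from this job:

import torch
from torch.profiler import profile, ProfilerActivity

def train_step():
    # Stand-in for the real forward/backward pass.
    x = torch.randn(256, 256, requires_grad=True)
    (x @ x).sum().backward()

for trace_id in range(3):
    with profile(activities=[ProfilerActivity.CPU]) as prof:
        train_step()
    print(f"Start exporting trace {trace_id}")
    prof.export_chrome_trace(f"trace_{trace_id}.json")
    print(f"Done exporting trace {trace_id}")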
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 6
+Done exporting trace 6
+ [2025-06-21 22:03:12] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 2509.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after 
cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting 
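The shapes above show context parallelism at work: each full sequence of 65536 tokens is reduced to 8192 tokens per rank (context-parallel size 8), while the attention_mask keeps the full 65536-wide key dimension. Below is a minimal sketch of such a slicing step, assuming a plain contiguous split; the helper name slice_batch_for_cp is hypothetical, and Megatron-LM's actual context-parallel split uses a more elaborate load-balanced chunking.

# Minimal sketch (assumption: contiguous slicing per context-parallel rank).
import torch

def slice_batch_for_cp(batch, cp_rank, cp_size):
    """Keep only this context-parallel rank's slice of the sequence dimension."""
    seq_len = batch["tokens"].shape[1]
    chunk = seq_len // cp_size          # 65536 // 8 = 8192 in the log above
    s, e = cp_rank * chunk, (cp_rank + 1) * chunk
    out = {}
    for name, t in batch.items():
        if name == "attention_mask":
            # query dim is sliced, key dim keeps the full sequence:
            # [b, 1, 65536, 65536] -> [b, 1, 8192, 65536]
            out[name] = t[:, :, s:e, :]
        else:
            # [b, 65536] -> [b, 8192]
            out[name] = t[:, s:e]
    return out

# Tiny demo sizes so the example runs anywhere (the log used seq=65536, cp=8).
b, seq, cp = 8, 64, 8
batch = {
    "tokens": torch.zeros(b, seq, dtype=torch.long),
    "labels": torch.zeros(b, seq, dtype=torch.long),
    "loss_mask": torch.ones(b, seq),
    "attention_mask": torch.ones(b, 1, seq, seq, dtype=torch.bool),
    "position_ids": torch.arange(seq).repeat(b, 1),
}
local = slice_batch_for_cp(batch, cp_rank=0, cp_size=cp)
for k, v in local.items():
    print("batch tensor after cp:", k, tuple(v.shape))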
+Start exporting trace 7
+Done exporting trace 7
+ [2025-06-21 22:03:15] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 2486.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
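The iteration summaries above show the loss scale halving on every step (67108864 -> 33554432 -> ...) together with "number of skipped iterations: 1", which is the signature of dynamic loss scaling backing off after an overflow. Below is a minimal sketch of that backoff policy, assuming a simple halve-on-overflow scaler; it is illustrative only, not Megatron's actual grad scaler.

# Minimal sketch of dynamic loss-scale backoff (assumption: halve on overflow,
# grow again only after a long run of clean steps).
class SimpleLossScaler:
    def __init__(self, init_scale=2.0**32, backoff=0.5, growth=2.0, growth_interval=1000):
        self.scale = init_scale
        self.backoff = backoff
        self.growth = growth
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, found_overflow: bool) -> bool:
        if found_overflow:
            # Skip the optimizer step and shrink the scale, as in the
            # "number of skipped iterations: 1" lines above.
            self.scale *= self.backoff
            self._good_steps = 0
            return False          # step skipped
        self._good_steps += 1
        if self._good_steps % self.growth_interval == 0:
            self.scale *= self.growth
        return True               # step applied

scaler = SimpleLossScaler(init_scale=134217728.0)
for it in range(7, 11):
    scaler.update(found_overflow=True)
    print(f"iteration {it}: loss scale {scaler.scale}")
# prints 67108864.0, 33554432.0, 16777216.0, 8388608.0, matching the summaries above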
+Start exporting trace 8
+Done exporting trace 8
+ [2025-06-21 22:03:17] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 2492.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([8, 65536])
+batch tensor: labels torch.Size([8, 65536])
+batch tensor: loss_mask torch.Size([8, 65536])
+batch tensor: attention_mask torch.Size([8, 1, 65536, 65536])
+batch tensor: position_ids torch.Size([8, 65536])
+batch tensor after cp: tokens torch.Size([8, 8192])
+batch tensor after cp: labels torch.Size([8, 8192])
+batch tensor after cp: loss_mask torch.Size([8, 8192])
+batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536])
+batch tensor after cp: position_ids torch.Size([8, 8192])
+Start exporting trace 9
+Done exporting trace 9
+ [2025-06-21 22:03:20] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 2492.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+[after training is done] datetime: 2025-06-21 22:03:20 
+saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
+DEBUG:megatron.training.checkpointing:rank: 13, takes 0.02636265754699707 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 9, takes 0.02641749382019043 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 11, takes 0.026665925979614258 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 10, takes 0.0267486572265625 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 12, takes 0.02678370475769043 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 15, takes 0.02689337730407715 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 8, takes 0.026856660842895508 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 14, takes 0.026907682418823242 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 5, takes 0.029201030731201172 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 3, takes 0.029797792434692383 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 0, takes 0.029778480529785156 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 6, takes 0.029793977737426758 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 4, takes 0.029872894287109375 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 1, takes 0.03708291053771973 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 2, takes 0.03708624839782715 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 7, takes 0.0445706844329834 to prepare state dict for ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
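The "Apply save parallelization" step above and the distribute_shards_to_ranks distributions below show the fully parallel save strategy assigning checkpoint shards to ranks so that the per-rank byte counts stay roughly balanced (about 160-410 MB each here). Below is a minimal sketch of one such assignment, a greedy largest-first packing; this is an illustration under that assumption, not necessarily the exact policy used by megatron.core.dist_checkpointing.exchange_utils.

# Minimal sketch: assign shards to ranks greedily by size (largest shard to
# the currently least-loaded rank). Illustrative only.
import heapq

def distribute_shards_to_ranks(shard_sizes, num_ranks):
    """shard_sizes: dict shard_id -> size in bytes. Returns rank -> [shard_id]."""
    heap = [(0, rank) for rank in range(num_ranks)]   # (current load, rank)
    heapq.heapify(heap)
    assignment = {rank: [] for rank in range(num_ranks)}
    for shard_id, size in sorted(shard_sizes.items(), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(heap)              # least-loaded rank so far
        assignment[rank].append(shard_id)
        heapq.heappush(heap, (load + size, rank))
    return assignment

# toy example: 20 parameter shards of different sizes, 8 ranks as in the log
sizes = {f"param_{i}": (i + 1) * 16 * 1024 * 1024 for i in range(20)}
assignment = distribute_shards_to_ranks(sizes, num_ranks=8)
for rank, shards in assignment.items():
    print(rank, sum(sizes[s] for s in shards), shards)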
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(218103808), 2), (np.int64(218103808), 3), (np.int64(201566208), 4), (np.int64(209715200), 5), (np.int64(209715200), 6), (np.int64(201566208), 7)]
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4566154479980469
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.447014331817627
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4532809257507324
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4558942317962646
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4470500946044922
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.447178840637207
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4535725116729736
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4474382400512695
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.454092025756836
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4476659297943115
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4893138408660889
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4477581977844238
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4544169902801514
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4480748176574707
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.1998913288116455
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 1.4568781852722168
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting
state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse 
verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, plan time: 0.009448051452636719 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, plan time: 0.009023666381835938 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, plan time: 0.008713722229003906 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, plan time: 0.009459972381591797 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, plan time: 0.008826017379760742 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.009976625442504883 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.009492874145507812 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.009122610092163086 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.668483 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6684844 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, plan time: 0.009659290313720703 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6684887 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6684904 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6684878 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.009062528610229492 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6683688 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6683712 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, plan time: 0.009807348251342773 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, plan time: 0.00873565673828125 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6683738 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6685207 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6685393 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.66854 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.0558319091796875e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.007727384567260742 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.668403 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.0023627281188964844 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.651878356933594e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.626678466796875e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.364418029785156e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.985664367675781e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.557868957519531e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.00859212875366211 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.222724914550781e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.668429 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.440017700195312e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6684346 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.295608520507812e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.127357482910156e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6684656 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.175041198730469e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.031990051269531e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.249282836914062e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.557868957519531e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.269050598144531e-05 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.010318517684936523 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543401.6716528 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 4.7206878662109375e-05 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05220603942871094 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7210946 rank: 13, write(async) time: 0.05260062217712402 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05415797233581543 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7230396 rank: 10, write(async) time: 0.05456256866455078 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05473184585571289 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7235029 rank: 4, write(async) time: 0.05513119697570801 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.056014299392700195 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05520272254943848 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05608630180358887 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7240264 rank: 6, write(async) time: 0.05562853813171387 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7249427 rank: 14, write(async) time: 0.05645298957824707 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.056526899337768555 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7249954 rank: 15, write(async) time: 0.05650472640991211 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7253013 rank: 3, write(async) time: 0.05692744255065918 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05648231506347656 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.0566709041595459 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.725415 rank: 12, write(async) time: 0.05687713623046875 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7254896 rank: 5, write(async) time: 0.05711698532104492 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05666804313659668 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7255654 rank: 11, write(async) time: 0.057082176208496094 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.0569767951965332 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7259223 rank: 9, write(async) time: 0.05739879608154297 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.0586240291595459 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.727555 rank: 8, write(async) time: 0.05901622772216797 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 2.1219253540039062e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.0639791488647461 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7328115 rank: 2, write(async) time: 0.06435012817382812 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06319284439086914 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.7352586 rank: 0, write(async) time: 0.06360292434692383 
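The "D2H and push" timings above correspond to staging the GPU state into host memory so the asynchronous writer can work from CPU buffers while training continues. Below is a minimal sketch of that staging step, assuming plain PyTorch copies; the helper name is hypothetical, and the real FileSystemWriterAsync pipeline adds bucketing and worker processes on top of this.

# Minimal sketch of the device-to-host ("D2H") staging step before an async
# checkpoint write. Falls back to a no-op copy on CPU-only machines.
import time
import torch

def stage_state_dict_to_host(state_dict):
    """Copy every CUDA tensor to pinned host memory; leave the rest as-is."""
    staged = {}
    for name, tensor in state_dict.items():
        if isinstance(tensor, torch.Tensor) and tensor.is_cuda:
            host = torch.empty(tensor.shape, dtype=tensor.dtype,
                               device="cpu", pin_memory=True)
            host.copy_(tensor, non_blocking=True)   # overlaps with compute
            staged[name] = host
        else:
            staged[name] = tensor
    if torch.cuda.is_available():
        torch.cuda.synchronize()                    # make the async copies visible
    return staged

device = "cuda" if torch.cuda.is_available() else "cpu"
state = {"weight": torch.randn(1024, 1024, device=device),
         "step": torch.tensor(10)}
t0 = time.time()
host_state = stage_state_dict_to_host(state)
print(f"D2H and push, time: {time.time() - t0}")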
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 2.3603439331054688e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.06987500190734863 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.738774 rank: 1, write(async) time: 0.07034039497375488 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.07400059700012207 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543401.742865 rank: 7, write(async) time: 0.07443404197692871 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.03550076484680176 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.04276680946350098 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 1.7642974853515625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 1.8358230590820312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 2.0265579223632812e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 1.8596649169921875e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 1.7642974853515625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 2.0265579223632812e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 1.7642974853515625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 2.2172927856445312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 1.8835067749023438e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 1.71661376953125e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 1.8358230590820312e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 1.811981201171875e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 0.03325295448303223 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 0.03450751304626465 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 0.03518843650817871 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 0.03575634956359863 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 0.03598284721374512 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 0.03426814079284668 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 0.0345304012298584 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 0.04198312759399414 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.038041114807128906 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 0.036028146743774414 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.03895211219787598 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.050839900970458984 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 2.1457672119140625e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 38883328, before: 1682882560, after: 1721765888 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 39018496, before: 1711751168, after: 1750769664 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.042981863021850586 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 2.2649765014648438e-05 to finish D2H +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55590912, before: 1694605312, after: 1750196224 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.047260284423828125 to schedule async ckpt +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, 
joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, joining self.process +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97505280, before: 1685565440, after: 1783070720 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 131072, before: 1685798912, after: 1685929984 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 55791616, before: 1682485248, after: 1738276864 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 80576512, before: 1685630976, after: 1766207488 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 87810048, before: 1701593088, after: 1789403136 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 88854528, before: 1685565440, after: 1774419968 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 97681408, before: 1684586496, after: 1782267904 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 72617984, before: 1754775552, after: 1827393536 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 89079808, before: 1678573568, after: 1767653376 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results... 
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 80646144, before: 1713053696, after: 1793699840 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 89284608, before: 1684586496, after: 1773871104 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 352256, before: 2007371776, after: 2007724032 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 106123264, before: 1688039424, after: 1794162688 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 106090496, before: 1688039424, after: 1794129920 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.1250856, rank: 13, write(sync,parallel): 0.3030281066894531 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139370496, before: 1682882560, after: 1822253056 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.1385674, rank: 15, write(sync,parallel): 0.3093693256378174 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139194368, before: 1685630976, after: 1824825344 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139542528, before: 1701593088, after: 1841135616 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.1585023, rank: 14, write(sync,parallel): 0.32282304763793945 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139407360, before: 1678573568, after: 1817980928 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139337728, before: 1754775552, after: 1894113280 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.38s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.1772842, rank: 11, write(sync,parallel): 0.3497192859649658 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139640832, before: 1694605312, after: 1834246144 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.39s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139456512, before: 1711751168, after: 1851207680 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139272192, before: 1713053696, after: 1852325888 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.191197, rank: 12, write(sync,parallel): 0.34186863899230957 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully 
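The "consumed / before / after" numbers in the writer DEBUG lines above are internally consistent: each "after" value equals "before" plus "consumed" (they appear to be byte counts for the forked writer processes). A minimal check over a few tuples copied from this log:

# Check that before + consumed == after for the writer-memory DEBUG lines above.
samples = [
    (38883328, 1682882560, 1721765888),
    (39018496, 1711751168, 1750769664),
    (55590912, 1694605312, 1750196224),
    (97505280, 1685565440, 1783070720),
]
for consumed, before, after in samples:
    assert before + consumed == after, (consumed, before, after)
print("all writer memory deltas are consistent")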
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.40s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.1970348, rank: 6, write(sync,parallel): 0.34688544273376465 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 211599360, before: 1681457152, after: 1893056512 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.2022383, rank: 4, write(sync,parallel): 0.3530905246734619 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 211857408, before: 1715900416, after: 1927757824 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 139583488, before: 1682485248, after: 1822068736 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.2066646, rank: 8, write(sync,parallel): 0.35846447944641113 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.43s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.2200596, rank: 9, write(sync,parallel): 0.39240455627441406 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.2231011, rank: 5, write(sync,parallel): 0.38385915756225586 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.2261114, rank: 10, write(sync,parallel): 0.3821749687194824 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.42s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.43s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.43s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.45s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.2532244, rank: 2, write(sync,parallel): 0.46718287467956543 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.2579367, rank: 7, write(sync,parallel): 0.38746166229248047 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.47s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully 
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.46s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.2621486, rank: 3, write(sync,parallel): 0.48989439010620117 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.46s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.56s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.49s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.58s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 417849344, before: 1685790720, after: 2103640064 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.8242006, rank: 1, write(sync,parallel): 0.8458817005157471 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.94s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 417697792, before: 2007371776, after: 2425069568 +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully +DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543402.9479637, rank: 0, write(sync,parallel): 0.917926549911499 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.02s from forking +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9960682, 7, gather: 0.6890177726745605 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9962678, 4, gather: 0.7550363540649414 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9963377, 1, gather: 0.12447285652160645 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.996276, 5, gather: 0.7340695858001709 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.996354, 3, gather: 0.679426908493042 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9967859, 9, gather: 0.7375669479370117 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.996292, 6, gather: 0.7593562602996826 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9969337, 11, gather: 0.7806830406188965 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.996757, 2, gather: 0.6947276592254639 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9969447, 12, gather: 0.7660727500915527 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.996941, 10, gather: 0.7325725555419922 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9970806, 14, gather: 0.7988793849945068 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9971225, 13, gather: 0.8281552791595459 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9971464, 15, gather: 0.8172204494476318 
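The per-rank "write(sync,parallel)" and "TemporalAsyncCaller ... join finished" lines above time a checkpoint shard being written by a forked worker process while the trainer waits to join it. The sketch below only illustrates that general pattern; the worker function, path, and payload are made up for the example and are not the actual FileSystemWriterAsync code.

import multiprocessing as mp
import pickle
import time


def _write_worker(path, payload):
    # Persist the payload and report how long the synchronous write took,
    # mirroring the per-rank "write(sync,parallel)" timing printed above.
    start = time.time()
    with open(path, "wb") as f:
        pickle.dump(payload, f)
    print("write(sync,parallel):", time.time() - start)


if __name__ == "__main__":
    state = {"weights": [0.0] * 1_000_000}   # stand-in for one rank's shard (illustrative)
    fork_time = time.time()
    proc = mp.Process(target=_write_worker, args=("/tmp/shard_rank0.bin", state))
    proc.start()
    # ...the training loop could keep running here while the writer works...
    proc.join()
    print("Async process join finished after %.2fs from forking" % (time.time() - fork_time))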
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.99722, 8, gather: 0.7490251064300537 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543402.9986613, 0, gather: 0.005019187927246094 +DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543403.0078952, metadata_write: 0.009096145629882812 +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.1382s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0163s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.6931s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7476s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7688s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7079s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7620s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7937s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.8301s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7455s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.8119s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.8413s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7507s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7793s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7731s +DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.7029s + successfully saved checkpoint from iteration 10 to gpt-checkpoint [ t 1/2, p 1/1 ] +DEBUG:megatron.training.checkpointing:rank: 1, takes 0.002054452896118164 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 4, takes 0.0020432472229003906 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 7, takes 0.002052783966064453 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 5, takes 0.0020227432250976562 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 2, takes 0.002017974853515625 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 0, takes 0.0020580291748046875 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 6, takes 0.002032756805419922 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 3, takes 0.002058744430541992 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 9, takes 0.0020380020141601562 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 11, takes 0.0020444393157958984 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 12, takes 0.0020062923431396484 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 14, takes 0.001999378204345703 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 13, takes 0.0020210742950439453 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 10, takes 0.0020711421966552734 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 15, takes 0.0019910335540771484 to finalize ckpt save +DEBUG:megatron.training.checkpointing:rank: 8, takes 0.002062559127807617 to finalize ckpt save +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +Evaluating on 1 samples +Evaluating iter 1/1 +batch tensor: tokens torch.Size([8, 
65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: 
position_ids torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels 
torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 10 +Done exporting trace 10 +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +(min, max) time across ranks (ms): + evaluate .......................................: (4370.94, 4371.70) +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +---------------------------------------------------------------------------------------------------------------- + validation loss at iteration 10 on validation set | lm loss value: 9.831114E+00 | lm loss PPL: 1.860366E+04 | +---------------------------------------------------------------------------------------------------------------- +Evaluating on 1 samples +Evaluating iter 1/1 +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels 
torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) 
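The "lm loss PPL" reported for iteration 10 a few lines above is just the exponential of the printed lm loss value; a quick check against the logged numbers:

import math

lm_loss = 9.831114            # "lm loss value" printed for iteration 10 above
print(math.exp(lm_loss))      # ~1.8604e+04, matching the reported "lm loss PPL"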
+batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +batch tensor: tokens torch.Size([8, 65536]) +batch tensor: labels torch.Size([8, 65536]) +batch tensor: 
loss_mask torch.Size([8, 65536]) +batch tensor: attention_mask torch.Size([8, 1, 65536, 65536]) +batch tensor: position_ids torch.Size([8, 65536]) +batch tensor after cp: tokens torch.Size([8, 8192]) +batch tensor after cp: labels torch.Size([8, 8192]) +batch tensor after cp: loss_mask torch.Size([8, 8192]) +batch tensor after cp: attention_mask torch.Size([8, 1, 8192, 65536]) +batch tensor after cp: position_ids torch.Size([8, 8192]) +Start exporting trace 11 +Done exporting trace 11 +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED +(min, max) time across ranks (ms): + evaluate .......................................: (1470.63, 1471.49) +---------------------------------------------------------------------------------------------------------- + validation loss at iteration 10 on test set | lm loss value: 9.831114E+00 | lm loss PPL: 1.860366E+04 | +---------------------------------------------------------------------------------------------------------- +Running ctx_length=12288, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 12288 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +-------------------------------- +CTX_LENGTH: 12288 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... 
None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 8 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. 
None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 12288 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 2 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 
1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 16 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . 
False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 12288 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... 
None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ 
None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 
0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 12288 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 2 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 
10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 16 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... + > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... 
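The padded-vocab message above is reproducible with a few lines of arithmetic. The sketch below assumes Megatron's default make-vocab-size-divisible-by value of 128, which is not shown in this argument dump; the vocab size of 50257 and the tensor-model-parallel size of 2 are taken from the log, and the variable names are illustrative only.

import math

vocab_size = 50257        # GPT2BPETokenizer vocab reported above
tp_size = 2               # tensor_model_parallel_size from the argument dump
divisible_by = 128        # assumed Megatron default (--make-vocab-size-divisible-by)

multiple = divisible_by * tp_size                  # each TP shard gets whole blocks
padded_vocab = math.ceil(vocab_size / multiple) * multiple
print(padded_vocab, padded_vocab - vocab_size)     # 50432 175, as in the log line above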
+INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 2 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.046 seconds +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 3.268 seconds +time to initialize megatron (seconds): 8.889 +[after megatron is initialized] datetime: 2025-06-21 22:04:08 +building GPT model ... 
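Before the per-rank output that follows, it helps to keep the process layout in mind. A minimal sketch of the decomposition, using the tensor-model-parallel size 2, pipeline size 1 and world size 16 from the argument dump, plus the context-parallel size 8 implied by the per-rank sequence split later in this log; this is an illustration, not Megatron's own initialization code.

world_size = 16
tp, pp, cp = 2, 1, 8                       # tensor / pipeline / context parallel sizes
dp = world_size // (tp * pp * cp)          # remaining dimension is data parallel
assert tp * pp * cp * dp == world_size
print(f"TP={tp} PP={pp} CP={cp} DP={dp}")  # TP=2 PP=1 CP=8 DP=1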
+>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 329857024 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 329857024 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 329857024 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 329857024 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 329857024 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 329857024 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 329857024 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 329857024 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 329857024 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 329857024 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 329857024 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 329857024 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 329857024 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 329857024 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 329857024 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (329857024 elements, 329857024 padded size): + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.embedding.position_embeddings.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight +
module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 329857024 +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.82, 3.12) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:04:09 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... 
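The split_matrix logged above is simply the cumulative, normalized form of split=1,1,1. A small illustration of that computation (not the blended_megatron_dataset_config implementation itself):

weights = [1, 1, 1]                     # from split="1,1,1"
total = sum(weights)
cum = [sum(weights[:i]) / total for i in range(len(weights) + 1)]
split_matrix = list(zip(cum[:-1], cum[1:]))
print(split_matrix)
# [(0.0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
# matching the split_matrix reported in the INFO line above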
+INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=12288, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005371 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 5549 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001827 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 5546 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001650 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 5557 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 22:04:09 +done with setup ... +training ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (923.07, 968.60) + train/valid/test-data-iterators-setup ..........: (17.34, 173.00) +Setting rerun_state_machine.current_iteration to 0... 
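The batch shapes and the 72.00 GiB out-of-memory errors that follow are consistent with simple arithmetic: with context-parallel size 8, each rank keeps 98304 / 8 = 12288 tokens of the full sequence, but the dense attention mask of shape [8, 1, 98304, 98304] built by the profiling script is allocated at full length. A rough check is sketched below; it assumes one byte per mask element (a torch.bool mask) and roughly 18 bytes per parameter for fp16 Adam training without a distributed optimizer, and it is not the script's own code.

batch, full_seq, cp = 8, 98304, 8

per_rank_seq = full_seq // cp                         # 12288, matching the "after cp" shapes
mask_bytes = batch * 1 * full_seq * full_seq          # bool mask: 1 byte per element
print(per_rank_seq, f"{mask_bytes / 2**30:.2f} GiB")  # 12288 72.00 GiB, the size the OOM reports

shard_params = 0.2795e9                               # "most loaded shard" from the log below
print(f"{shard_params * 18 / 2**20:.0f} MB")          # ~4798 MB vs. 4797.35 MB reported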
+[before the start of training step] datetime: 2025-06-21 22:04:09 +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids 
torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: 
attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +batch tensor: tokens torch.Size([8, 98304]) +batch tensor: labels torch.Size([8, 98304]) +batch tensor: loss_mask torch.Size([8, 98304]) +batch tensor: attention_mask torch.Size([8, 1, 98304, 98304]) +batch tensor: position_ids torch.Size([8, 98304]) +batch tensor after cp: tokens torch.Size([8, 12288]) +batch tensor after cp: labels torch.Size([8, 12288]) +batch tensor after cp: loss_mask torch.Size([8, 12288]) +batch tensor after cp: attention_mask torch.Size([8, 1, 12288, 98304]) +batch tensor after cp: position_ids torch.Size([8, 12288]) +Start exporting trace 0 +Done exporting trace 0 +Number of parameters in transformer block in billions: 0.35 +Number of parameters in embedding layers in billions: 0.21 +Total number of parameters in billions: 0.56 +Number of parameters in most loaded shard in billions: 0.2795 +Theoretical memory footprints: weight and optimizer=4797.35 MB +[Rank 1] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123086.0 | max reserved: 123086.0 + [2025-06-21 22:04:30] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 20877.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 | +[Rank 7] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123090.0 | max reserved: 123090.0 +[Rank 5] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 122910.0 | max reserved: 122910.0[Rank 6] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 122898.0 | max reserved: 122898.0 + +[Rank 4] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 122718.0 | max reserved: 122718.0 +[Rank 3] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123282.0 | max reserved: 123282.0 +[Rank 9] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123882.0 | max reserved: 123882.0 +[Rank 2] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | 
reserved: 122898.0 | max reserved: 122898.0 +[Rank 0] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 122318.0 | max reserved: 122318.0 +[Rank 11] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123666.0 | max reserved: 123666.0 +[Rank 13] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 124302.0 | max reserved: 124302.0[Rank 14] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123666.0 | max reserved: 123666.0 + +[Rank 12] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123534.0 | max reserved: 123534.0 +[Rank 10] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123282.0 | max reserved: 123282.0 +[Rank 15] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 124434.0 | max reserved: 124434.0 +[Rank 8] (after 1 iterations) memory (MB) | allocated: 87009.39306640625 | max allocated: 117431.15869140625 | reserved: 123114.0 | max reserved: 123114.0 +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 51.26 GiB is free. Including non-PyTorch memory, this process has 88.54 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 51.26 GiB is free. Including non-PyTorch memory, this process has 88.54 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 51.27 GiB is free. Including non-PyTorch memory, this process has 88.53 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 51.27 GiB is free. Including non-PyTorch memory, this process has 88.53 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 51.79 GiB is free. Including non-PyTorch memory, this process has 88.00 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 51.79 GiB is free. Including non-PyTorch memory, this process has 88.00 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 51.78 GiB is free. Including non-PyTorch memory, this process has 88.02 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 51.78 GiB is free. Including non-PyTorch memory, this process has 88.02 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 51.79 GiB is free. Including non-PyTorch memory, this process has 88.00 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 51.79 GiB is free. Including non-PyTorch memory, this process has 88.00 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 51.59 GiB is free. Including non-PyTorch memory, this process has 88.21 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 51.59 GiB is free. Including non-PyTorch memory, this process has 88.21 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 51.26 GiB is free. Including non-PyTorch memory, this process has 88.54 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 51.26 GiB is free. Including non-PyTorch memory, this process has 88.54 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 51.78 GiB is free. Including non-PyTorch memory, this process has 88.02 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 51.78 GiB is free. Including non-PyTorch memory, this process has 88.02 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 51.27 GiB is free. Including non-PyTorch memory, this process has 88.53 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 51.27 GiB is free. Including non-PyTorch memory, this process has 88.53 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 200.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 51.59 GiB is free. Including non-PyTorch memory, this process has 88.21 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 51.59 GiB is free. Including non-PyTorch memory, this process has 88.21 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 51.07 GiB is free. Including non-PyTorch memory, this process has 88.73 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 51.07 GiB is free. Including non-PyTorch memory, this process has 88.73 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 51.09 GiB is free. Including non-PyTorch memory, this process has 88.71 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 51.09 GiB is free. Including non-PyTorch memory, this process has 88.71 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 51.07 GiB is free. Including non-PyTorch memory, this process has 88.73 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 51.07 GiB is free. Including non-PyTorch memory, this process has 88.73 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 51.61 GiB is free. Including non-PyTorch memory, this process has 88.19 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 51.61 GiB is free. Including non-PyTorch memory, this process has 88.19 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 51.09 GiB is free. Including non-PyTorch memory, this process has 88.71 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 51.09 GiB is free. Including non-PyTorch memory, this process has 88.71 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 72.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 51.61 GiB is free. Including non-PyTorch memory, this process has 88.19 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 72.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 51.61 GiB is free. Including non-PyTorch memory, this process has 88.19 GiB memory in use. Of the allocated memory 83.74 GiB is allocated by PyTorch, and 392.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=16384, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 16384 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 16384 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 
0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False +INFO:megatron.training.initialize:Setting logging level to 0 + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 8 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. 
None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 16384 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... 
torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 2 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 16 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... 
node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 16384 + max_tokens_to_oom ............................... 
12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. 
fully_parallel +INFO:megatron.training.initialize:Setting logging level to 0 + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ 
None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 16384 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 2 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 
0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 
16 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 2 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... +make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.076 seconds +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.920 seconds +time to initialize megatron (seconds): 8.740 +[after megatron is initialized] datetime: 2025-06-21 22:05:11 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240 + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240 +>>> 
embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (346634240 elements, 346634240 padded size): + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.embedding.position_embeddings.weight + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.0.self_attention.linear_proj.weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +>>> embedding +>>> 
decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240 +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.57, 3.38) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:05:13 +> building train, validation, and test datasets ... + > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... +INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=16384, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005208 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 4162 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001798 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 4160 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001522 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 4167 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 22:05:13 +done with setup ... 
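The per-rank parameter count reported above (346,634,240 on every (tensor, pipeline) model parallel rank) can be recovered from the argument dump. A back-of-the-envelope recount, assuming the usual Megatron column/row-parallel split with tied input/output embeddings, and assuming position embeddings, layer norms and row-parallel biases are replicated on every TP rank:

# Rough recount of "number of parameters on (tensor, pipeline) model parallel rank"
# from the argument dump above (a sketch under the assumptions stated in the note).
hidden, ffn, layers, tp = 4096, 16384, 2, 2
heads, kv_channels, query_groups = 64, 64, 16
padded_vocab, max_pos = 50432, 16384          # padded vocab and max_position_embeddings

qkv_out = heads * kv_channels + 2 * query_groups * kv_channels   # 6144 with GQA

word_emb = (padded_vocab // tp) * hidden      # vocab-parallel word embeddings
pos_emb  = max_pos * hidden                   # learned absolute positions, replicated
per_layer = (
    2 * hidden                                   # layer norm fused into linear_qkv
    + (qkv_out // tp) * hidden + qkv_out // tp   # column-parallel QKV weight + bias
    + hidden * (hidden // tp) + hidden           # row-parallel proj weight + bias
    + 2 * hidden                                 # layer norm fused into linear_fc1
    + (ffn // tp) * hidden + ffn // tp           # column-parallel fc1 weight + bias
    + hidden * (ffn // tp) + hidden              # row-parallel fc2 weight + bias
)
final_ln = 2 * hidden

print(word_emb + pos_emb + layers * per_layer + final_ln)   # 346634240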
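A note on the memory behaviour in the training step that follows: with create_attention_mask_in_dataloader=True (and create_attention_mask=True in the dataset config above), each batch carries a dense attention mask of shape [batch, 1, seq, seq], so the mask grows quadratically with sequence length. A minimal sketch of that arithmetic, assuming a boolean mask at 1 byte per element, which is consistent with the allocation sizes quoted in the OOM messages in this log:

# Size of a dense [batch, 1, seq, seq] attention mask, assuming 1 byte per element.
def mask_gib(batch, seq, bytes_per_el=1):
    return batch * 1 * seq * seq * bytes_per_el / 2**30

print(mask_gib(1, 16_384))    # ~0.25 GiB at the configured seq_length / micro_batch_size
print(mask_gib(8, 16_384))    # ~2 GiB for a batch dimension of 8
print(mask_gib(8, 131_072))   # ~128 GiB at the shapes printed below, which lines up
                              # with the ~130 GiB "allocated by PyTorch" in the warnings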
+(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (1073.12, 1092.10) + train/valid/test-data-iterators-setup ..........: (17.43, 161.36) +training ... +Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 22:05:13 +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. 
Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 7.63 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 7.63 GiB is free. 
Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 7.63 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 7.63 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 7.63 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 7.63 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 7.63 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 7.63 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 7.62 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 7.62 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 7.62 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 7.62 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 7.62 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 7.62 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 7.62 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 7.62 GiB is free. Including non-PyTorch memory, this process has 132.18 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +batch tensor: tokens torch.Size([8, 131072]) +batch tensor: labels torch.Size([8, 131072]) +batch tensor: loss_mask torch.Size([8, 131072]) +batch tensor: attention_mask torch.Size([8, 1, 131072, 131072]) +batch tensor: position_ids torch.Size([8, 131072]) +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. 
Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. 
GPU 4 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 16.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 293, in get_batch\n batch = get_batch_on_this_cp_rank(batch)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/utils.py", line 1765, in get_batch_on_this_cp_rank\n val = val.index_select(seq_dim, index)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 7.64 GiB is free. Including non-PyTorch memory, this process has 132.16 GiB memory in use. Of the allocated memory 130.61 GiB is allocated by PyTorch, and 69.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=24576, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 24576 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 24576 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... 
False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 8 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... 
False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 24576 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 2 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. 
False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 16 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... 
False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 24576 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 
1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. 
None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... 
None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 24576 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 2 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 
1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 16 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 2 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... 
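Note: two of the derived values in the initialization block above can be re-checked directly from the printed arguments: the world size decomposes as TP x CP x PP x DP = 2 x 8 x 1 x 1 = 16, and the padded vocab comes from rounding 50257 up to a multiple of make_vocab_size_divisible_by x TP = 256. A minimal sketch of that arithmetic follows; the helper name is ours, not Megatron's, and the padding rule is an assumption that happens to match the "175 dummy tokens" line above.

```python
# Sketch, not Megatron code: re-derive two values printed during initialization above.
# Assumptions (consistent with this log): vocab is padded to a multiple of
# make_vocab_size_divisible_by * tensor_model_parallel_size, and the world size is the
# product of the parallel sizes.

def padded_vocab_size(orig_vocab: int, divisible_by: int, tp_size: int) -> int:
    """Round the tokenizer vocab up to a multiple of (divisible_by * tp_size)."""
    multiple = divisible_by * tp_size
    return ((orig_vocab + multiple - 1) // multiple) * multiple

tp, cp, pp, dp = 2, 8, 1, 1
assert tp * cp * pp * dp == 16                 # "using world size: 16" above

padded = padded_vocab_size(50257, divisible_by=128, tp_size=tp)
print(padded, padded - 50257)                  # 50432 175 -> matches the padded-vocab line
```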
+make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.041 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.328 seconds +time to initialize megatron (seconds): 10.243 +[after megatron is initialized] datetime: 2025-06-21 22:05:57 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672 
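Note: the 380,188,672 parameters reported per (tensor, pipeline) rank above are consistent with the printed arguments. The back-of-the-envelope check below assumes the standard Megatron GPT layout those arguments imply (tied input/output embeddings, learned absolute position embeddings, GQA with 16 KV groups of 64 channels, biases enabled, LayerNorms fused onto the parallel linears); it is a sketch for sanity-checking, not Megatron's own accounting.

```python
# Sketch: reproduce the per-TP-rank parameter count printed above.
h, ffn, layers = 4096, 16384, 2
padded_vocab, seq = 50432, 24576
heads, groups, kv_ch, tp = 64, 16, 64, 2

word_emb = (padded_vocab // tp) * h            # word embedding is split over TP ranks
pos_emb  = seq * h                             # position embedding is replicated

qkv_out = heads * kv_ch + 2 * groups * kv_ch   # Q rows + K/V rows = 4096 + 2048
per_layer = (
    (qkv_out // tp) * h + qkv_out // tp        # linear_qkv weight + bias (column-parallel)
    + h * (h // tp) + h                        # linear_proj weight (row-parallel) + full bias
    + (ffn // tp) * h + ffn // tp              # linear_fc1 weight + bias
    + h * (ffn // tp) + h                      # linear_fc2 weight + full bias
    + 2 * 2 * h                                # two fused LayerNorms (weight + bias each)
)
total = word_emb + pos_emb + layers * per_layer + 2 * h   # + final LayerNorm
print(total)                                   # 380188672
```

Because the vocab shard and the column/row-parallel linears are split evenly across the two TP ranks, ranks (0, 0) and (1, 0) report the same count.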
+INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (380188672 elements, 380188672 padded size): + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.embedding.position_embeddings.weight + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.weight + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.layers.0.self_attention.linear_proj.weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.97, 3.21) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:05:59 +> building train, validation, and test datasets ... 
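Note: with use_distributed_optimizer=False and fp16 training, each rank keeps full fp16 weights plus fp32 main weights, fp32 Adam moments, and fp32 main gradients for the single 380,188,672-element bucket listed above. A rough, hedged estimate of that static per-rank footprint follows; the 18-bytes-per-parameter accounting is our assumption for this layout, not a number printed by the log, and activations and the dense attention mask come on top of it.

```python
# Rough estimate (our accounting, not Megatron output): static weight + optimizer bytes per rank
# under the config above: fp16 weight, fp32 main copy, two fp32 Adam moments, fp32 main grad.
params_per_rank = 380_188_672                  # bucket size reported above
bytes_per_param = 2 + 4 + 4 + 4 + 4            # assumed mixed-precision layout
print(f"{params_per_rank * bytes_per_param / 2**20:.0f} MiB")   # ~6526 MiB, i.e. ~6.4 GiB
```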
+ > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... +INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=24576, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.006464 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2774 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001791 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2773 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001470 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2778 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 22:05:59 +done with setup ... +training ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (1504.83, 1528.39) + train/valid/test-data-iterators-setup ..........: (20.15, 169.78) +Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 22:05:59 +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.40 GiB is free. Including non-PyTorch memory, this process has 4.41 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 288.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.38 GiB is free. Including non-PyTorch memory, this process has 4.42 GiB memory in use. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 67.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=32768, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 32768 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 32768 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... 
False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 8 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... 
False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 32768 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 2 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. 
False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 16 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... 
False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 32768 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 
1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. 
None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... 
None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 32768 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 2 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 
1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 16 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 + > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 2 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... 
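The "padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)" line a few entries above follows from rounding the GPT2BPETokenizer vocabulary up to a multiple of make_vocab_size_divisible_by (128) times tensor_model_parallel_size (2). A minimal sketch of that arithmetic, assuming that is the rounding rule in use:

# Hypothetical recomputation of the padded vocab size reported above.
orig_vocab = 50257                                # GPT2BPETokenizer vocabulary
divisible_by = 128                                # make_vocab_size_divisible_by
tp = 2                                            # tensor_model_parallel_size
multiple = divisible_by * tp                      # pad to a multiple of 256
padded = -(-orig_vocab // multiple) * multiple    # ceiling division, then scale back up
print(padded, padded - orig_vocab)                # 50432 175

Each tensor-parallel rank then holds a 50432 / 2 = 25216-row shard of the word-embedding table.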
+make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +make: Nothing to be done for 'default'. +make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets' +>>> done with dataset index builder. Compilation time: 0.042 seconds +WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations. +> compiling and loading fused kernels ... +>>> done with compiling and loading fused kernels. Compilation time: 2.870 seconds +time to initialize megatron (seconds): 9.233 +[after megatron is initialized] datetime: 2025-06-21 22:06:38 +building GPT model ... +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104 + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104 +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104 +INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False) +INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (413743104 elements, 413743104 padded size): + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + 
module.decoder.layers.1.self_attention.linear_qkv.bias + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.embedding.position_embeddings.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_proj.weight + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.final_layernorm.bias + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.self_attention.linear_qkv.weight + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.embedding.word_embeddings.weight + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104 +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +>>> embedding +>>> decoder +>>> output_layer + > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104 +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (3.38, 3.68) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:06:40 +> building train, validation, and test datasets ... 
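This run then fails the same way the previous configuration did: the warnings that appear right after "[before the start of training step]" below show every rank aborting on a 512.00 GiB allocation inside setup_batches, at the torch.ones call that materialises a dense attention mask. A rough sketch of the scaling, under the assumption that the script builds a float32 mask covering N full 32768-token sequences (the exact N is not printed in this log):

# Sketch only: dense-mask memory grows quadratically with sequence length.
S = 32768                               # seq_length / ctx_length for this run
FP32_BYTES = 4                          # torch.ones defaults to float32

def mask_gib(n_seqs: int, seq_len: int = S) -> float:
    # Bytes for an [n_seqs, 1, seq_len, seq_len] mask, expressed in GiB.
    return n_seqs * seq_len * seq_len * FP32_BYTES / 2**30

print(mask_gib(8))     # 32.0 GiB  - one batch of 8 sequences
print(mask_gib(128))   # 512.0 GiB - matches the failed allocation below, i.e.
                       # consistent with 128 sequences' worth of masks being
                       # built up front (an assumption, e.g. 16 batches of 8)

Since each GPU reports a total capacity of 139.81 GiB, the PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True hint in the error text would not help here; it mitigates fragmentation, not a single request larger than the device itself.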
+ > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... +INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=32768, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004732 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2081 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001647 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2080 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001424 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2083 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 22:06:40 +done with setup ... +training ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (1752.28, 1777.38) + train/valid/test-data-iterators-setup ..........: (15.93, 156.92) +Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 22:06:40 +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
+WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
+WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
+WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
+WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
+WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
+WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.15 GiB is free. Including non-PyTorch memory, this process has 4.66 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 512.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) +['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.13 GiB is free. Including non-PyTorch memory, this process has 4.67 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n'] +Running ctx_length=40960, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8 +Cleaning up checkpoint directory: gpt-checkpoint +Cleaning up checkpoint directory: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 40960 +TP_SIZE: 2 +CP_SIZE: 8 +CHECKPOINT_PATH: gpt-checkpoint +-------------------------------- +CTX_LENGTH: 40960 +TP_SIZE: 2 +CP_SIZE: 8 +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +CHECKPOINT_PATH: gpt-checkpoint +-------------------------------- +PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron +-------------------------------- +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written. +WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0 +Number of virtual stages per pipeline stage: None +WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used +using torch.float16 for parameters ... +------------------------ arguments ------------------------ + account_for_embedding_in_pipeline_split ......... False + account_for_loss_in_pipeline_split .............. False + accumulate_allreduce_grads_in_fp32 .............. False + adam_beta1 ...................................... 0.9 + adam_beta2 ...................................... 0.999 + adam_eps ........................................ 1e-08 + add_bias_linear ................................. True + add_position_embedding .......................... True + add_qkv_bias .................................... True + adlr_autoresume ................................. False + adlr_autoresume_interval ........................ 1000 + align_grad_reduce ............................... True + align_param_gather .............................. False + app_tag_run_name ................................ None + app_tag_run_version ............................. 0.0.0 + apply_layernorm_1p .............................. False + apply_query_key_layer_scaling ................... False + apply_residual_connection_post_layernorm ........ False + apply_rope_fusion ............................... False + async_save ...................................... None + async_tensor_model_parallel_allreduce ........... True + attention_backend ............................... AttnBackend.auto + attention_dropout ............................... 0.1 + attention_softmax_in_fp32 ....................... 
False + auto_detect_ckpt_format ......................... False + barrier_with_L1_time ............................ True + bert_binary_head ................................ True + bert_embedder_type .............................. megatron + bert_load ....................................... None + bf16 ............................................ False + bias_dropout_fusion ............................. True + bias_gelu_fusion ................................ True + bias_swiglu_fusion .............................. True + biencoder_projection_dim ........................ 0 + biencoder_shared_query_context_model ............ False + block_data_path ................................. None + calc_ft_timeouts ................................ False + calculate_per_token_loss ........................ False + check_for_large_grads ........................... False + check_for_nan_in_loss_and_grad .................. False + check_for_spiky_loss ............................ False + check_weight_hash_across_dp_replicas_interval ... None + ckpt_assume_constant_structure .................. False + ckpt_convert_format ............................. None + ckpt_convert_save ............................... None + ckpt_convert_update_legacy_dist_opt_format ...... False + ckpt_format ..................................... torch_dist + ckpt_fully_parallel_load ........................ False + ckpt_fully_parallel_save ........................ True + ckpt_fully_parallel_save_deprecated ............. False + ckpt_step ....................................... None + classes_fraction ................................ 1.0 + clip_grad ....................................... 1.0 + clone_scatter_output_in_embedding ............... True + config_logger_dir ............................... + consumed_train_samples .......................... 0 + consumed_valid_samples .......................... 0 + context_parallel_size ........................... 8 + cp_comm_type .................................... ['p2p'] + create_attention_mask_in_dataloader ............. True + cross_entropy_fusion_impl ....................... native + cross_entropy_loss_fusion ....................... False + cuda_graph_scope ................................ full + cuda_graph_warmup_steps ......................... 3 + data_args_path .................................. None + data_cache_path ................................. None + data_parallel_random_init ....................... False + data_parallel_sharding_strategy ................. no_shard + data_parallel_size .............................. 1 + data_path ....................................... None + data_per_class_fraction ......................... 1.0 + data_sharding ................................... True + dataloader_type ................................. single + ddp_average_in_collective ....................... False + ddp_bucket_size ................................. None + ddp_num_buckets ................................. None + ddp_pad_buckets_for_high_nccl_busbw ............. False + decoder_first_pipeline_num_layers ............... None + decoder_last_pipeline_num_layers ................ None + decoder_num_layers .............................. None + decoder_seq_length .............................. None + decoupled_lr .................................... None + decoupled_min_lr ................................ None + decrease_batch_size_if_needed ................... False + defer_embedding_wgrad_compute ................... False + deprecated_use_mcore_models ..................... 
False + deterministic_mode .............................. False + dino_bottleneck_size ............................ 256 + dino_freeze_last_layer .......................... 1 + dino_head_hidden_size ........................... 2048 + dino_local_crops_number ......................... 10 + dino_local_img_size ............................. 96 + dino_norm_last_layer ............................ False + dino_teacher_temp ............................... 0.07 + dino_warmup_teacher_temp ........................ 0.04 + dino_warmup_teacher_temp_epochs ................. 30 + disable_bf16_reduced_precision_matmul ........... False + disable_mamba_mem_eff_path ...................... False + disable_straggler_on_startup .................... False + dist_ckpt_format_deprecated ..................... None + dist_ckpt_strictness ............................ assume_ok_unexpected + distribute_saved_activations .................... False + distributed_backend ............................. nccl + distributed_timeout_minutes ..................... 10 + embedding_path .................................. None + empty_unused_memory_level ....................... 0 + enable_cuda_graph ............................... False + enable_ft_package ............................... False + enable_gloo_process_groups ...................... True + enable_msc ...................................... True + enable_one_logger ............................... True + encoder_num_layers .............................. 2 + encoder_pipeline_model_parallel_size ............ 0 + encoder_seq_length .............................. 40960 + encoder_tensor_model_parallel_size .............. 0 + end_weight_decay ................................ 0.1 + eod_mask_loss ................................... False + error_injection_rate ............................ 0 + error_injection_type ............................ transient_error + eval_interval ................................... 16 + eval_iters ...................................... 1 + evidence_data_path .............................. None + exit_duration_in_mins ........................... None + exit_interval ................................... None + exit_on_missing_checkpoint ...................... False + exit_signal_handler ............................. False + exp_avg_dtype ................................... torch.float32 + exp_avg_sq_dtype ................................ torch.float32 + expert_model_parallel_size ...................... 1 + expert_tensor_parallel_size ..................... 2 + external_cuda_graph ............................. False + ffn_hidden_size ................................. 16384 + finetune ........................................ False + first_last_layers_bf16 .......................... False + flash_decode .................................... False + fp16 ............................................ True + fp16_lm_cross_entropy ........................... False + fp32_residual_connection ........................ False + fp8 ............................................. None + fp8_amax_compute_algo ........................... most_recent + fp8_amax_history_len ............................ 1 + fp8_interval .................................... 1 + fp8_margin ...................................... 0 + fp8_param_gather ................................ False + fp8_recipe ...................................... delayed + fp8_wgrad ....................................... True + fsdp_double_buffer .............................. 
False + global_batch_size ............................... 1 + grad_reduce_in_bf16 ............................. False + gradient_accumulation_fusion .................... True + gradient_reduce_div_fusion ...................... True + group_query_attention ........................... True + head_lr_mult .................................... 1.0 + heterogeneous_layers_config_encoded_json ........ None + heterogeneous_layers_config_path ................ None + hidden_dropout .................................. 0.1 + hidden_size ..................................... 4096 + hierarchical_context_parallel_sizes ............. None + high_priority_stream_groups ..................... [] + hybrid_attention_ratio .......................... 0.0 + hybrid_mlp_ratio ................................ 0.0 + hybrid_override_pattern ......................... None + hysteresis ...................................... 2 + ict_head_size ................................... None + ict_load ........................................ None + img_h ........................................... 224 + img_w ........................................... 224 + indexer_batch_size .............................. 128 + indexer_log_interval ............................ 1000 + inference_batch_times_seqlen_threshold .......... -1 + inference_dynamic_batching ...................... False + inference_dynamic_batching_buffer_guaranteed_fraction 0.2 + inference_dynamic_batching_buffer_overflow_factor None + inference_dynamic_batching_buffer_size_gb ....... 40.0 + inference_dynamic_batching_chunk_size ........... 256 + inference_dynamic_batching_max_requests_override None + inference_dynamic_batching_max_tokens_override .. None + inference_max_batch_size ........................ 8 + inference_max_seq_length ........................ 2560 + inference_rng_tracker ........................... False + init_method_std ................................. 0.02 + init_method_xavier_uniform ...................... False + init_model_with_meta_device ..................... False + initial_loss_scale .............................. 4294967296 + inprocess_active_world_size ..................... 16 + inprocess_barrier_timeout ....................... 120 + inprocess_completion_timeout .................... 120 + inprocess_empty_cuda_cache ...................... False + inprocess_granularity ........................... node + inprocess_hard_timeout .......................... 90 + inprocess_heartbeat_interval .................... 30 + inprocess_heartbeat_timeout ..................... 60 + inprocess_last_call_wait ........................ 1 + inprocess_max_iterations ........................ None + inprocess_monitor_process_interval .............. 1.0 + inprocess_monitor_thread_interval ............... 1.0 + inprocess_progress_watchdog_interval ............ 1.0 + inprocess_restart ............................... False + inprocess_soft_timeout .......................... 60 + inprocess_termination_grace_time ................ 1 + is_hybrid_model ................................. False + iter_per_epoch .................................. 1250 + iterations_to_skip .............................. [] + keep_fp8_transpose_cache_when_using_custom_fsdp . False + kv_channels ..................................... 64 + kv_lora_rank .................................... 32 + lazy_mpu_init ................................... None + load ............................................ gpt-checkpoint + load_model_opt_format ........................... 
False + local_rank ...................................... 0 + log_interval .................................... 1 + log_loss_scale_to_tensorboard ................... True + log_memory_to_tensorboard ....................... False + log_num_zeros_in_grad ........................... False + log_params_norm ................................. False + log_progress .................................... False + log_straggler ................................... False + log_throughput .................................. False + log_timers_to_tensorboard ....................... False + log_validation_ppl_to_tensorboard ............... False + log_world_size_to_tensorboard ................... False + logging_level ................................... 0 + loss_scale ...................................... None + loss_scale_window ............................... 1000 + lr .............................................. 0.0005 + lr_decay_iters .................................. 150000 + lr_decay_samples ................................ None + lr_decay_style .................................. cosine + lr_warmup_fraction .............................. None + lr_warmup_init .................................. 0.0 + lr_warmup_iters ................................. 2 + lr_warmup_samples ............................... 0 + lr_wsd_decay_iters .............................. None + lr_wsd_decay_samples ............................ None + lr_wsd_decay_style .............................. exponential + main_grads_dtype ................................ torch.float32 + main_params_dtype ............................... torch.float32 + make_vocab_size_divisible_by .................... 128 + mamba_head_dim .................................. 64 + mamba_num_groups ................................ 8 + mamba_num_heads ................................. None + mamba_state_dim ................................. 128 + manual_gc ....................................... False + manual_gc_eval .................................. True + manual_gc_interval .............................. 0 + mask_factor ..................................... 1.0 + mask_prob ....................................... 0.15 + mask_type ....................................... random + masked_softmax_fusion ........................... True + max_position_embeddings ......................... 40960 + max_tokens_to_oom ............................... 12000 + memory_snapshot_path ............................ snapshot.pickle + merge_file ...................................... merges.txt + micro_batch_size ................................ 1 + microbatch_group_size_per_vp_stage .............. None + mid_level_dataset_surplus ....................... 0.005 + min_loss_scale .................................. 1.0 + min_lr .......................................... 0.0 + mlp_chunks_for_prefill .......................... 1 + mmap_bin_files .................................. True + mock_data ....................................... True + moe_apply_probs_on_input ........................ False + moe_aux_loss_coeff .............................. 0.0 + moe_enable_deepep ............................... False + moe_expert_capacity_factor ...................... None + moe_extended_tp ................................. False + moe_ffn_hidden_size ............................. None + moe_grouped_gemm ................................ False + moe_input_jitter_eps ............................ None + moe_layer_freq .................................. 
1 + moe_layer_recompute ............................. False + moe_pad_expert_input_to_capacity ................ False + moe_per_layer_logging ........................... False + moe_permute_fusion .............................. False + moe_router_bias_update_rate ..................... 0.001 + moe_router_dtype ................................ None + moe_router_enable_expert_bias ................... False + moe_router_force_load_balancing ................. False + moe_router_group_topk ........................... None + moe_router_load_balancing_type .................. aux_loss + moe_router_num_groups ........................... None + moe_router_padding_for_fp8 ...................... False + moe_router_pre_softmax .......................... False + moe_router_score_function ....................... softmax + moe_router_topk ................................. 2 + moe_router_topk_scaling_factor .................. None + moe_shared_expert_intermediate_size ............. None + moe_shared_expert_overlap ....................... False + moe_token_dispatcher_type ....................... allgather + moe_token_drop_policy ........................... probs + moe_use_legacy_grouped_gemm ..................... False + moe_use_upcycling ............................... False + moe_z_loss_coeff ................................ None + mrope_section ................................... None + mscale .......................................... 1.0 + mscale_all_dim .................................. 1.0 + mtp_loss_scaling_factor ......................... 0.1 + mtp_num_layers .................................. None + multi_latent_attention .......................... False + nccl_all_reduce_for_prefill ..................... False + nccl_communicator_config_path ................... None + nccl_ub ......................................... False + no_load_optim ................................... None + no_load_rng ..................................... None + no_persist_layer_norm ........................... False + no_rope_freq .................................... None + no_save_optim ................................... None + no_save_rng ..................................... None + non_persistent_ckpt_type ........................ None + non_persistent_global_ckpt_dir .................. None + non_persistent_local_ckpt_algo .................. fully_parallel + non_persistent_local_ckpt_dir ................... None + non_persistent_save_interval .................... None + norm_epsilon .................................... 1e-05 + normalization ................................... LayerNorm + num_attention_heads ............................. 64 + num_channels .................................... 3 + num_classes ..................................... 1000 + num_dataset_builder_threads ..................... 1 + num_distributed_optimizer_instances ............. 1 + num_experts ..................................... None + num_layers ...................................... 2 + num_layers_at_end_in_bf16 ....................... 1 + num_layers_at_start_in_bf16 ..................... 1 + num_layers_per_virtual_pipeline_stage ........... None + num_query_groups ................................ 16 + num_virtual_stages_per_pipeline_rank ............ None + num_workers ..................................... 2 + object_storage_cache_path ....................... None + one_logger_async ................................ False + one_logger_project .............................. megatron-lm + one_logger_run_name ............................. 
None + onnx_safe ....................................... None + openai_gelu ..................................... False + optimizer ....................................... adam + optimizer_cpu_offload ........................... False + optimizer_offload_fraction ...................... 1.0 + output_bert_embeddings .......................... False + overlap_cpu_optimizer_d2h_h2d ................... False + overlap_grad_reduce ............................. False + overlap_p2p_comm ................................ False + overlap_p2p_comm_warmup_flush ................... False + overlap_param_gather ............................ False + overlap_param_gather_with_optimizer_step ........ False + override_opt_param_scheduler .................... False + params_dtype .................................... torch.float16 + patch_dim ....................................... 16 + per_split_data_args_path ........................ None + perform_initialization .......................... True + pin_cpu_grads ................................... True + pin_cpu_params .................................. True + pipeline_model_parallel_comm_backend ............ None + pipeline_model_parallel_size .................... 1 + pipeline_model_parallel_split_rank .............. None + position_embedding_type ......................... learned_absolute + pretrained_checkpoint ........................... None + profile ......................................... False + profile_ranks ................................... [0] + profile_step_end ................................ 12 + profile_step_start .............................. 10 + q_lora_rank ..................................... None + qk_head_dim ..................................... 128 + qk_l2_norm ...................................... False + qk_layernorm .................................... False + qk_pos_emb_head_dim ............................. 64 + query_in_block_prob ............................. 0.1 + rampup_batch_size ............................... None + rank ............................................ 0 + recompute_granularity ........................... None + recompute_method ................................ None + recompute_modules ............................... None + recompute_num_layers ............................ None + record_memory_history ........................... False + relative_attention_max_distance ................. 128 + relative_attention_num_buckets .................. 32 + replication ..................................... False + replication_factor .............................. 2 + replication_jump ................................ None + rerun_mode ...................................... disabled + reset_attention_mask ............................ False + reset_position_ids .............................. False + result_rejected_tracker_filename ................ None + retriever_report_topk_accuracies ................ [] + retriever_score_scaling ......................... False + retriever_seq_length ............................ 256 + retro_add_retriever ............................. False + retro_attention_gate ............................ 1 + retro_cyclic_train_iters ........................ None + retro_encoder_attention_dropout ................. 0.1 + retro_encoder_hidden_dropout .................... 0.1 + retro_encoder_layers ............................ 2 + retro_num_neighbors ............................. 2 + retro_num_retrieved_chunks ...................... 2 + retro_project_dir ............................... 
None + retro_verify_neighbor_count ..................... True + rope_scaling_factor ............................. 8.0 + rotary_base ..................................... 10000 + rotary_interleaved .............................. False + rotary_percent .................................. 1.0 + rotary_scaling_factor ........................... 1.0 + rotary_seq_len_interpolation_factor ............. None + run_workload_inspector_server ................... False + sample_rate ..................................... 1.0 + save ............................................ gpt-checkpoint + save_interval ................................... 16 + scatter_gather_tensors_in_pipeline .............. True + seed ............................................ 1234 + seq_length ...................................... 40960 + sequence_parallel ............................... False + sgd_momentum .................................... 0.9 + short_seq_prob .................................. 0.1 + skip_train ...................................... False + skipped_train_samples ........................... 0 + spec ............................................ None + split ........................................... None + squared_relu .................................... False + start_weight_decay .............................. 0.1 + straggler_ctrlr_port ............................ 65535 + straggler_minmax_count .......................... 1 + suggested_communication_unit_size ............... None + swiglu .......................................... False + swin_backbone_type .............................. tiny + symmetric_ar_type ............................... None + te_rng_tracker .................................. False + tensor_model_parallel_size ...................... 2 + tensorboard_dir ................................. tensorboard-logs/ + tensorboard_log_interval ........................ 1 + tensorboard_queue_size .......................... 1000 + test_data_path .................................. None + test_mode ....................................... False + tiktoken_num_special_tokens ..................... 1000 + tiktoken_pattern ................................ None + tiktoken_special_tokens ......................... None + timing_log_level ................................ 0 + timing_log_option ............................... minmax + titles_data_path ................................ None + tokenizer_model ................................. None + tokenizer_type .................................. GPT2BPETokenizer + torch_fsdp2_reshard_after_forward ............... True + tp_comm_bootstrap_backend ....................... nccl + tp_comm_bulk_dgrad .............................. True + tp_comm_bulk_wgrad .............................. True + tp_comm_overlap ................................. False + tp_comm_overlap_ag .............................. True + tp_comm_overlap_cfg ............................. None + tp_comm_overlap_rs .............................. True + tp_comm_overlap_rs_dgrad ........................ False + tp_comm_split_ag ................................ True + tp_comm_split_rs ................................ True + train_data_path ................................. None + train_iters ..................................... 10 + train_samples ................................... None + train_sync_interval ............................. None + transformer_impl ................................ transformer_engine + transformer_pipeline_model_parallel_size ........ 
1 + untie_embeddings_and_output_weights ............. False + use_checkpoint_args ............................. False + use_checkpoint_opt_param_scheduler .............. False + use_cpu_initialization .......................... None + use_custom_fsdp ................................. False + use_dist_ckpt ................................... True + use_dist_ckpt_deprecated ........................ False + use_distributed_optimizer ....................... False + use_flash_attn .................................. False + use_legacy_models ............................... False + use_mp_args_from_checkpoint_args ................ False + use_one_sent_docs ............................... False + use_persistent_ckpt_worker ...................... False + use_precision_aware_optimizer ................... False + use_pytorch_profiler ............................ False + use_ring_exchange_p2p ........................... False + use_rope_scaling ................................ False + use_rotary_position_embeddings .................. False + use_sharp ....................................... False + use_tokenizer_model_from_checkpoint_args ........ True + use_torch_fsdp2 ................................. False + use_torch_optimizer_for_cpu_offload ............. False + use_tp_pp_dp_mapping ............................ False + v_head_dim ...................................... 128 + valid_data_path ................................. None + variable_seq_lengths ............................ False + virtual_pipeline_model_parallel_size ............ None + vision_backbone_type ............................ vit + vision_pretraining .............................. False + vision_pretraining_type ......................... classify + vocab_extra_ids ................................. 0 + vocab_file ...................................... vocab.json + vocab_size ...................................... None + wandb_exp_name .................................. + wandb_project ................................... + wandb_save_dir .................................. + weight_decay .................................... 0.1 + weight_decay_incr_style ......................... constant + wgrad_deferral_limit ............................ 0 + world_size ...................................... 16 + yaml_cfg ........................................ None +-------------------- end of arguments --------------------- +INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1 +> building GPT2BPETokenizer tokenizer ... + > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) +INFO:megatron.training.initialize:Setting logging level to 0 +WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED +> initializing torch distributed ... +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +INFO:megatron.training.initialize:Setting logging level to 0 +> initialized tensor model parallel with size 2 +> initialized pipeline model parallel with size 1 +> setting random seeds to 1234 ... +> compiling dataset index builder ... 
+make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
+make: Nothing to be done for 'default'.
+make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
+>>> done with dataset index builder. Compilation time: 0.051 seconds
+WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
+> compiling and loading fused kernels ...
+>>> done with compiling and loading fused kernels. Compilation time: 2.386 seconds
+time to initialize megatron (seconds): 8.932
+[after megatron is initialized] datetime: 2025-06-21 22:07:18
+building GPT model ...
+>>> embedding
+>>> decoder
+>>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
[the '>>> embedding / >>> decoder / >>> output_layer' banner and parameter count above were printed once per rank, partly interleaved across processes; all 16 ranks report 447297536 parameters, 8 on tensor-parallel rank (0, 0) and 8 on rank (1, 0)]
+INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1 +Params for bucket 1 (447297536 elements, 447297536 padded size): + module.decoder.final_layernorm.bias + module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.1.self_attention.linear_qkv.bias + module.embedding.word_embeddings.weight + module.decoder.layers.1.mlp.linear_fc1.weight + module.decoder.layers.0.mlp.linear_fc2.weight + module.decoder.layers.0.mlp.linear_fc1.bias + module.decoder.final_layernorm.weight + module.decoder.layers.1.mlp.linear_fc2.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.bias + module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight + module.decoder.layers.1.mlp.linear_fc1.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.weight + module.embedding.position_embeddings.weight + module.decoder.layers.1.self_attention.linear_qkv.weight + module.decoder.layers.1.self_attention.linear_proj.weight + module.decoder.layers.0.mlp.linear_fc2.bias + module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight + module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias + module.decoder.layers.0.self_attention.linear_proj.bias + module.decoder.layers.1.mlp.linear_fc2.weight + module.decoder.layers.1.self_attention.linear_proj.bias + module.decoder.layers.0.mlp.linear_fc1.weight + module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias + module.decoder.layers.0.self_attention.linear_qkv.weight +INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=, config_logger_dir='') +INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine +WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt + will not load any checkpoints and will start from random +(min, max) time across ranks (ms): + load-checkpoint ................................: (2.63, 3.38) +[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:07:20 +> building train, validation, and test datasets ... 
+ > datasets target sizes (minimum size): + train: 10 + validation: 1 + test: 1 +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True +INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)] +> building train, validation, and test datasets for GPT ... +INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=40960, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None) +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.006135 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1664 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001764 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1664 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices +DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False +WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None +DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001492 seconds +INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1667 +INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1 +> finished creating GPT datasets ... +[after dataloaders are built] datetime: 2025-06-21 22:07:21 +done with setup ... +training ... +(min, max) time across ranks (ms): + model-and-optimizer-setup ......................: (2406.24, 2429.68) + train/valid/test-data-iterators-setup ..........: (16.97, 160.51) +Setting rerun_state_machine.current_iteration to 0... +[before the start of training step] datetime: 2025-06-21 22:07:21 +WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 800.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 134.84 GiB is free. Including non-PyTorch memory, this process has 4.96 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 134.84 GiB is free. Including non-PyTorch memory, this process has 4.96 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
[the same CUDA out-of-memory warning and torch.OutOfMemoryError traceback (800.00 GiB requested by torch.ones at pretrain_gpt_profile.py:226 in setup_batches) was emitted by every one of the 16 ranks, i.e. GPUs 0-7 on both nodes; the copies differ only in the GPU index and in the per-process memory-in-use figure (4.96-4.98 GiB)]
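
A rough way to see why the allocation at pretrain_gpt_profile.py:226 can reach 800 GiB: the tensor built by torch.ones in setup_batches is a dense [batch, 1, seq, seq] attention mask, and with seq_length 40960 (from the argument dump above) its size grows quadratically in the sequence length. The sketch below is only an estimate under assumed batch sizes and dtype; the log does not record the actual shape or dtype of the failing allocation.

import torch

def dense_mask_gib(batch: int, seq_len: int, dtype: torch.dtype = torch.float32) -> float:
    # GiB required for a dense [batch, 1, seq_len, seq_len] mask of the given dtype.
    elem_size = torch.empty(0, dtype=dtype).element_size()
    return batch * seq_len * seq_len * elem_size / 2**30

seq = 40960                          # seq_length from the arguments above
for batch in (1, 8):                 # assumed micro-batch sizes, for illustration only
    print(f"batch={batch}, seq={seq}: ~{dense_mask_gib(batch, seq):.1f} GiB (fp32)")
# batch=1 -> ~6.2 GiB, batch=8 -> 50.0 GiB; the 800 GiB request in the log therefore
# implies a larger effective batch/shape or a wider dtype than assumed here.

Note that the PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True hint in the warning only mitigates fragmentation of memory already reserved by the allocator; it cannot make a single 800 GiB tensor fit on a 139.81 GiB device. The practical levers for this run would be a shorter seq_length, avoiding a dense materialized mask altogether (for example a fused causal-attention path such as use_flash_attn, which is False in the arguments above), or allocating only the per-context-parallel-rank slice of the mask instead of the full [batch, 1, seq, seq] tensor.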