Upload folder using huggingface_hub
- .gitattributes +1 -0
- attnserver.run_attnserver.slurm.sh.343207.out.log +249 -0
- attnserver.run_attnserver.slurm.sh.343213.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343213.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343214.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343214.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343215.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343215.out.log +37 -0
- attnserver.run_attnserver.slurm.sh.343225.err.log +143 -0
- attnserver.run_attnserver.slurm.sh.343225.out.log +1185 -0
- attnserver.run_attnserver.slurm.sh.343226.err.log +71 -0
- attnserver.run_attnserver.slurm.sh.343226.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343237.err.log +352 -0
- attnserver.run_attnserver.slurm.sh.343237.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343238.err.log +613 -0
- attnserver.run_attnserver.slurm.sh.343238.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343239.err.log +93 -0
- attnserver.run_attnserver.slurm.sh.343239.out.log +19 -0
- attnserver.run_attnserver.slurm.sh.343240.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343240.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343243.err.log +199 -0
- attnserver.run_attnserver.slurm.sh.343243.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343244.err.log +430 -0
- attnserver.run_attnserver.slurm.sh.343244.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343245.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343245.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343246.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343246.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343247.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343247.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343248.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343248.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343261.err.log +202 -0
- attnserver.run_attnserver.slurm.sh.343261.out.log +1507 -0
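The commit title indicates these logs were pushed with `huggingface_hub`'s `upload_folder`. As a point of reference, a minimal sketch of that call follows; the local path and repo id are placeholders, since the actual repository is not shown here:

```python
from huggingface_hub import HfApi

api = HfApi()
# Push every file in the local folder as a single commit. Files matched by the
# filter=lfs rules in .gitattributes are stored through Git LFS automatically.
api.upload_folder(
    folder_path="./logs",                # hypothetical local folder of *.log files
    repo_id="user/attnserver-logs",      # hypothetical repo id
    repo_type="dataset",
    commit_message="Upload folder using huggingface_hub",
)
```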
.gitattributes
CHANGED
@@ -63,3 +63,4 @@ attnserver.run_attnserver.slurm.sh.343192.err.log filter=lfs diff=lfs merge=lfs
 attnserver.run_attnserver.slurm.sh.343194.err.log filter=lfs diff=lfs merge=lfs -text
 attnserver.run_attnserver.slurm.sh.343196.err.log filter=lfs diff=lfs merge=lfs -text
 attnserver.run_attnserver.slurm.sh.343205.err.log filter=lfs diff=lfs merge=lfs -text
+attnserver.run_attnserver.slurm.sh.343215.err.log filter=lfs diff=lfs merge=lfs -text
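Each of these attribute lines is a standard Git LFS rule: `filter=lfs diff=lfs merge=lfs` routes the matched log through the LFS clean/smudge filters, and `-text` disables newline normalization so the file is stored byte-for-byte. The added entry extends the existing pattern to the 343215 error log, presumably because it crossed the size threshold that huggingface_hub auto-tracks with LFS.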
attnserver.run_attnserver.slurm.sh.343207.out.log
CHANGED
@@ -19372,3 +19372,252 @@ batch tensor after cp: position_ids torch.Size([1, 131072])
 Start exporting trace 1
 Done exporting trace 1
 [2025-06-21 21:59:22] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 41689.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+Start exporting trace 2
+Done exporting trace 2
+[2025-06-21 22:02:29] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 187533.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+Start exporting trace 3
+Done exporting trace 3
+[2025-06-21 22:04:53] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 144038.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+batch tensor: tokens torch.Size([1, 131072])
+batch tensor: labels torch.Size([1, 131072])
+batch tensor: loss_mask torch.Size([1, 131072])
+batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([1, 131072])
+batch tensor after cp: tokens torch.Size([1, 131072])
+batch tensor after cp: labels torch.Size([1, 131072])
+batch tensor after cp: loss_mask torch.Size([1, 131072])
+batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+batch tensor after cp: position_ids torch.Size([1, 131072])
+Start exporting trace 4
+Done exporting trace 4
+[2025-06-21 22:06:58] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 124682.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
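The iteration lines in this hunk show fp16 dynamic loss scaling at work: every logged iteration reports one skipped step, and the loss scale halves each time (2147483648.0 → 1073741824.0 → 536870912.0 → 268435456.0). A minimal sketch of that halving-on-overflow rule follows; it illustrates the backoff visible in the log and is not Megatron-LM's actual scaler implementation:

```python
class DynamicLossScaler:
    """Sketch of the fp16 loss-scale backoff implied by the log above."""

    def __init__(self, initial_scale: float = 2.0 ** 31):
        self.scale = initial_scale  # the log starts at 2147483648.0 == 2**31

    def update(self, found_overflow: bool) -> bool:
        """Return True if the optimizer step should be applied."""
        if found_overflow:
            self.scale /= 2.0  # 2147483648 -> 1073741824 -> 536870912 -> ...
            return False       # the iteration is counted as skipped
        return True
```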
attnserver.run_attnserver.slurm.sh.343213.err.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343213.out.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343214.err.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343214.out.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343215.err.log
CHANGED
The diff for this file is too large to render. See raw diff.
attnserver.run_attnserver.slurm.sh.343215.out.log
CHANGED
@@ -4148,3 +4148,40 @@ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks
 DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(103809024), 1), (np.int64(92274688), 2), (np.int64(92274688), 3), (np.int64(83919872), 4), (np.int64(83919872), 5), (np.int64(88080384), 6), (np.int64(88080384), 7)]
 DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(103809024), 1), (np.int64(92274688), 2), (np.int64(92274688), 3), (np.int64(83919872), 4), (np.int64(83919872), 5), (np.int64(88080384), 6), (np.int64(88080384), 7)]
 DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(103809024), 1), (np.int64(92274688), 2), (np.int64(92274688), 3), (np.int64(83919872), 4), (np.int64(83919872), 5), (np.int64(88080384), 6), (np.int64(88080384), 7)]
+Running ctx_length=2048, TP_SIZE=4, CP_SIZE=8, BATCH_SIZE=4
+Cleaning up checkpoint directory: gpt-checkpoint
+--------------------------------
+CTX_LENGTH: 2048
+TP_SIZE: 4
+CP_SIZE: 8
+CHECKPOINT_PATH: gpt-checkpoint
+PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+--------------------------------
+Cleaning up checkpoint directory: gpt-checkpoint
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+Cleaning up checkpoint directory: gpt-checkpoint
+--------------------------------
+CTX_LENGTH: 2048
+TP_SIZE: 4
+CP_SIZE: 8
+CHECKPOINT_PATH: gpt-checkpoint
+PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+--------------------------------
+CTX_LENGTH: 2048
+TP_SIZE: 4
+CP_SIZE: 8
+CHECKPOINT_PATH: gpt-checkpoint
+--------------------------------
+PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+--------------------------------
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+Cleaning up checkpoint directory: gpt-checkpoint
+--------------------------------
+CTX_LENGTH: 2048
+TP_SIZE: 4
+CP_SIZE: 8
+CHECKPOINT_PATH: gpt-checkpoint
+PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+--------------------------------
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
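The three `distribute_shards_to_ranks` DEBUG lines at the top of this hunk report (shard size in bytes, rank) pairs chosen by the fully parallel checkpoint save. The sketch below shows the kind of greedy load balancing such a distribution suggests; it is an illustration under that assumption, not Megatron-LM's actual algorithm, and the resulting rank assignment may differ from the log's:

```python
import heapq

def distribute_shards_to_ranks(shard_sizes, num_ranks):
    """Greedy sketch: assign each shard (largest first) to the least-loaded rank."""
    heap = [(0, rank) for rank in range(num_ranks)]  # (bytes assigned so far, rank)
    heapq.heapify(heap)
    assignment = []
    for size in sorted(shard_sizes, reverse=True):
        load, rank = heapq.heappop(heap)   # least-loaded rank so far
        assignment.append((size, rank))
        heapq.heappush(heap, (load + size, rank))
    return assignment

# Example with the eight shard sizes from the DEBUG line above.
print(distribute_shards_to_ranks(
    [207618048, 103809024, 92274688, 92274688, 83919872, 83919872, 88080384, 88080384], 8))
```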
attnserver.run_attnserver.slurm.sh.343225.err.log
CHANGED
@@ -2613,3 +2613,146 @@ W0621 21:51:26.238000 2242503 site-packages/torch/distributed/run.py:766] ******
   warnings.warn(
 /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
   warnings.warn(
+[rank0]: Traceback (most recent call last):
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+[rank0]:     pretrain(
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
+[rank0]:     save_checkpoint(
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
+[rank0]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
+[rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
+[rank0]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
+[rank0]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
+[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
+[rank0]:     async_calls.maybe_finalize_async_calls(blocking=True)
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
+[rank0]:     finalize_fn()
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
+[rank0]:     save_state_dict_async_finalize(*save_state_dict_ret)
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 243, in save_state_dict_async_finalize
+[rank0]:     storage_writer.finish(global_metadata, all_results)
+[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 483, in finish
+[rank0]:     super().finish(metadata, results)
+[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py", line 697, in finish
+[rank0]:     with self.fs.create_stream(tmp_path, "wb") as metadata_file:
+[rank0]:          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/contextlib.py", line 137, in __enter__
+[rank0]:     return next(self.gen)
+[rank0]:            ^^^^^^^^^^^^^^
+[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py", line 476, in create_stream
+[rank0]:     with path.open(mode) as stream:
+[rank0]:          ^^^^^^^^^^^^^^^
+[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/pathlib.py", line 1013, in open
+[rank0]:     return io.open(self, mode, buffering, encoding, errors, newline)
+[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+[rank0]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/.metadata.tmp'
+[rank0]:[W621 22:04:56.500419579 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+W0621 22:05:35.880000 2242503 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2242575 closing signal SIGTERM
+W0621 22:05:35.885000 2242503 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2242576 closing signal SIGTERM
+W0621 22:05:35.888000 2242503 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2242577 closing signal SIGTERM
+W0621 22:05:35.891000 2242503 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2242578 closing signal SIGTERM
+W0621 22:05:35.892000 2242503 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2242579 closing signal SIGTERM
+W0621 22:05:35.904000 2242503 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2242580 closing signal SIGTERM
+W0621 22:05:35.929000 2242503 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2242581 closing signal SIGTERM
+E0621 22:06:00.973000 2242503 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 2242574) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+Traceback (most recent call last):
+  File "<frozen runpy>", line 198, in _run_module_as_main
+  File "<frozen runpy>", line 88, in _run_code
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
+    main()
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
+    return arg(*args, **kwargs)
+           ^^^^^^^^^^^^^^^^^^^^
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
+    launch(args)
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
+    run(args)
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
+    elastic_launch(
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
+    return launch_agent(self._config, self._entrypoint, list(args))
+           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
+    raise ChildFailedError(
+torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
+============================================================
+./pretrain_gpt_profile.py FAILED
+------------------------------------------------------------
+Failures:
+  <NO_OTHER_FAILURES>
+------------------------------------------------------------
+Root Cause (first observed failure):
+[0]:
+  time      : 2025-06-21_22:05:35
+  host      : fs-mbz-gpu-768
+  rank      : 0 (local_rank: 0)
+  exitcode  : 1 (pid: 2242574)
+  error_file: <N/A>
+  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
+============================================================
++ set +x
++ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
++ export PROF_CTX_LENGTH=131072
++ PROF_CTX_LENGTH=131072
++ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L131072*tp4.cp2.bs1.json'
++ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L131072*tp4.cp2.bs1.json' ']'
++ echo 'Running ctx_length=131072, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=1'
++ srun bash ./attnserver.sh
++ which python3
++ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343225 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-768:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+and will be removed in future. Use torchrun.
+Note that --use-env is set by default in torchrun.
+If your script expects `--local-rank` argument to be set, please
+change it to read from `os.environ['LOCAL_RANK']` instead. See
+https://pytorch.org/docs/stable/distributed.html#launch-utility for
+further instructions
+
+    main()
+W0621 22:06:05.199000 2247188 site-packages/torch/distributed/run.py:766]
+W0621 22:06:05.199000 2247188 site-packages/torch/distributed/run.py:766] *****************************************
+W0621 22:06:05.199000 2247188 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+W0621 22:06:05.199000 2247188 site-packages/torch/distributed/run.py:766] *****************************************
+[rank6]:[W621 22:06:27.689693214 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank2]:[W621 22:06:27.689715067 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank4]:[W621 22:06:27.695981890 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank7]:[W621 22:06:27.696144469 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank3]:[W621 22:06:27.696472936 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank5]:[W621 22:06:27.697356152 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank1]:[W621 22:06:27.697670162 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank0]:[W621 22:06:27.853656033 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+  warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+  warnings.warn(
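The `FileNotFoundError` above is raised when the async checkpoint finalizer opens `gpt-checkpoint/iter_0000010/.metadata.tmp` for writing and the parent directory no longer exists. A plausible reading, not a confirmed diagnosis: the sweep script's "Cleaning up checkpoint directory: gpt-checkpoint" step (visible in the .out logs as the next configuration starts) removed the directory while the finalize was still pending. A hypothetical guard illustrating the failure mode follows; names and the path are taken from the traceback, and this is not a patch to torch.distributed.checkpoint:

```python
import os

def open_metadata_stream(tmp_path: str = "gpt-checkpoint/iter_0000010/.metadata.tmp"):
    # Recreate the iteration directory if something (e.g. a cleanup step racing
    # the async save) removed it; plain open() would raise FileNotFoundError.
    os.makedirs(os.path.dirname(tmp_path), exist_ok=True)
    return open(tmp_path, "wb")
```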
attnserver.run_attnserver.slurm.sh.343225.out.log
CHANGED
@@ -21443,3 +21443,1188 @@ batch tensor after cp: labels torch.Size([1, 49152])
 batch tensor after cp: loss_mask torch.Size([1, 49152])
 batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
 batch tensor after cp: position_ids torch.Size([1, 49152])
| 21446 |
+
batch tensor: tokens torch.Size([1, 98304])
|
| 21447 |
+
batch tensor: labels torch.Size([1, 98304])
|
| 21448 |
+
batch tensor: loss_mask torch.Size([1, 98304])
|
| 21449 |
+
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
|
| 21450 |
+
batch tensor: position_ids torch.Size([1, 98304])
|
| 21451 |
+
batch tensor after cp: tokens torch.Size([1, 49152])
|
| 21452 |
+
batch tensor after cp: labels torch.Size([1, 49152])
|
| 21453 |
+
batch tensor after cp: loss_mask torch.Size([1, 49152])
|
| 21454 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
|
| 21455 |
+
batch tensor after cp: position_ids torch.Size([1, 49152])
|
| 21456 |
+
Start exporting trace 6
|
| 21457 |
+
Done exporting trace 6
|
| 21458 |
+
[2025-06-21 21:59:39] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 75055.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+Start exporting trace 7
+Done exporting trace 7
+[2025-06-21 22:01:00] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 80849.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+Start exporting trace 8
+Done exporting trace 8
+[2025-06-21 22:02:04] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 64767.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+batch tensor: tokens torch.Size([1, 98304])
+batch tensor: labels torch.Size([1, 98304])
+batch tensor: loss_mask torch.Size([1, 98304])
+batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+batch tensor: position_ids torch.Size([1, 98304])
+batch tensor after cp: tokens torch.Size([1, 49152])
+batch tensor after cp: labels torch.Size([1, 49152])
+batch tensor after cp: loss_mask torch.Size([1, 49152])
+batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+batch tensor after cp: position_ids torch.Size([1, 49152])
+Start exporting trace 9
+Done exporting trace 9
+[2025-06-21 22:03:38] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 93378.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+[after training is done] datetime: 2025-06-21 22:03:38
+saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
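
The DEBUG lines that follow trace the asynchronous torch_dist save: each rank prepares its shard of the state dict, stages it device-to-host ("D2H and push"), forks a writer process with two workers ("thread_count: 2", "0 started" / "1 started"), and the training process only joins the writer later ("Async process join finished ... from forking"). A rough sketch of that D2H-then-fork pattern, with torch.save standing in for the sharded FileSystemWriterAsync and all names illustrative rather than Megatron's API:

    import multiprocessing as mp
    import torch

    def _write_worker(cpu_state, path):
        # Runs in the forked child; the trainer keeps stepping meanwhile.
        torch.save(cpu_state, path)

    def async_save(state_dict, path):
        # D2H first, so the forked writer never touches CUDA state.
        cpu_state = {k: v.detach().cpu() if torch.is_tensor(v) else v
                     for k, v in state_dict.items()}
        proc = mp.get_context("fork").Process(target=_write_worker,
                                              args=(cpu_state, path))
        proc.start()
        return proc                  # caller joins before the next save

    proc = async_save({"w": torch.randn(4, 4)}, "/tmp/ckpt.pt")
    proc.join()
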
+DEBUG:megatron.training.checkpointing:rank: 2, takes 0.07908964157104492 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 6, takes 0.07910752296447754 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 1, takes 0.07910609245300293 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 5, takes 0.07912921905517578 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 7, takes 0.07905840873718262 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 3, takes 0.07906150817871094 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 0, takes 0.0825493335723877 to prepare state dict for ckpt
+DEBUG:megatron.training.checkpointing:rank: 4, takes 0.14761972427368164 to prepare state dict for ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(417400832), 0), (np.int64(422576128), 1)]
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(417400832), 0), (np.int64(422576128), 1)]
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(417400832), 0), (np.int64(422576128), 1)]
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(417400832), 0), (np.int64(422576128), 1)]
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(417400832), 0), (np.int64(422576128), 1)]
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(417400832), 0), (np.int64(422576128), 1)]
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(1631584256), 0), (np.int64(1624655872), 1)]
+DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(1631584256), 0), (np.int64(1624655872), 1)]
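
The distribution lines above show the fully-parallel save assigning checkpoint shards to ranks so that write volume balances: each tuple is (total bytes, rank), and the per-rank totals land within about 1% of each other (417400832 vs 422576128). A greedy largest-shard-first assignment yields exactly this kind of balance; a sketch of the idea only, not the actual exchange_utils code:

    import heapq

    def distribute_shards_to_ranks(shard_sizes, num_ranks):
        # Assign each shard (largest first) to the least-loaded rank.
        heap = [(0, rank) for rank in range(num_ranks)]
        heapq.heapify(heap)
        totals = [0] * num_ranks
        for size in sorted(shard_sizes, reverse=True):
            load, rank = heapq.heappop(heap)
            totals[rank] += size
            heapq.heappush(heap, (load + size, rank))
        return totals

    shards = [200, 150, 120, 100, 90, 80, 50, 50]   # sizes in MB, illustrative
    print(distribute_shards_to_ranks(shards, 2))    # [430, 410]: near-equal, as in the log
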
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 37.76030206680298
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 37.76056408882141
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 37.760433197021484
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 37.760650873184204
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 37.760414123535156
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 37.76091694831848
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.028193950653076172
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting state dict save
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
+DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 37.775845527648926
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.07368707656860352
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543457.2063708
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.07391166687011719
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543457.2064087
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.07307553291320801
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543457.2065277
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.06815838813781738
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.06845450401306152
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543457.2066731
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543457.2066875
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.5367431640625e-05
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.0684041976928711
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543457.206915
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00032067298889160156
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 9.72747802734375e-05
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.0005800724029541016
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.0006380081176757812
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.0005955696105957031
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.02703571319580078
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543457.210689
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.001257181167602539
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.0778207778930664
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750543457.2154331
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.413459777832031e-05
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05709362030029297
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05720639228820801
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543457.2645063 rank: 6, write(async) time: 0.05809736251831055
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05711174011230469
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543457.2645593 rank: 3, write(async) time: 0.057645559310913086
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543457.2646434 rank: 7, write(async) time: 0.05811643600463867
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.057874202728271484
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05771374702453613
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543457.264992 rank: 2, write(async) time: 0.05830240249633789
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543457.265057 rank: 1, write(async) time: 0.05838346481323242
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05812430381774902
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543457.265517 rank: 5, write(async) time: 0.05914449691772461
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.07155466079711914
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543457.2875042 rank: 0, write(async) time: 0.07206296920776367
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.1144871711730957
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543457.3269794 rank: 4, write(async) time: 0.11631417274475098
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 1.8358230590820312e-05 to finish D2H
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 1.811981201171875e-05 to finish D2H
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 1.8835067749023438e-05 to finish D2H
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.0258786678314209 to schedule async ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.025389671325683594 to schedule async ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 0.026950359344482422 to schedule async ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 1.8358230590820312e-05 to finish D2H
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 1.6927719116210938e-05 to finish D2H
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 1.9311904907226562e-05 to finish D2H
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.02531123161315918 to schedule async ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.025214672088623047 to schedule async ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.025298357009887695 to schedule async ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 217210880, before: 1648107520, after: 1865318400
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 1.9311904907226562e-05 to finish D2H
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 218206208, before: 1620013056, after: 1838219264
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 218083328, before: 1636126720, after: 1854210048
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.038712263107299805 to schedule async ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 214204416, before: 1620013056, after: 1834217472
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 214429696, before: 1614884864, after: 1829314560
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 214933504, before: 1636126720, after: 1851060224
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543458.416601, rank: 6, write(sync,parallel): 0.9993560314178467
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 212500480, before: 1633570816, after: 1846071296
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 214368256, before: 1633574912, after: 1847943168
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543458.4318566, rank: 5, write(sync,parallel): 1.011610507965088
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 214933504, before: 1648107520, after: 1863041024
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 212475904, before: 1614884864, after: 1827360768
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543458.4699008, rank: 3, write(sync,parallel): 0.9958508014678955
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 212373504, before: 1617059840, after: 1829433344
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543458.481593, rank: 7, write(sync,parallel): 1.064600944519043
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543458.499954, rank: 2, write(sync,parallel): 1.0260334014892578
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 214335488, before: 1617063936, after: 1831399424
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543458.5581553, rank: 1, write(sync,parallel): 1.0821151733398438
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 6.389617919921875e-05 to finish D2H
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.04180765151977539 to schedule async ckpt
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.42s from forking
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.42s from forking
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, joining self.process
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.42s from forking
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.47s from forking
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.47s from forking
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.47s from forking
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 25505792, before: 1899667456, after: 1925173248
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 818884608, before: 1620201472, after: 2439086080
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 816308224, before: 1620213760, after: 2436521984
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543461.565737, rank: 4, write(sync,parallel): 3.2021517753601074
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 3.29s from forking
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 1615142912, before: 1899667456, after: 3514810368
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
+DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750543464.5586467, rank: 0, write(sync,parallel): 5.696138143539429
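
Rank 0 carries by far the largest shard set: its worker 0 reports 1615142912 bytes consumed over a 5.696 s write(sync,parallel), versus roughly 430 MB in about 1 s on the other ranks. Treating the logged "consumed" memory delta as a proxy for bytes written (an assumption; it is the writer's memory growth, not a byte counter), per-worker throughput works out to roughly 270 MiB/s:

    bytes_written = 1_615_142_912          # rank 0, worker 0 "consumed" above
    seconds = 5.696138143539429            # rank 0 write(sync,parallel) time
    print(bytes_written / seconds / 2**20) # ~270 MiB/s sustained
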
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 5.78s from forking
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543464.6032062, 1, gather: 5.740522146224976
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543464.6032386, 3, gather: 5.740435838699341
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543464.6033263, 6, gather: 5.7400062084198
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543464.603318, 5, gather: 5.740565776824951
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543464.6032891, 2, gather: 5.741005897521973
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543464.6035469, 7, gather: 5.740869045257568
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543464.6084177, 0, gather: 0.008083581924438477
+DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0105s
+DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750543464.6084206, 4, gather: 2.9939053058624268
+Running ctx_length=131072, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=1
+Cleaning up checkpoint directory: gpt-checkpoint
+--------------------------------
+CTX_LENGTH: 131072
+TP_SIZE: 4
+CP_SIZE: 2
+CHECKPOINT_PATH: gpt-checkpoint
+PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+--------------------------------
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+using world size: 8, data-parallel size: 1, context-parallel size: 2, hierarchical context-parallel sizes: None, tensor-model-parallel size: 4, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
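
The degenerate data-parallel size follows from the fixed GPU budget: 8 ranks divided by tensor-model-parallel 4 and context-parallel 2 leaves a single data-parallel replica. A quick check, including the per-rank share of the new 131072-token context:

    gpus, tp, cp, pp = 8, 4, 2, 1
    dp = gpus // (tp * cp * pp)
    assert dp == 1                     # matches "data-parallel size: 1" above
    print(131072 // cp)                # 65536 sequence positions per CP rank
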
+Number of virtual stages per pipeline stage: None
+WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
+using torch.float16 for parameters ...
+------------------------ arguments ------------------------
+  account_for_embedding_in_pipeline_split ......... False
+  account_for_loss_in_pipeline_split .............. False
+  accumulate_allreduce_grads_in_fp32 .............. False
+  adam_beta1 ...................................... 0.9
+  adam_beta2 ...................................... 0.999
+  adam_eps ........................................ 1e-08
+  add_bias_linear ................................. True
+  add_position_embedding .......................... True
+  add_qkv_bias .................................... True
+  adlr_autoresume ................................. False
+  adlr_autoresume_interval ........................ 1000
+  align_grad_reduce ............................... True
+  align_param_gather .............................. False
+  app_tag_run_name ................................ None
+  app_tag_run_version ............................. 0.0.0
+  apply_layernorm_1p .............................. False
+  apply_query_key_layer_scaling ................... False
+  apply_residual_connection_post_layernorm ........ False
+  apply_rope_fusion ............................... False
+  async_save ...................................... None
+  async_tensor_model_parallel_allreduce ........... True
+  attention_backend ............................... AttnBackend.auto
+  attention_dropout ............................... 0.1
+  attention_softmax_in_fp32 ....................... False
+  auto_detect_ckpt_format ......................... False
+  barrier_with_L1_time ............................ True
+  bert_binary_head ................................ True
+  bert_embedder_type .............................. megatron
+  bert_load ....................................... None
+  bf16 ............................................ False
+  bias_dropout_fusion ............................. True
+  bias_gelu_fusion ................................ True
+  bias_swiglu_fusion .............................. True
+  biencoder_projection_dim ........................ 0
+  biencoder_shared_query_context_model ............ False
+  block_data_path ................................. None
+  calc_ft_timeouts ................................ False
+  calculate_per_token_loss ........................ False
+  check_for_large_grads ........................... False
+  check_for_nan_in_loss_and_grad .................. False
+  check_for_spiky_loss ............................ False
+  check_weight_hash_across_dp_replicas_interval ... None
+  ckpt_assume_constant_structure .................. False
+  ckpt_convert_format ............................. None
+  ckpt_convert_save ............................... None
+  ckpt_convert_update_legacy_dist_opt_format ...... False
+  ckpt_format ..................................... torch_dist
+  ckpt_fully_parallel_load ........................ False
+  ckpt_fully_parallel_save ........................ True
+  ckpt_fully_parallel_save_deprecated ............. False
+  ckpt_step ....................................... None
+  classes_fraction ................................ 1.0
+  clip_grad ....................................... 1.0
+  clone_scatter_output_in_embedding ............... True
+  config_logger_dir ...............................
+  consumed_train_samples .......................... 0
+  consumed_valid_samples .......................... 0
+  context_parallel_size ........................... 2
+  cp_comm_type .................................... ['p2p']
+  create_attention_mask_in_dataloader ............. True
+  cross_entropy_fusion_impl ....................... native
+  cross_entropy_loss_fusion ....................... False
+  cuda_graph_scope ................................ full
+  cuda_graph_warmup_steps ......................... 3
+  data_args_path .................................. None
+  data_cache_path ................................. None
+  data_parallel_random_init ....................... False
+  data_parallel_sharding_strategy ................. no_shard
+  data_parallel_size .............................. 1
+  data_path ....................................... None
+  data_per_class_fraction ......................... 1.0
+  data_sharding ................................... True
+  dataloader_type ................................. single
+  ddp_average_in_collective ....................... False
+  ddp_bucket_size ................................. None
+  ddp_num_buckets ................................. None
+  ddp_pad_buckets_for_high_nccl_busbw ............. False
+  decoder_first_pipeline_num_layers ............... None
+  decoder_last_pipeline_num_layers ................ None
+  decoder_num_layers .............................. None
+  decoder_seq_length .............................. None
+  decoupled_lr .................................... None
+  decoupled_min_lr ................................ None
+  decrease_batch_size_if_needed ................... False
+  defer_embedding_wgrad_compute ................... False
+  deprecated_use_mcore_models ..................... False
+  deterministic_mode .............................. False
+  dino_bottleneck_size ............................ 256
+  dino_freeze_last_layer .......................... 1
+  dino_head_hidden_size ........................... 2048
+  dino_local_crops_number ......................... 10
+  dino_local_img_size ............................. 96
+  dino_norm_last_layer ............................ False
+  dino_teacher_temp ............................... 0.07
+  dino_warmup_teacher_temp ........................ 0.04
+  dino_warmup_teacher_temp_epochs ................. 30
+  disable_bf16_reduced_precision_matmul ........... False
+  disable_mamba_mem_eff_path ...................... False
+  disable_straggler_on_startup .................... False
+  dist_ckpt_format_deprecated ..................... None
+  dist_ckpt_strictness ............................ assume_ok_unexpected
+  distribute_saved_activations .................... False
+  distributed_backend ............................. nccl
+  distributed_timeout_minutes ..................... 10
+  embedding_path .................................. None
+  empty_unused_memory_level ....................... 0
+  enable_cuda_graph ............................... False
+  enable_ft_package ............................... False
+  enable_gloo_process_groups ...................... True
+  enable_msc ...................................... True
+  enable_one_logger ............................... True
+  encoder_num_layers .............................. 2
+  encoder_pipeline_model_parallel_size ............ 0
+  encoder_seq_length .............................. 131072
+  encoder_tensor_model_parallel_size .............. 0
+  end_weight_decay ................................ 0.1
+  eod_mask_loss ................................... False
+  error_injection_rate ............................ 0
+  error_injection_type ............................ transient_error
+  eval_interval ................................... 16
+  eval_iters ...................................... 1
+  evidence_data_path .............................. None
+  exit_duration_in_mins ........................... None
+  exit_interval ................................... None
+  exit_on_missing_checkpoint ...................... False
+  exit_signal_handler ............................. False
+  exp_avg_dtype ................................... torch.float32
+  exp_avg_sq_dtype ................................ torch.float32
+  expert_model_parallel_size ...................... 1
+  expert_tensor_parallel_size ..................... 4
+  external_cuda_graph ............................. False
+  ffn_hidden_size ................................. 16384
+  finetune ........................................ False
+  first_last_layers_bf16 .......................... False
+  flash_decode .................................... False
+  fp16 ............................................ True
+  fp16_lm_cross_entropy ........................... False
+  fp32_residual_connection ........................ False
+  fp8 ............................................. None
+  fp8_amax_compute_algo ........................... most_recent
+  fp8_amax_history_len ............................ 1
+  fp8_interval .................................... 1
+  fp8_margin ...................................... 0
+  fp8_param_gather ................................ False
+  fp8_recipe ...................................... delayed
+  fp8_wgrad ....................................... True
+  fsdp_double_buffer .............................. False
+  global_batch_size ............................... 1
+  grad_reduce_in_bf16 ............................. False
+  gradient_accumulation_fusion .................... True
+  gradient_reduce_div_fusion ...................... True
+  group_query_attention ........................... True
+  head_lr_mult .................................... 1.0
+  heterogeneous_layers_config_encoded_json ........ None
+  heterogeneous_layers_config_path ................ None
+  hidden_dropout .................................. 0.1
+  hidden_size ..................................... 4096
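
These dimensions follow the usual GPT sizing conventions: the ffn_hidden_size earlier in the dump is 4 x hidden_size (16384 = 4 x 4096), and if the kv_channels value of 64 further down follows Megatron's default of hidden_size / num_attention_heads, it implies 64 attention heads (an inference; num_attention_heads itself is not in this slice of the dump). A quick check:

    hidden_size = 4096
    assert 4 * hidden_size == 16384        # ffn_hidden_size from the dump
    num_heads = hidden_size // 64          # assuming kv_channels = hidden/heads
    print(num_heads)                       # 64
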
+  hierarchical_context_parallel_sizes ............. None
+  high_priority_stream_groups ..................... []
+  hybrid_attention_ratio .......................... 0.0
+  hybrid_mlp_ratio ................................ 0.0
+  hybrid_override_pattern ......................... None
+  hysteresis ...................................... 2
+  ict_head_size ................................... None
+  ict_load ........................................ None
+  img_h ........................................... 224
+  img_w ........................................... 224
+  indexer_batch_size .............................. 128
+  indexer_log_interval ............................ 1000
+  inference_batch_times_seqlen_threshold .......... -1
+  inference_dynamic_batching ...................... False
+  inference_dynamic_batching_buffer_guaranteed_fraction 0.2
+  inference_dynamic_batching_buffer_overflow_factor None
+  inference_dynamic_batching_buffer_size_gb ....... 40.0
+  inference_dynamic_batching_chunk_size ........... 256
+  inference_dynamic_batching_max_requests_override None
+  inference_dynamic_batching_max_tokens_override .. None
+  inference_max_batch_size ........................ 8
+  inference_max_seq_length ........................ 2560
+  inference_rng_tracker ........................... False
+  init_method_std ................................. 0.02
+  init_method_xavier_uniform ...................... False
+  init_model_with_meta_device ..................... False
+  initial_loss_scale .............................. 4294967296
+  inprocess_active_world_size ..................... 8
+  inprocess_barrier_timeout ....................... 120
+  inprocess_completion_timeout .................... 120
+  inprocess_empty_cuda_cache ...................... False
+  inprocess_granularity ........................... node
+  inprocess_hard_timeout .......................... 90
+  inprocess_heartbeat_interval .................... 30
+  inprocess_heartbeat_timeout ..................... 60
+  inprocess_last_call_wait ........................ 1
+  inprocess_max_iterations ........................ None
+  inprocess_monitor_process_interval .............. 1.0
+  inprocess_monitor_thread_interval ............... 1.0
+  inprocess_progress_watchdog_interval ............ 1.0
+  inprocess_restart ............................... False
+  inprocess_soft_timeout .......................... 60
+  inprocess_termination_grace_time ................ 1
+  is_hybrid_model ................................. False
+  iter_per_epoch .................................. 1250
+  iterations_to_skip .............................. []
+  keep_fp8_transpose_cache_when_using_custom_fsdp . False
+  kv_channels ..................................... 64
+  kv_lora_rank .................................... 32
+  lazy_mpu_init ................................... None
+  load ............................................ gpt-checkpoint
+  load_model_opt_format ........................... False
+  local_rank ...................................... 0
+  log_interval .................................... 1
+  log_loss_scale_to_tensorboard ................... True
+  log_memory_to_tensorboard ....................... False
+  log_num_zeros_in_grad ........................... False
+  log_params_norm ................................. False
+  log_progress .................................... False
+  log_straggler ................................... False
+  log_throughput .................................. False
+  log_timers_to_tensorboard ....................... False
+  log_validation_ppl_to_tensorboard ............... False
+  log_world_size_to_tensorboard ................... False
+  logging_level ................................... 0
+  loss_scale ...................................... None
+  loss_scale_window ............................... 1000
+  lr .............................................. 0.0005
+  lr_decay_iters .................................. 150000
+  lr_decay_samples ................................ None
+  lr_decay_style .................................. cosine
+  lr_warmup_fraction .............................. None
+  lr_warmup_init .................................. 0.0
+  lr_warmup_iters ................................. 2
+  lr_warmup_samples ............................... 0
+  lr_wsd_decay_iters .............................. None
+  lr_wsd_decay_samples ............................ None
+  lr_wsd_decay_style .............................. exponential
+  main_grads_dtype ................................ torch.float32
+  main_params_dtype ............................... torch.float32
+  make_vocab_size_divisible_by .................... 128
+  mamba_head_dim .................................. 64
+  mamba_num_groups ................................ 8
+  mamba_num_heads ................................. None
+  mamba_state_dim ................................. 128
+  manual_gc ....................................... False
+  manual_gc_eval .................................. True
+  manual_gc_interval .............................. 0
+  mask_factor ..................................... 1.0
+  mask_prob ....................................... 0.15
+  mask_type ....................................... random
+  masked_softmax_fusion ........................... True
+  max_position_embeddings ......................... 131072
+  max_tokens_to_oom ............................... 12000
+  memory_snapshot_path ............................ snapshot.pickle
+  merge_file ...................................... merges.txt
+  micro_batch_size ................................ 1
+  microbatch_group_size_per_vp_stage .............. None
+  mid_level_dataset_surplus ....................... 0.005
+  min_loss_scale .................................. 1.0
+  min_lr .......................................... 0.0
+  mlp_chunks_for_prefill .......................... 1
+  mmap_bin_files .................................. True
+  mock_data ....................................... True
+  moe_apply_probs_on_input ........................ False
+  moe_aux_loss_coeff .............................. 0.0
+  moe_enable_deepep ............................... False
+  moe_expert_capacity_factor ...................... None
+  moe_extended_tp ................................. False
+  moe_ffn_hidden_size ............................. None
+  moe_grouped_gemm ................................ False
+
moe_input_jitter_eps ............................ None
|
| 22187 |
+
moe_layer_freq .................................. 1
|
| 22188 |
+
moe_layer_recompute ............................. False
|
| 22189 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 22190 |
+
moe_per_layer_logging ........................... False
|
| 22191 |
+
moe_permute_fusion .............................. False
|
| 22192 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 22193 |
+
moe_router_dtype ................................ None
|
| 22194 |
+
moe_router_enable_expert_bias ................... False
|
| 22195 |
+
moe_router_force_load_balancing ................. False
|
| 22196 |
+
moe_router_group_topk ........................... None
|
| 22197 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 22198 |
+
moe_router_num_groups ........................... None
|
| 22199 |
+
moe_router_padding_for_fp8 ...................... False
|
| 22200 |
+
moe_router_pre_softmax .......................... False
|
| 22201 |
+
moe_router_score_function ....................... softmax
|
| 22202 |
+
moe_router_topk ................................. 2
|
| 22203 |
+
moe_router_topk_scaling_factor .................. None
|
| 22204 |
+
moe_shared_expert_intermediate_size ............. None
|
| 22205 |
+
moe_shared_expert_overlap ....................... False
|
| 22206 |
+
moe_token_dispatcher_type ....................... allgather
|
| 22207 |
+
moe_token_drop_policy ........................... probs
|
| 22208 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 22209 |
+
moe_use_upcycling ............................... False
|
| 22210 |
+
moe_z_loss_coeff ................................ None
|
| 22211 |
+
mrope_section ................................... None
|
| 22212 |
+
mscale .......................................... 1.0
|
| 22213 |
+
mscale_all_dim .................................. 1.0
|
| 22214 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 22215 |
+
mtp_num_layers .................................. None
|
| 22216 |
+
multi_latent_attention .......................... False
|
| 22217 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 22218 |
+
nccl_communicator_config_path ................... None
|
| 22219 |
+
nccl_ub ......................................... False
|
| 22220 |
+
no_load_optim ................................... None
|
| 22221 |
+
no_load_rng ..................................... None
|
| 22222 |
+
no_persist_layer_norm ........................... False
|
| 22223 |
+
no_rope_freq .................................... None
|
| 22224 |
+
no_save_optim ................................... None
|
| 22225 |
+
no_save_rng ..................................... None
|
| 22226 |
+
non_persistent_ckpt_type ........................ None
|
| 22227 |
+
non_persistent_global_ckpt_dir .................. None
|
| 22228 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 22229 |
+
non_persistent_local_ckpt_dir ................... None
|
| 22230 |
+
non_persistent_save_interval .................... None
|
| 22231 |
+
norm_epsilon .................................... 1e-05
|
| 22232 |
+
normalization ................................... LayerNorm
|
| 22233 |
+
num_attention_heads ............................. 64
|
| 22234 |
+
num_channels .................................... 3
|
| 22235 |
+
num_classes ..................................... 1000
|
| 22236 |
+
num_dataset_builder_threads ..................... 1
|
| 22237 |
+
num_distributed_optimizer_instances ............. 1
|
| 22238 |
+
num_experts ..................................... None
|
| 22239 |
+
num_layers ...................................... 2
|
| 22240 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 22241 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 22242 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 22243 |
+
num_query_groups ................................ 16
|
| 22244 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 22245 |
+
num_workers ..................................... 2
|
| 22246 |
+
object_storage_cache_path ....................... None
|
| 22247 |
+
one_logger_async ................................ False
|
| 22248 |
+
one_logger_project .............................. megatron-lm
|
| 22249 |
+
one_logger_run_name ............................. None
|
| 22250 |
+
onnx_safe ....................................... None
|
| 22251 |
+
openai_gelu ..................................... False
|
| 22252 |
+
optimizer ....................................... adam
|
| 22253 |
+
optimizer_cpu_offload ........................... False
|
| 22254 |
+
optimizer_offload_fraction ...................... 1.0
|
| 22255 |
+
output_bert_embeddings .......................... False
|
| 22256 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 22257 |
+
overlap_grad_reduce ............................. False
|
| 22258 |
+
overlap_p2p_comm ................................ False
|
| 22259 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 22260 |
+
overlap_param_gather ............................ False
|
| 22261 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 22262 |
+
override_opt_param_scheduler .................... False
|
| 22263 |
+
params_dtype .................................... torch.float16
|
| 22264 |
+
patch_dim ....................................... 16
|
| 22265 |
+
per_split_data_args_path ........................ None
|
| 22266 |
+
perform_initialization .......................... True
|
| 22267 |
+
pin_cpu_grads ................................... True
|
| 22268 |
+
pin_cpu_params .................................. True
|
| 22269 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 22270 |
+
pipeline_model_parallel_size .................... 1
|
| 22271 |
+
pipeline_model_parallel_split_rank .............. None
|
| 22272 |
+
position_embedding_type ......................... learned_absolute
|
| 22273 |
+
pretrained_checkpoint ........................... None
|
| 22274 |
+
profile ......................................... False
|
| 22275 |
+
profile_ranks ................................... [0]
|
| 22276 |
+
profile_step_end ................................ 12
|
| 22277 |
+
profile_step_start .............................. 10
|
| 22278 |
+
q_lora_rank ..................................... None
|
| 22279 |
+
qk_head_dim ..................................... 128
|
| 22280 |
+
qk_l2_norm ...................................... False
|
| 22281 |
+
qk_layernorm .................................... False
|
| 22282 |
+
qk_pos_emb_head_dim ............................. 64
|
| 22283 |
+
query_in_block_prob ............................. 0.1
|
| 22284 |
+
rampup_batch_size ............................... None
|
| 22285 |
+
rank ............................................ 0
|
| 22286 |
+
recompute_granularity ........................... None
|
| 22287 |
+
recompute_method ................................ None
|
| 22288 |
+
recompute_modules ............................... None
|
| 22289 |
+
recompute_num_layers ............................ None
|
| 22290 |
+
record_memory_history ........................... False
|
| 22291 |
+
relative_attention_max_distance ................. 128
|
| 22292 |
+
relative_attention_num_buckets .................. 32
|
| 22293 |
+
replication ..................................... False
|
| 22294 |
+
replication_factor .............................. 2
|
| 22295 |
+
replication_jump ................................ None
|
| 22296 |
+
rerun_mode ...................................... disabled
|
| 22297 |
+
reset_attention_mask ............................ False
|
| 22298 |
+
reset_position_ids .............................. False
|
| 22299 |
+
result_rejected_tracker_filename ................ None
|
| 22300 |
+
retriever_report_topk_accuracies ................ []
|
| 22301 |
+
retriever_score_scaling ......................... False
|
| 22302 |
+
retriever_seq_length ............................ 256
|
| 22303 |
+
retro_add_retriever ............................. False
|
| 22304 |
+
retro_attention_gate ............................ 1
|
| 22305 |
+
retro_cyclic_train_iters ........................ None
|
| 22306 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 22307 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 22308 |
+
retro_encoder_layers ............................ 2
|
| 22309 |
+
retro_num_neighbors ............................. 2
|
| 22310 |
+
retro_num_retrieved_chunks ...................... 2
|
| 22311 |
+
retro_project_dir ............................... None
|
| 22312 |
+
retro_verify_neighbor_count ..................... True
|
| 22313 |
+
rope_scaling_factor ............................. 8.0
|
| 22314 |
+
rotary_base ..................................... 10000
|
| 22315 |
+
rotary_interleaved .............................. False
|
| 22316 |
+
rotary_percent .................................. 1.0
|
| 22317 |
+
rotary_scaling_factor ........................... 1.0
|
| 22318 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 22319 |
+
run_workload_inspector_server ................... False
|
| 22320 |
+
sample_rate ..................................... 1.0
|
| 22321 |
+
save ............................................ gpt-checkpoint
|
| 22322 |
+
save_interval ................................... 16
|
| 22323 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 22324 |
+
seed ............................................ 1234
|
| 22325 |
+
seq_length ...................................... 131072
|
| 22326 |
+
sequence_parallel ............................... False
|
| 22327 |
+
sgd_momentum .................................... 0.9
|
| 22328 |
+
short_seq_prob .................................. 0.1
|
| 22329 |
+
skip_train ...................................... False
|
| 22330 |
+
skipped_train_samples ........................... 0
|
| 22331 |
+
spec ............................................ None
|
| 22332 |
+
split ........................................... None
|
| 22333 |
+
squared_relu .................................... False
|
| 22334 |
+
start_weight_decay .............................. 0.1
|
| 22335 |
+
straggler_ctrlr_port ............................ 65535
|
| 22336 |
+
straggler_minmax_count .......................... 1
|
| 22337 |
+
suggested_communication_unit_size ............... None
|
| 22338 |
+
swiglu .......................................... False
|
| 22339 |
+
swin_backbone_type .............................. tiny
|
| 22340 |
+
symmetric_ar_type ............................... None
|
| 22341 |
+
te_rng_tracker .................................. False
|
| 22342 |
+
tensor_model_parallel_size ...................... 4
|
| 22343 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 22344 |
+
tensorboard_log_interval ........................ 1
|
| 22345 |
+
tensorboard_queue_size .......................... 1000
|
| 22346 |
+
test_data_path .................................. None
|
| 22347 |
+
test_mode ....................................... False
|
| 22348 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 22349 |
+
tiktoken_pattern ................................ None
|
| 22350 |
+
tiktoken_special_tokens ......................... None
|
| 22351 |
+
timing_log_level ................................ 0
|
| 22352 |
+
timing_log_option ............................... minmax
|
| 22353 |
+
titles_data_path ................................ None
|
| 22354 |
+
tokenizer_model ................................. None
|
| 22355 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 22356 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 22357 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 22358 |
+
tp_comm_bulk_dgrad .............................. True
|
| 22359 |
+
tp_comm_bulk_wgrad .............................. True
|
| 22360 |
+
tp_comm_overlap ................................. False
|
| 22361 |
+
tp_comm_overlap_ag .............................. True
|
| 22362 |
+
tp_comm_overlap_cfg ............................. None
|
| 22363 |
+
tp_comm_overlap_rs .............................. True
|
| 22364 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 22365 |
+
tp_comm_split_ag ................................ True
|
| 22366 |
+
tp_comm_split_rs ................................ True
|
| 22367 |
+
train_data_path ................................. None
|
| 22368 |
+
train_iters ..................................... 10
|
| 22369 |
+
train_samples ................................... None
|
| 22370 |
+
train_sync_interval ............................. None
|
| 22371 |
+
transformer_impl ................................ transformer_engine
|
| 22372 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 22373 |
+
untie_embeddings_and_output_weights ............. False
|
| 22374 |
+
use_checkpoint_args ............................. False
|
| 22375 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 22376 |
+
use_cpu_initialization .......................... None
|
| 22377 |
+
use_custom_fsdp ................................. False
|
| 22378 |
+
use_dist_ckpt ................................... True
|
| 22379 |
+
use_dist_ckpt_deprecated ........................ False
|
| 22380 |
+
use_distributed_optimizer ....................... False
|
| 22381 |
+
use_flash_attn .................................. False
|
| 22382 |
+
use_legacy_models ............................... False
|
| 22383 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 22384 |
+
use_one_sent_docs ............................... False
|
| 22385 |
+
use_persistent_ckpt_worker ...................... False
|
| 22386 |
+
use_precision_aware_optimizer ................... False
|
| 22387 |
+
use_pytorch_profiler ............................ False
|
| 22388 |
+
use_ring_exchange_p2p ........................... False
|
| 22389 |
+
use_rope_scaling ................................ False
|
| 22390 |
+
use_rotary_position_embeddings .................. False
|
| 22391 |
+
use_sharp ....................................... False
|
| 22392 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 22393 |
+
use_torch_fsdp2 ................................. False
|
| 22394 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 22395 |
+
use_tp_pp_dp_mapping ............................ False
|
| 22396 |
+
v_head_dim ...................................... 128
|
| 22397 |
+
valid_data_path ................................. None
|
| 22398 |
+
variable_seq_lengths ............................ False
|
| 22399 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 22400 |
+
vision_backbone_type ............................ vit
|
| 22401 |
+
vision_pretraining .............................. False
|
| 22402 |
+
vision_pretraining_type ......................... classify
|
| 22403 |
+
vocab_extra_ids ................................. 0
|
| 22404 |
+
vocab_file ...................................... vocab.json
|
| 22405 |
+
vocab_size ...................................... None
|
| 22406 |
+
wandb_exp_name ..................................
|
| 22407 |
+
wandb_project ...................................
|
| 22408 |
+
wandb_save_dir ..................................
|
| 22409 |
+
weight_decay .................................... 0.1
|
| 22410 |
+
weight_decay_incr_style ......................... constant
|
| 22411 |
+
wgrad_deferral_limit ............................ 0
|
| 22412 |
+
world_size ...................................... 8
|
| 22413 |
+
yaml_cfg ........................................ None
|
| 22414 |
+
-------------------- end of arguments ---------------------
|
| 22415 |
+
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
> building GPT2BPETokenizer tokenizer ...
INFO:megatron.training.initialize:Setting logging level to 0
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> padded vocab (size: 50257) with 431 dummy tokens (new size: 50688)
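The padded-vocab line above is pure arithmetic: the raw GPT-2 vocab is rounded up to the next multiple of make_vocab_size_divisible_by (128) times tensor_model_parallel_size (4). A minimal standalone sketch (an illustrative helper, mirroring what Megatron does when padding the tokenizer vocab, not its actual code):

```python
# Round 50257 up to a multiple of 128 * 4 = 512.
def pad_vocab_size(orig_vocab_size: int, divisible_by: int, tp_size: int) -> int:
    multiple = divisible_by * tp_size  # 128 * 4 = 512
    return ((orig_vocab_size + multiple - 1) // multiple) * multiple

padded = pad_vocab_size(50257, 128, 4)
print(padded, padded - 50257)  # 50688 431 -- matching the log line above
```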
INFO:megatron.training.initialize:Setting logging level to 0
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
> initializing torch distributed ...
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> initialized tensor model parallel with size 4
> initialized pipeline model parallel with size 1
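For reference, the process-grid sizes logged above pin down the rest of the topology. A hedged back-of-the-envelope check (context-parallel size 2 is inferred from the per-rank batch shapes printed later in this log, not from these lines):

```python
# world_size=8, tensor-parallel=4, pipeline-parallel=1, context-parallel=2 (assumed).
world_size, tp, pp, cp = 8, 4, 1, 2
assert world_size % (tp * pp * cp) == 0
dp = world_size // (tp * pp * cp)
print(f"data-parallel size: {dp}")  # -> 1: a single data-parallel replica
```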
> setting random seeds to 1234 ...
> compiling dataset index builder ...
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
make: Nothing to be done for 'default'.
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.046 seconds
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
> compiling and loading fused kernels ...
>>> done with compiling and loading fused kernels. Compilation time: 2.242 seconds
time to initialize megatron (seconds): 7.855
[after megatron is initialized] datetime: 2025-06-21 22:06:34
building GPT model ...
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 676924416
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 676924416
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 676924416
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 676924416
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 676924416
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 676924416
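The repeated per-rank count of 676,924,416 parameters can be reproduced from the arguments above. The following is an assumption-laden back-of-the-envelope sanity check, not Megatron code (hidden=4096, layers=2, heads=64, num_query_groups=16, kv_channels=64, padded vocab=50688, seq=131072, TP=4, tied embeddings):

```python
h, layers, tp = 4096, 2, 4
vocab, seq = 50688, 131072
qkv_out = (64 + 2 * 16) * 64                     # fused QKV rows with GQA: 6144
emb = (vocab // tp) * h + seq * h                # word emb (TP-split) + position emb (replicated)
per_layer = (h * qkv_out // tp + qkv_out // tp   # QKV weight + bias (column-parallel)
             + h * h // tp + h                   # attn proj weight (row-parallel) + full bias
             + h * 4 * h // tp + 4 * h // tp     # MLP fc1 weight + bias (column-parallel)
             + 4 * h * h // tp + h               # MLP fc2 weight (row-parallel) + full bias
             + 4 * h)                            # two fused LayerNorms (weight + bias each)
total = emb + layers * per_layer + 2 * h         # + final LayerNorm
print(total)                                     # 676924416
```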
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
Params for bucket 1 (676924416 elements, 676924416 padded size):
module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
module.decoder.layers.0.mlp.linear_fc2.weight
module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
module.embedding.word_embeddings.weight
module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
module.decoder.layers.1.self_attention.linear_qkv.bias
module.decoder.layers.0.mlp.linear_fc2.bias
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
module.decoder.layers.0.self_attention.linear_qkv.bias
module.decoder.layers.0.self_attention.linear_proj.weight
module.decoder.final_layernorm.bias
module.decoder.layers.1.mlp.linear_fc1.weight
module.decoder.layers.0.mlp.linear_fc1.weight
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
module.decoder.layers.1.mlp.linear_fc2.bias
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
module.decoder.final_layernorm.weight
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
module.decoder.layers.0.self_attention.linear_proj.bias
module.decoder.layers.1.mlp.linear_fc1.bias
module.decoder.layers.0.mlp.linear_fc1.bias
module.decoder.layers.1.self_attention.linear_qkv.weight
module.decoder.layers.1.self_attention.linear_proj.weight
module.decoder.layers.0.self_attention.linear_qkv.weight
module.embedding.position_embeddings.weight
module.decoder.layers.1.mlp.linear_fc2.weight
module.decoder.layers.1.self_attention.linear_proj.bias
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14d42677eb10>, config_logger_dir='')
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
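The OptimizerConfig above enables dynamic fp16 loss scaling. A simplified sketch of the update rule it configures (not Megatron's DynamicGradScaler; hysteresis handling is elided): the scale starts at initial_loss_scale, halves on overflow down to min_loss_scale, and doubles after loss_scale_window consecutive overflow-free steps.

```python
scale = 2.0 ** 32          # initial_loss_scale = 4294967296
growth_interval = 1000     # loss_scale_window
good_steps = 0

def update_scale(found_overflow: bool) -> None:
    global scale, good_steps
    if found_overflow:
        scale = max(scale / 2.0, 1.0)  # min_loss_scale = 1.0; the iteration is skipped
        good_steps = 0
    else:
        good_steps += 1
        if good_steps == growth_interval:
            scale *= 2.0
            good_steps = 0

update_scale(True)
print(scale)  # 2147483648.0 after one overflowing (skipped) iteration
```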
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 676924416
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 676924416
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
will not load any checkpoints and will start from random
(min, max) time across ranks (ms):
load-checkpoint ................................: (2.95, 3.93)
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:06:42
> building train, validation, and test datasets ...
> datasets target sizes (minimum size):
train: 10
validation: 1
test: 1
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
> building train, validation, and test datasets for GPT ...
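The split_matrix line follows mechanically from split="1,1,1": normalize the weights and take consecutive cumulative intervals. An illustrative reconstruction (not the megatron.core.datasets implementation):

```python
from itertools import accumulate

def split_matrix(weights):
    total = sum(weights)
    bounds = [0.0] + [c / total for c in accumulate(weights)]
    return list(zip(bounds[:-1], bounds[1:]))

print(split_matrix([1, 1, 1]))
# [(0.0, 0.333...), (0.333..., 0.666...), (0.666..., 1.0)] -> train/valid/test thirds
```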
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=131072, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14d42671f050>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004888 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001661 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001424 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
> finished creating GPT datasets ...
[after dataloaders are built] datetime: 2025-06-21 22:06:42
done with setup ...
(min, max) time across ranks (ms):
model-and-optimizer-setup ......................: (8039.58, 8039.75)
train/valid/test-data-iterators-setup ..........: (21.56, 114.70)
training ...
Setting rerun_state_machine.current_iteration to 0...
[before the start of training step] datetime: 2025-06-21 22:06:42
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
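The shape pattern above -- sequence-length tensors halved from 131072 to 65536 while the attention mask keeps its full 131072 key dimension -- is the context-parallel (CP=2) shard of the batch. A simplified sketch of that slicing (Megatron's actual `get_batch_on_this_cp_rank` uses a load-balanced chunk assignment; contiguous halving and toy sizes are used here for clarity):

```python
import torch

def shard_for_cp(batch, cp_size, cp_rank, seq_dim=1):
    out = {}
    for name, t in batch.items():
        if name == "attention_mask":
            out[name] = t.chunk(cp_size, dim=2)[cp_rank]   # split the query dim only
        else:
            out[name] = t.chunk(cp_size, dim=seq_dim)[cp_rank]
    return out

batch = {
    "tokens": torch.zeros(1, 8, dtype=torch.long),               # stands in for [1, 131072]
    "attention_mask": torch.ones(1, 1, 8, 8, dtype=torch.bool),  # for [1, 1, 131072, 131072]
}
sharded = shard_for_cp(batch, cp_size=2, cp_rank=0)
print(sharded["tokens"].shape, sharded["attention_mask"].shape)
# [1, 4] and [1, 1, 4, 8] -- i.e. [1, 65536] and [1, 1, 65536, 131072] at full scale
```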
attnserver.run_attnserver.slurm.sh.343226.err.log
CHANGED
@@ -1297,3 +1297,74 @@ W0621 21:55:51.007000 1996275 site-packages/torch/distributed/run.py:766] ******
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
[rank2]:[W621 22:04:59.407544011 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank3]:[W621 22:04:59.608853506 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank7]:[W621 22:04:59.678669926 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank1]:[W621 22:04:59.700049367 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank6]:[W621 22:04:59.938906957 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank5]:[W621 22:05:00.259868129 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank0]:[W621 22:05:00.324680781 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank4]:[W621 22:05:00.380660785 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
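These shutdown warnings are benign but avoidable. A minimal sketch of the cleanup PyTorch is asking for at the end of a training script:

```python
import torch.distributed as dist

def shutdown_distributed() -> None:
    if dist.is_available() and dist.is_initialized():
        dist.destroy_process_group()  # silences the ProcessGroupNCCL exit warning
```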
+ set +x
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=81920
+ PROF_CTX_LENGTH=81920
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L81920*tp4.cp2.bs2.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L81920*tp4.cp2.bs2.json' ']'
+ echo 'Running ctx_length=81920, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=2'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343226 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-896:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 81920 --max-position-embeddings 81920 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
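The launcher still goes through the deprecated `torch.distributed.launch`. A sketch of the torchrun migration the warning points to, under the assumption that all other flags stay the same:

```python
# torchrun equivalent of the launch command above (illustrative, not from the log):
#
#   torchrun --nproc_per_node 8 --nnodes 1 --node_rank 0 \
#            --rdzv_id 343226 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-896:29500 \
#            ./pretrain_gpt_profile.py <same training flags as above>
#
# torchrun implies --use-env, so the script reads the local rank from the
# environment instead of parsing a --local-rank argument:
import os

local_rank = int(os.environ.get("LOCAL_RANK", "0"))  # set by torchrun per process
```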
W0621 22:05:16.304000 2000339 site-packages/torch/distributed/run.py:766]
W0621 22:05:16.304000 2000339 site-packages/torch/distributed/run.py:766] *****************************************
W0621 22:05:16.304000 2000339 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 22:05:16.304000 2000339 site-packages/torch/distributed/run.py:766] *****************************************
[rank7]:[W621 22:05:38.315413964 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank3]:[W621 22:05:38.315420577 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank1]:[W621 22:05:38.339507253 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank5]:[W621 22:05:38.339635072 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank6]:[W621 22:05:38.340067732 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank2]:[W621 22:05:38.341018027 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank4]:[W621 22:05:38.344861448 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank0]:[W621 22:05:38.496511209 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
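A hedged sketch of the fix these NCCL warnings suggest: bind each rank to its GPU and pass `device_id` to init_process_group (supported in recent PyTorch releases), so the rank-to-GPU mapping is known before the first collective. Assumes torchrun-style environment variables (LOCAL_RANK, MASTER_ADDR, ...) are present.

```python
import os

import torch
import torch.distributed as dist

local_rank = int(os.environ.get("LOCAL_RANK", "0"))
device = torch.device(f"cuda:{local_rank}")
torch.cuda.set_device(device)                          # fix the GPU for this process
dist.init_process_group(backend="nccl", device_id=device)
```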
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
attnserver.run_attnserver.slurm.sh.343226.out.log
CHANGED
The diff for this file is too large to render. See raw diff.
attnserver.run_attnserver.slurm.sh.343237.err.log
CHANGED
@@ -2118,3 +2118,355 @@ W0621 21:59:08.581000 1124226 site-packages/torch/distributed/run.py:766] ******
[rank13]:[W621 21:59:34.544805119 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank8]:[W621 21:59:34.671912210 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank0]:[W621 21:59:34.026347727 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
[message repeated by each of the 16 ranks]
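Acting on this deprecation only requires dropping the argument. A minimal sketch, assuming fp8 settings are instead supplied through the transformer config rather than this helper (the commented-out call is illustrative, not a line from this repo):

    from megatron.core.models.gpt.gpt_layer_specs import get_gpt_layer_with_transformer_engine_spec

    # Before (emits the UserWarning above):
    # spec = get_gpt_layer_with_transformer_engine_spec(fp8="hybrid")
    # After: drop the deprecated fp8 argument.
    spec = get_gpt_layer_with_transformer_engine_spec()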
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
[message repeated by each of the 16 ranks]
[rank0]: Traceback (most recent call last):
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank0]:     pretrain(
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
[rank0]:     save_checkpoint(
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
[rank0]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
[rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
[rank0]:     common_strategy.save_common(state_dict, checkpoint_dir)
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
[rank0]:     torch.save(common_state_dict, path)
[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
[rank0]:     with _open_zipfile_writer(f) as opened_zipfile:
[rank0]:          ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
[rank0]:     return container(name_or_buffer)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
[rank0]:     torch._C.PyTorchFileWriter(
[rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
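This RuntimeError is the root cause of the job's failure: torch.save cannot open a file under gpt-checkpoint/iter_0000010 because that directory is missing when the common state dict is written (note the --save path is relative, so it also depends on the job's working directory). A minimal sketch of the usual guard before torch.save; the path and payload are illustrative:

    import os
    import torch

    path = "gpt-checkpoint/iter_0000010/common.pt"  # illustrative path mirroring the traceback
    # Create the parent directory before PyTorchFileWriter tries to open the file.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    torch.save({"iteration": 10}, path)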
[rank0]:[W621 22:04:21.659783891 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
W0621 22:04:31.929000 849579 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 849652 closing signal SIGTERM
W0621 22:04:31.932000 849579 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 849653 closing signal SIGTERM
W0621 22:04:31.934000 849579 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 849654 closing signal SIGTERM
W0621 22:04:31.937000 849579 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 849655 closing signal SIGTERM
W0621 22:04:31.940000 849579 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 849656 closing signal SIGTERM
W0621 22:04:31.943000 849579 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 849657 closing signal SIGTERM
W0621 22:04:31.960000 849579 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 849658 closing signal SIGTERM
E0621 22:04:35.819000 849579 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 849651) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
    main()
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
    return arg(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
    launch(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
    run(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./pretrain_gpt_profile.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2025-06-21_22:04:31
  host      : fs-mbz-gpu-274
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 849651)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
+ set +x
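The summary reports error_file: <N/A>, and its traceback line points at elastic error recording. A minimal sketch of what that documentation asks for, wrapping the training script's entry point with the @record decorator from torch.distributed.elastic (the function body is illustrative):

    from torch.distributed.elastic.multiprocessing.errors import record

    @record  # writes the failing rank's traceback to an error file the summary can report
    def main():
        ...  # training entry point; in this script it would call pretrain(...)

    if __name__ == "__main__":
        main()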
W0621 22:04:36.170000 1124226 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1124296 closing signal SIGTERM
W0621 22:04:36.171000 1124226 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1124297 closing signal SIGTERM
W0621 22:04:36.176000 1124226 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1124298 closing signal SIGTERM
W0621 22:04:36.179000 1124226 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1124299 closing signal SIGTERM
W0621 22:04:36.182000 1124226 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1124300 closing signal SIGTERM
W0621 22:04:36.200000 1124226 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1124301 closing signal SIGTERM
W0621 22:04:36.204000 1124226 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1124302 closing signal SIGTERM
W0621 22:04:36.213000 1124226 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1124303 closing signal SIGTERM
[W621 22:04:39.022871617 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-476]:33082, remote=[fs-mbz-gpu-274]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14b0815785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14b06a45aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x14b06a45c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x14b06a45db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14b06a457ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14b06a457ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14b06a458f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x14b07978b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x14b078efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #17: <unknown function> + 0x94ac3 (0x14b082640ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #18: <unknown function> + 0x126850 (0x14b0826d2850 in /lib/x86_64-linux-gnu/libc.so.6)
W0621 22:04:39.114000 1124226 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-476_1124226_0' has failed to send a keep-alive heartbeat to the rendezvous '343237' due to an error of type RendezvousConnectionError.
[W621 22:04:39.362549495 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-476]:33082, remote=[fs-mbz-gpu-274]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
[frames #0-#8 identical to the preceding trace]
<omitting python frames>
frame #26: <unknown function> + 0x29d90 (0x14b0825d5d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x80 (0x14b0825d5e40 in /lib/x86_64-linux-gnu/libc.so.6)

W0621 22:04:39.457000 1124226 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-476_1124226_0' has failed to shutdown the rendezvous '343237' due to an error of type RendezvousConnectionError.
[W621 22:04:39.377055685 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-476]:33082, remote=[fs-mbz-gpu-274]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
[frames #0-#8 identical to the preceding trace]
<omitting python frames>
frame #26: <unknown function> + 0x29d90 (0x14b0825d5d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x80 (0x14b0825d5e40 in /lib/x86_64-linux-gnu/libc.so.6)

W0621 22:04:39.469000 1124226 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-476_1124226_0' has failed to shutdown the rendezvous '343237' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.distributed.DistNetworkError: failed to recv, got 0 bytes

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
    main()
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
    return arg(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
    launch(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
    run(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
    result = agent.run()
             ^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
    result = f(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
    result = self._invoke_run(role)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
    self._state_holder.sync()
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 437, in sync
    get_response = self._backend.get_state()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 75, in get_state
    base64_state: bytes = self._call_store("get", self._key)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
+ set +x
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=81920
+ PROF_CTX_LENGTH=81920
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L81920*tp2.cp8.bs1.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L81920*tp2.cp8.bs1.json' ']'
+ echo 'Running ctx_length=81920, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=1'
+ srun bash ./attnserver.sh
+ which python3
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343237 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 81920 --max-position-embeddings 81920 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343237 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 81920 --max-position-embeddings 81920 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
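The migration this FutureWarning asks for is small. A minimal sketch, reading the local rank from the environment (as torchrun sets it) instead of a --local-rank CLI argument:

    import os

    # torchrun, and torch.distributed.launch with --use-env, export LOCAL_RANK,
    # so read it from the environment rather than parsing --local-rank.
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))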
W0621 22:05:02.807000 853291 site-packages/torch/distributed/run.py:766]
W0621 22:05:02.807000 853291 site-packages/torch/distributed/run.py:766] *****************************************
W0621 22:05:02.807000 853291 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 22:05:02.807000 853291 site-packages/torch/distributed/run.py:766] *****************************************
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
W0621 22:05:02.863000 1127779 site-packages/torch/distributed/run.py:766]
W0621 22:05:02.863000 1127779 site-packages/torch/distributed/run.py:766] *****************************************
W0621 22:05:02.863000 1127779 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 22:05:02.863000 1127779 site-packages/torch/distributed/run.py:766] *****************************************
[rank9]:[W621 22:05:26.047374057 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank1]:[W621 22:05:26.386654497 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank5]:[W621 22:05:26.387130751 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank2]:[W621 22:05:26.387149164 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank3]:[W621 22:05:26.387344347 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank7]:[W621 22:05:26.387535395 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank6]:[W621 22:05:26.388452688 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank4]:[W621 22:05:26.388552862 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank11]:[W621 22:05:26.060819898 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank10]:[W621 22:05:26.060864338 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank15]:[W621 22:05:26.061040541 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank12]:[W621 22:05:26.061040709 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank14]:[W621 22:05:26.061049399 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank13]:[W621 22:05:26.061080704 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank0]:[W621 22:05:26.543826674 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank8]:[W621 22:05:26.317480533 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
[message repeated by each of the 16 ranks]
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
[message repeated by each of the 16 ranks]
attnserver.run_attnserver.slurm.sh.343237.out.log
CHANGED

The diff for this file is too large to render.
See raw diff

attnserver.run_attnserver.slurm.sh.343238.err.log
CHANGED
@@ -6531,3 +6531,616 @@ W0621 21:59:31.558000 3515598 site-packages/torch/distributed/elastic/multiproce
W0621 21:59:31.561000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515675 closing signal SIGTERM
W0621 21:59:31.584000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515676 closing signal SIGTERM
W0621 21:59:31.588000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515677 closing signal SIGTERM
| 6534 |
+
E0621 21:59:38.566000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 3515670) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 6535 |
+
Traceback (most recent call last):
|
| 6536 |
+
File "<frozen runpy>", line 198, in _run_module_as_main
|
| 6537 |
+
File "<frozen runpy>", line 88, in _run_code
|
| 6538 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
|
| 6539 |
+
main()
|
| 6540 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
|
| 6541 |
+
return arg(*args, **kwargs)
|
| 6542 |
+
^^^^^^^^^^^^^^^^^^^^
|
| 6543 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
|
| 6544 |
+
launch(args)
|
| 6545 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
|
| 6546 |
+
run(args)
|
| 6547 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
|
| 6548 |
+
elastic_launch(
|
| 6549 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
|
| 6550 |
+
return launch_agent(self._config, self._entrypoint, list(args))
|
| 6551 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 6552 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
|
| 6553 |
+
raise ChildFailedError(
|
| 6554 |
+
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
|
| 6555 |
+
============================================================
|
| 6556 |
+
./pretrain_gpt_profile.py FAILED
|
| 6557 |
+
------------------------------------------------------------
|
| 6558 |
+
Failures:
|
| 6559 |
+
<NO_OTHER_FAILURES>
|
| 6560 |
+
------------------------------------------------------------
|
| 6561 |
+
Root Cause (first observed failure):
|
| 6562 |
+
[0]:
|
| 6563 |
+
time : 2025-06-21_21:59:31
|
| 6564 |
+
host : fs-mbz-gpu-518
|
| 6565 |
+
rank : 0 (local_rank: 0)
|
| 6566 |
+
exitcode : 1 (pid: 3515670)
|
| 6567 |
+
error_file: <N/A>
|
| 6568 |
+
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
|
| 6569 |
+
============================================================
|
| 6570 |
+
+W0621 21:59:38.797000 2747734 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2747805 closing signal SIGTERM
+W0621 21:59:38.800000 2747734 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2747806 closing signal SIGTERM
+W0621 21:59:38.803000 2747734 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2747807 closing signal SIGTERM
+W0621 21:59:38.805000 2747734 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2747808 closing signal SIGTERM
+W0621 21:59:38.808000 2747734 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2747809 closing signal SIGTERM
+W0621 21:59:38.812000 2747734 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2747810 closing signal SIGTERM
+W0621 21:59:38.815000 2747734 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2747811 closing signal SIGTERM
+W0621 21:59:38.817000 2747734 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2747812 closing signal SIGTERM
++ set +x
+[W621 21:59:39.196576404 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:59960, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14fb1c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14fb0585aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14fb0585c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14fb0585db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14fb05857ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14fb05857ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14fb05858f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #7: <unknown function> + 0xc0f526 (0x14fb14b8b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #8: <unknown function> + 0x37f17d (0x14fb142fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #17: <unknown function> + 0x94ac3 (0x14fb1d8abac3 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #18: <unknown function> + 0x126850 (0x14fb1d93d850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 21:59:39.729000 2747734 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-546_2747734_0' has failed to send a keep-alive heartbeat to the rendezvous '343238' due to an error of type RendezvousConnectionError.
+[W621 21:59:44.206199521 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:59960, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14fb1c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14fb0585aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14fb0585c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14fb0585db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14fb05857ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14fb05857ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14fb05858f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #7: <unknown function> + 0xc0f526 (0x14fb14b8b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #8: <unknown function> + 0x37f17d (0x14fb142fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #17: <unknown function> + 0x94ac3 (0x14fb1d8abac3 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #18: <unknown function> + 0x126850 (0x14fb1d93d850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 21:59:44.737000 2747734 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-546_2747734_0' has failed to send a keep-alive heartbeat to the rendezvous '343238' due to an error of type RendezvousConnectionError.
+[W621 21:59:47.893825051 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:59960, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14fb1c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14fb0585aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14fb0585c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14fb0585db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14fb05857ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14fb05857ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14fb05858f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #7: <unknown function> + 0xc0f526 (0x14fb14b8b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #8: <unknown function> + 0x37f17d (0x14fb142fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #26: <unknown function> + 0x29d90 (0x14fb1d840d90 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #27: __libc_start_main + 0x80 (0x14fb1d840e40 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 21:59:47.430000 2747734 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-546_2747734_0' has failed to shutdown the rendezvous '343238' due to an error of type RendezvousConnectionError.
+[W621 21:59:47.908207435 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:59960, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14fb1c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14fb0585aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14fb0585c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14fb0585db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14fb05857ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14fb05857ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14fb05858f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #7: <unknown function> + 0xc0f526 (0x14fb14b8b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #8: <unknown function> + 0x37f17d (0x14fb142fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #26: <unknown function> + 0x29d90 (0x14fb1d840d90 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #27: __libc_start_main + 0x80 (0x14fb1d840e40 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 21:59:47.441000 2747734 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-546_2747734_0' has failed to shutdown the rendezvous '343238' due to an error of type RendezvousConnectionError.
+Traceback (most recent call last):
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
+return getattr(self._store, store_op)(*args, **kwargs)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+torch.distributed.DistNetworkError: failed to recv, got 0 bytes
+
+The above exception was the direct cause of the following exception:
+
+Traceback (most recent call last):
+File "<frozen runpy>", line 198, in _run_module_as_main
+File "<frozen runpy>", line 88, in _run_code
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
+main()
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
+return arg(*args, **kwargs)
+^^^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
+launch(args)
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
+run(args)
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
+elastic_launch(
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
+return launch_agent(self._config, self._entrypoint, list(args))
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
+result = agent.run()
+^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
+result = f(*args, **kwargs)
+^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
+result = self._invoke_run(role)
+^^^^^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
+num_nodes_waiting = rdzv_handler.num_nodes_waiting()
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
+self._state_holder.sync()
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 437, in sync
+get_response = self._backend.get_state()
+^^^^^^^^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 75, in get_state
+base64_state: bytes = self._call_store("get", self._key)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
+raise RendezvousConnectionError(
+torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
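Everything in the hunk above, from the Broken pipe traces to the final RendezvousConnectionError, is secondary fallout: once the agent hosting the C10d store (the `--rdzv_endpoint` host, fs-mbz-gpu-518:29500) exits, the surviving node's heartbeats and shutdown calls can only fail. A quick liveness probe, a sketch assuming nothing beyond the host and port visible in the launch command below, shows whether the store port still accepts connections:

```python
# Hypothetical helper: checks TCP reachability of the rendezvous endpoint only,
# not store health; host/port taken from --rdzv_endpoint fs-mbz-gpu-518:29500.
import socket

def rdzv_endpoint_reachable(host: str = "fs-mbz-gpu-518", port: int = 29500,
                            timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rdzv_endpoint_reachable())
```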
++ set +x
++ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
++ export PROF_CTX_LENGTH=49152
++ PROF_CTX_LENGTH=49152
++ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L49152*tp2.cp8.bs2.json'
++ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L49152*tp2.cp8.bs2.json' ']'
++ echo 'Running ctx_length=49152, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=2'
++ srun bash ./attnserver.sh
++ which python3
++ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343238 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 49152 --max-position-embeddings 49152 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
++ which python3
++ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343238 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 49152 --max-position-embeddings 49152 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+and will be removed in future. Use torchrun.
+Note that --use-env is set by default in torchrun.
+If your script expects `--local-rank` argument to be set, please
+change it to read from `os.environ['LOCAL_RANK']` instead. See
+https://pytorch.org/docs/stable/distributed.html#launch-utility for
+further instructions
+
+main()
+W0621 21:59:50.569000 2750889 site-packages/torch/distributed/run.py:766]
+W0621 21:59:50.569000 2750889 site-packages/torch/distributed/run.py:766] *****************************************
+W0621 21:59:50.569000 2750889 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+W0621 21:59:50.569000 2750889 site-packages/torch/distributed/run.py:766] *****************************************
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+and will be removed in future. Use torchrun.
+Note that --use-env is set by default in torchrun.
+If your script expects `--local-rank` argument to be set, please
+change it to read from `os.environ['LOCAL_RANK']` instead. See
+https://pytorch.org/docs/stable/distributed.html#launch-utility for
+further instructions
+
+main()
+W0621 21:59:50.576000 3518819 site-packages/torch/distributed/run.py:766]
+W0621 21:59:50.576000 3518819 site-packages/torch/distributed/run.py:766] *****************************************
+W0621 21:59:50.576000 3518819 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+W0621 21:59:50.576000 3518819 site-packages/torch/distributed/run.py:766] *****************************************
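The FutureWarning above spells out the migration: the same multi-node invocation works under `torchrun` with the identical `--nnodes`, `--nproc_per_node`, and `--rdzv_*` flags, and the script side stops expecting `--local-rank` and reads the environment instead. A sketch of the script-side change:

```python
# Sketch of what the deprecation notice asks for: under torchrun (--use-env
# behavior), the launcher exports LOCAL_RANK rather than passing --local-rank
# on the command line, so the script reads it from os.environ.
import os

local_rank = int(os.environ["LOCAL_RANK"])
```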
+[rank1]:[W621 22:00:12.751080946 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank5]:[W621 22:00:12.751255399 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank3]:[W621 22:00:12.752504114 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank13]:[W621 22:00:12.401091126 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank9]:[W621 22:00:12.401128675 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank15]:[W621 22:00:12.402117847 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank7]:[W621 22:00:12.754737124 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank4]:[W621 22:00:12.755589107 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank6]:[W621 22:00:12.756058804 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank2]:[W621 22:00:12.758028576 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank10]:[W621 22:00:12.418635679 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank11]:[W621 22:00:12.418699043 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank12]:[W621 22:00:12.418779495 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank14]:[W621 22:00:12.420380263 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank8]:[W621 22:00:13.525486738 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+[rank0]:[W621 22:00:13.927684252 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
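All sixteen per-rank warnings above have the same fix, which the message itself suggests: bind each process to its GPU before (or while) initializing the process group, so NCCL never has to guess the rank-to-device mapping. A sketch, assuming the usual LOCAL_RANK convention set by the launcher:

```python
# Sketch: pin the device up front and pass device_id so ProcessGroupNCCL knows
# the mapping (silences the "device ... currently unknown" warning above).
import os
import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl",
                        device_id=torch.device(f"cuda:{local_rank}"))
```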
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
+[rank0]: Traceback (most recent call last):
+[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+[rank0]: pretrain(
+[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
+[rank0]: save_checkpoint(
+[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
+[rank0]: async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
+[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
+[rank0]: common_strategy.save_common(state_dict, checkpoint_dir)
+[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
+[rank0]: torch.save(common_state_dict, path)
+[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
+[rank0]: with _open_zipfile_writer(f) as opened_zipfile:
+[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^
+[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
+[rank0]: return container(name_or_buffer)
+[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
+[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
+[rank0]: torch._C.PyTorchFileWriter(
+[rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
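The root cause of this run's failure is mundane: `save_common` hands `torch.save` a path whose `iter_0000010` parent directory was never created, and `torch._C.PyTorchFileWriter` does not create directories. A minimal guard, as a sketch only (`save_with_parent` and its placement are hypothetical; Megatron-LM's own checkpointing code normally creates this directory, so the open question is why that step was skipped or raced here):

```python
# Sketch of the guard; `path` is the common state-dict path from save_common.
import os
import torch

def save_with_parent(common_state_dict, path):
    os.makedirs(os.path.dirname(path), exist_ok=True)  # create iter_XXXXXXX if missing
    torch.save(common_state_dict, path)
```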
+[rank0]:[W621 22:04:04.242273949 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
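The shutdown warning above is also actionable: tearing down the default process group explicitly before exit avoids the leak it describes. A sketch of the usual exit path:

```python
# Sketch: explicit teardown before interpreter exit, as the warning requests.
import torch.distributed as dist

if dist.is_initialized():
    dist.destroy_process_group()
```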
+W0621 22:04:14.016000 3518819 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3518893 closing signal SIGTERM
+W0621 22:04:14.019000 3518819 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3518894 closing signal SIGTERM
+W0621 22:04:14.023000 3518819 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3518895 closing signal SIGTERM
+W0621 22:04:14.026000 3518819 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3518896 closing signal SIGTERM
+W0621 22:04:14.055000 3518819 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3518897 closing signal SIGTERM
+W0621 22:04:14.071000 3518819 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3518898 closing signal SIGTERM
+W0621 22:04:14.077000 3518819 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3518899 closing signal SIGTERM
+E0621 22:04:26.395000 3518819 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 3518892) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+Traceback (most recent call last):
+File "<frozen runpy>", line 198, in _run_module_as_main
+File "<frozen runpy>", line 88, in _run_code
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
+main()
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
+return arg(*args, **kwargs)
+^^^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
+launch(args)
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
+run(args)
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
+elastic_launch(
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
+return launch_agent(self._config, self._entrypoint, list(args))
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
+raise ChildFailedError(
+torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
+============================================================
+./pretrain_gpt_profile.py FAILED
+------------------------------------------------------------
+Failures:
+<NO_OTHER_FAILURES>
+------------------------------------------------------------
+Root Cause (first observed failure):
+[0]:
+time : 2025-06-21_22:04:14
+host : fs-mbz-gpu-518
+rank : 0 (local_rank: 0)
+exitcode : 1 (pid: 3518892)
+error_file: <N/A>
+traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
+============================================================
+[W621 22:04:26.058717060 TCPStore.cpp:115] [c10d] recvVector failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:33082, remote=[fs-mbz-gpu-518]:29500): failed to recv, got 0 bytes
+Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14f78c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14f77545aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa0d0 (0x14f77545c0d0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5baa81d (0x14f77545c81d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: <unknown function> + 0x5bab4a9 (0x14f77545d4a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: c10d::TCPStore::compareSet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&) + 0x1fb (0x14f7754574cb in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #6: <unknown function> + 0xc0f919 (0x14f78478b919 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #7: <unknown function> + 0x37f17d (0x14f783efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #16: <unknown function> + 0x94ac3 (0x14f78d661ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #17: <unknown function> + 0x126850 (0x14f78d6f3850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 22:04:26.588000 2750889 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-546_2750889_0' has failed to send a keep-alive heartbeat to the rendezvous '343238' due to an error of type RendezvousConnectionError.
+[W621 22:04:26.155624461 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:33082, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14f78c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14f77545aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14f77545c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14f77545db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::compareSet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&) + 0x299 (0x14f775457569 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: <unknown function> + 0xc0f919 (0x14f78478b919 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #6: <unknown function> + 0x37f17d (0x14f783efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #24: <unknown function> + 0x29d90 (0x14f78d5f6d90 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #25: __libc_start_main + 0x80 (0x14f78d5f6e40 in /lib/x86_64-linux-gnu/libc.so.6)
+
++ set +x
+W0621 22:04:26.693000 2750889 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2750960 closing signal SIGTERM
+W0621 22:04:26.695000 2750889 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2750961 closing signal SIGTERM
+W0621 22:04:26.698000 2750889 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2750962 closing signal SIGTERM
+W0621 22:04:26.701000 2750889 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2750963 closing signal SIGTERM
+W0621 22:04:26.703000 2750889 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2750964 closing signal SIGTERM
+W0621 22:04:26.716000 2750889 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2750965 closing signal SIGTERM
+W0621 22:04:26.733000 2750889 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2750966 closing signal SIGTERM
+W0621 22:04:26.757000 2750889 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2750967 closing signal SIGTERM
+[W621 22:04:31.065120559 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:33082, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14f78c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14f77545aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14f77545c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14f77545db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::compareSet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&) + 0x299 (0x14f775457569 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: <unknown function> + 0xc0f919 (0x14f78478b919 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #6: <unknown function> + 0x37f17d (0x14f783efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #15: <unknown function> + 0x94ac3 (0x14f78d661ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #16: <unknown function> + 0x126850 (0x14f78d6f3850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 22:04:31.595000 2750889 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-546_2750889_0' has failed to send a keep-alive heartbeat to the rendezvous '343238' due to an error of type RendezvousConnectionError.
+[W621 22:04:36.071948889 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:33082, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14f78c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14f77545aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14f77545c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14f77545db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::compareSet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&) + 0x299 (0x14f775457569 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: <unknown function> + 0xc0f919 (0x14f78478b919 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #6: <unknown function> + 0x37f17d (0x14f783efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #15: <unknown function> + 0x94ac3 (0x14f78d661ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #16: <unknown function> + 0x126850 (0x14f78d6f3850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 22:04:36.603000 2750889 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-546_2750889_0' has failed to send a keep-alive heartbeat to the rendezvous '343238' due to an error of type RendezvousConnectionError.
+[W621 22:04:41.079968518 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:33082, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14f78c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14f77545aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14f77545c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14f77545db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::compareSet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&) + 0x299 (0x14f775457569 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: <unknown function> + 0xc0f919 (0x14f78478b919 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #6: <unknown function> + 0x37f17d (0x14f783efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #15: <unknown function> + 0x94ac3 (0x14f78d661ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #16: <unknown function> + 0x126850 (0x14f78d6f3850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 22:04:41.608000 2750889 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-546_2750889_0' has failed to send a keep-alive heartbeat to the rendezvous '343238' due to an error of type RendezvousConnectionError.
+[W621 22:04:42.644396618 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:33082, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14f78c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14f77545aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14f77545c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14f77545db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::compareSet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&) + 0x299 (0x14f775457569 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: <unknown function> + 0xc0f919 (0x14f78478b919 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #6: <unknown function> + 0x37f17d (0x14f783efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #24: <unknown function> + 0x29d90 (0x14f78d5f6d90 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #25: __libc_start_main + 0x80 (0x14f78d5f6e40 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 22:04:42.182000 2750889 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-546_2750889_0' has failed to shutdown the rendezvous '343238' due to an error of type RendezvousConnectionError.
+[W621 22:04:42.660351204 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-546]:33082, remote=[fs-mbz-gpu-518]:29500): Broken pipe
+Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
+frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14f78c5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+frame #1: <unknown function> + 0x5ba8afe (0x14f77545aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #2: <unknown function> + 0x5baa358 (0x14f77545c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #3: <unknown function> + 0x5babb3e (0x14f77545db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #4: c10d::TCPStore::compareSet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&, std::vector<unsigned char, std::allocator<unsigned char> > const&) + 0x299 (0x14f775457569 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
+frame #5: <unknown function> + 0xc0f919 (0x14f78478b919 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+frame #6: <unknown function> + 0x37f17d (0x14f783efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
+<omitting python frames>
+frame #24: <unknown function> + 0x29d90 (0x14f78d5f6d90 in /lib/x86_64-linux-gnu/libc.so.6)
+frame #25: __libc_start_main + 0x80 (0x14f78d5f6e40 in /lib/x86_64-linux-gnu/libc.so.6)
+
+W0621 22:04:42.193000 2750889 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-546_2750889_0' has failed to shutdown the rendezvous '343238' due to an error of type RendezvousConnectionError.
+Traceback (most recent call last):
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
+return getattr(self._store, store_op)(*args, **kwargs)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+torch.distributed.DistNetworkError: Broken pipe
+
+The above exception was the direct cause of the following exception:
+
+Traceback (most recent call last):
+File "<frozen runpy>", line 198, in _run_module_as_main
+File "<frozen runpy>", line 88, in _run_code
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
+main()
+File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
+return arg(*args, **kwargs)
|
| 6996 |
+
^^^^^^^^^^^^^^^^^^^^
|
| 6997 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
|
| 6998 |
+
launch(args)
|
| 6999 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
|
| 7000 |
+
run(args)
|
| 7001 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
|
| 7002 |
+
elastic_launch(
|
| 7003 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
|
| 7004 |
+
return launch_agent(self._config, self._entrypoint, list(args))
|
| 7005 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 7006 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
|
| 7007 |
+
result = agent.run()
|
| 7008 |
+
^^^^^^^^^^^
|
| 7009 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
|
| 7010 |
+
result = f(*args, **kwargs)
|
| 7011 |
+
^^^^^^^^^^^^^^^^^^
|
| 7012 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
|
| 7013 |
+
result = self._invoke_run(role)
|
| 7014 |
+
^^^^^^^^^^^^^^^^^^^^^^
|
| 7015 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
|
| 7016 |
+
num_nodes_waiting = rdzv_handler.num_nodes_waiting()
|
| 7017 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 7018 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
|
| 7019 |
+
self._state_holder.sync()
|
| 7020 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 423, in sync
|
| 7021 |
+
set_response = self._backend.set_state(state_bits, self._token)
|
| 7022 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 7023 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 100, in set_state
|
| 7024 |
+
base64_state: bytes = self._call_store(
|
| 7025 |
+
^^^^^^^^^^^^^^^^^
|
| 7026 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
|
| 7027 |
+
raise RendezvousConnectionError(
|
| 7028 |
+
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
|
| 7029 |
+
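For context on the dumps above: the elastic agent keeps rendezvous state in a c10d TCPStore hosted by the agent on the endpoint node, and the native compareSet frame is what backs the Python-level set_state call in the traceback. A minimal sketch of that store API, assuming the host and port from the launch commands below (the key and values are hypothetical):

    from datetime import timedelta

    import torch.distributed as dist

    # Client-side handle to the store that the agent on the endpoint node
    # hosts; the "Broken pipe" above means that server had already gone away.
    store = dist.TCPStore("fs-mbz-gpu-518", 29500, is_master=False,
                          timeout=timedelta(seconds=30))

    # compare_set is the compare-and-swap primitive behind set_state: the
    # desired value is written only if the key still holds the expected value.
    store.compare_set("rdzv/state", b"expected", b"desired")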
++ set +x
++ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
++ export PROF_CTX_LENGTH=65536
++ PROF_CTX_LENGTH=65536
++ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp8.bs2.json'
++ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp8.bs2.json' ']'
++ echo 'Running ctx_length=65536, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=2'
++ srun bash ./attnserver.sh
++ which python3
++ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343238 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 65536 --max-position-embeddings 65536 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
++ which python3
++ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343238 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 65536 --max-position-embeddings 65536 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+and will be removed in future. Use torchrun.
+Note that --use-env is set by default in torchrun.
+If your script expects `--local-rank` argument to be set, please
+change it to read from `os.environ['LOCAL_RANK']` instead. See
+https://pytorch.org/docs/stable/distributed.html#launch-utility for
+further instructions
+
+main()
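The FutureWarning above asks for a one-line migration in the training script. A minimal sketch of the change, assuming the script currently parses a --local-rank argument (the snippet is illustrative, not taken from pretrain_gpt_profile.py):

    import os

    import torch

    # torchrun (and launch.py with --use-env) exports LOCAL_RANK per worker.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)  # bind this worker to its GPU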
+W0621 22:04:45.279000 3522238 site-packages/torch/distributed/run.py:766]
+W0621 22:04:45.279000 3522238 site-packages/torch/distributed/run.py:766] *****************************************
+W0621 22:04:45.279000 3522238 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+W0621 22:04:45.279000 3522238 site-packages/torch/distributed/run.py:766] *****************************************
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+and will be removed in future. Use torchrun.
+Note that --use-env is set by default in torchrun.
+If your script expects `--local-rank` argument to be set, please
+change it to read from `os.environ['LOCAL_RANK']` instead. See
+https://pytorch.org/docs/stable/distributed.html#launch-utility for
+further instructions
+
+main()
+W0621 22:04:45.598000 2754239 site-packages/torch/distributed/run.py:766]
+W0621 22:04:45.598000 2754239 site-packages/torch/distributed/run.py:766] *****************************************
+W0621 22:04:45.598000 2754239 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+W0621 22:04:45.598000 2754239 site-packages/torch/distributed/run.py:766] *****************************************
+[rank1]:[W621 22:05:08.863273627 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank5]:[W621 22:05:08.864178359 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank11]:[W621 22:05:08.511902344 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank9]:[W621 22:05:08.511901889 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank3]:[W621 22:05:08.865951320 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank7]:[W621 22:05:08.867427894 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank13]:[W621 22:05:08.518094352 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank6]:[W621 22:05:08.874100135 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank2]:[W621 22:05:08.874376739 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank15]:[W621 22:05:08.522487048 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank4]:[W621 22:05:08.877914995 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank12]:[W621 22:05:08.528926215 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank14]:[W621 22:05:08.529104503 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank10]:[W621 22:05:08.529138894 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank8]:[W621 22:05:08.616025184 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+[rank0]:[W621 22:05:08.001813088 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
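All sixteen warnings above suggest the same fix: pass the rank's device to init_process_group instead of letting NCCL infer the mapping. A hedged sketch (the device_id argument exists in recent PyTorch releases; verify it against the installed version):

    import os

    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    # device_id removes the rank-to-GPU guesswork the warning describes and
    # lets the process group bind its NCCL communicator eagerly.
    dist.init_process_group(backend="nccl",
                            device_id=torch.device("cuda", local_rank))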
+/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+warnings.warn(
[the identical UserWarning/warnings.warn( pair is printed 16 times, once per rank; the duplicates are omitted here]
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+warnings.warn(
[the identical DeprecationWarning/warnings.warn( pair is printed 16 times, once per rank; the duplicates are omitted here]
attnserver.run_attnserver.slurm.sh.343238.out.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343239.err.log
CHANGED
@@ -1093,3 +1093,96 @@ Root Cause (first observed failure):
 traceback : Signal 6 (SIGABRT) received by PID 2138453
 ========================================================
 + set +x
+[rank14]:[F621 22:07:05.268307799 ProcessGroupNCCL.cpp:1554] [PG ID 0 PG GUID 0(default_pg) Rank 14] [PG ID 0 PG GUID 0(default_pg) Rank 14] Terminating the process after attempting to dump debug info, due to collective timeout or exception.
+[rank10]:[F621 22:07:05.268564615 ProcessGroupNCCL.cpp:1554] [PG ID 0 PG GUID 0(default_pg) Rank 10] [PG ID 0 PG GUID 0(default_pg) Rank 10] Terminating the process after attempting to dump debug info, due to collective timeout or exception.
+[rank12]:[F621 22:07:06.776354590 ProcessGroupNCCL.cpp:1554] [PG ID 0 PG GUID 0(default_pg) Rank 12] [PG ID 0 PG GUID 0(default_pg) Rank 12] Terminating the process after attempting to dump debug info, due to collective timeout or exception.
+[rank13]:[F621 22:07:06.776790180 ProcessGroupNCCL.cpp:1554] [PG ID 0 PG GUID 0(default_pg) Rank 13] [PG ID 0 PG GUID 0(default_pg) Rank 13] Terminating the process after attempting to dump debug info, due to collective timeout or exception.
+[rank9]:[F621 22:07:06.777117563 ProcessGroupNCCL.cpp:1554] [PG ID 0 PG GUID 0(default_pg) Rank 9] [PG ID 0 PG GUID 0(default_pg) Rank 9] Terminating the process after attempting to dump debug info, due to collective timeout or exception.
+[rank11]:[F621 22:07:06.777319662 ProcessGroupNCCL.cpp:1554] [PG ID 0 PG GUID 0(default_pg) Rank 11] [PG ID 0 PG GUID 0(default_pg) Rank 11] Terminating the process after attempting to dump debug info, due to collective timeout or exception.
+[rank15]:[F621 22:07:06.778624888 ProcessGroupNCCL.cpp:1554] [PG ID 0 PG GUID 0(default_pg) Rank 15] [PG ID 0 PG GUID 0(default_pg) Rank 15] Terminating the process after attempting to dump debug info, due to collective timeout or exception.
+W0621 22:07:06.349000 792627 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 792698 closing signal SIGTERM
+W0621 22:07:06.352000 792627 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 792699 closing signal SIGTERM
+W0621 22:07:06.353000 792627 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 792701 closing signal SIGTERM
+W0621 22:07:06.357000 792627 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 792702 closing signal SIGTERM
+W0621 22:07:06.359000 792627 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 792703 closing signal SIGTERM
+W0621 22:07:06.360000 792627 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 792705 closing signal SIGTERM
+E0621 22:07:07.268000 792627 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: -6) local_rank: 2 (pid: 792700) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+Traceback (most recent call last):
+  File "<frozen runpy>", line 198, in _run_module_as_main
+  File "<frozen runpy>", line 88, in _run_code
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
+    main()
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
+    return arg(*args, **kwargs)
+    ^^^^^^^^^^^^^^^^^^^^
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
+    launch(args)
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
+    run(args)
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
+    elastic_launch(
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
+    return launch_agent(self._config, self._entrypoint, list(args))
+    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
+    raise ChildFailedError(
+torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
+=======================================================
+./pretrain_gpt_profile.py FAILED
+-------------------------------------------------------
+Failures:
+[1]:
+  time : 2025-06-21_22:07:06
+  host : fs-mbz-gpu-188
+  rank : 14 (local_rank: 6)
+  exitcode : -6 (pid: 792704)
+  error_file: <N/A>
+  traceback : Signal 6 (SIGABRT) received by PID 792704
+-------------------------------------------------------
+Root Cause (first observed failure):
+[0]:
+  time : 2025-06-21_22:07:06
+  host : fs-mbz-gpu-188
+  rank : 10 (local_rank: 2)
+  exitcode : -6 (pid: 792700)
+  error_file: <N/A>
+  traceback : Signal 6 (SIGABRT) received by PID 792700
+=======================================================
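The fatal messages above come from the ProcessGroupNCCL watchdog: a collective failed to complete within the process-group timeout, so each stuck rank dumped debug info and aborted with SIGABRT. If straggling ranks rather than a genuine hang are suspected, the timeout can be widened at initialization; a sketch under that assumption:

    from datetime import timedelta

    import torch.distributed as dist

    # NCCL process groups default to a 10-minute collective timeout; raising
    # it trades slower failure detection for more tolerance of slow ranks.
    dist.init_process_group(backend="nccl", timeout=timedelta(minutes=30))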
++ set +x
++ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
++ export PROF_CTX_LENGTH=12288
++ PROF_CTX_LENGTH=12288
++ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L12288*tp2.cp8.bs4.json'
++ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L12288*tp2.cp8.bs4.json' ']'
++ echo 'Running ctx_length=12288, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=4'
++ srun bash ./attnserver.sh
++ which python3
++ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343239 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-188:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 12288 --max-position-embeddings 12288 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
++ which python3
++ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343239 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-188:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 12288 --max-position-embeddings 12288 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+and will be removed in future. Use torchrun.
+Note that --use-env is set by default in torchrun.
+If your script expects `--local-rank` argument to be set, please
+change it to read from `os.environ['LOCAL_RANK']` instead. See
+https://pytorch.org/docs/stable/distributed.html#launch-utility for
+further instructions
+
+main()
+W0621 22:07:10.219000 796503 site-packages/torch/distributed/run.py:766]
+W0621 22:07:10.219000 796503 site-packages/torch/distributed/run.py:766] *****************************************
+W0621 22:07:10.219000 796503 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+W0621 22:07:10.219000 796503 site-packages/torch/distributed/run.py:766] *****************************************
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+and will be removed in future. Use torchrun.
+Note that --use-env is set by default in torchrun.
+If your script expects `--local-rank` argument to be set, please
+change it to read from `os.environ['LOCAL_RANK']` instead. See
+https://pytorch.org/docs/stable/distributed.html#launch-utility for
+further instructions
+
+main()
+W0621 22:07:10.319000 2142274 site-packages/torch/distributed/run.py:766]
+W0621 22:07:10.319000 2142274 site-packages/torch/distributed/run.py:766] *****************************************
+W0621 22:07:10.319000 2142274 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+W0621 22:07:10.319000 2142274 site-packages/torch/distributed/run.py:766] *****************************************
attnserver.run_attnserver.slurm.sh.343239.out.log
CHANGED
@@ -9973,3 +9973,22 @@ Params for bucket 1 (313079808 elements, 313079808 padded size):
 module.decoder.layers.0.self_attention.linear_proj.bias
 INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x151e86832a80>, config_logger_dir='')
 INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
+Running ctx_length=12288, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=4
+Cleaning up checkpoint directory: gpt-checkpoint
+--------------------------------
+CTX_LENGTH: 12288
+TP_SIZE: 2
+CP_SIZE: 8
+CHECKPOINT_PATH: gpt-checkpoint
+PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+--------------------------------
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+Cleaning up checkpoint directory: gpt-checkpoint
+--------------------------------
+CTX_LENGTH: 12288
+TP_SIZE: 2
+CP_SIZE: 8
+CHECKPOINT_PATH: gpt-checkpoint
+PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+--------------------------------
+/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
attnserver.run_attnserver.slurm.sh.343240.err.log
CHANGED
The diff for this file is too large to render. See raw diff

attnserver.run_attnserver.slurm.sh.343240.out.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343243.err.log
CHANGED
@@ -2257,3 +2257,202 @@ W0621 21:58:29.557000 1699565 site-packages/torch/distributed/run.py:766] ******
 warnings.warn(
 /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
 warnings.warn(
+[rank1]:[W621 22:01:03.195093101 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+[rank3]:[W621 22:01:04.317047889 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+[rank5]:[W621 22:01:04.319372678 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+[rank0]:[W621 22:01:04.825261599 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+[rank4]:[W621 22:01:04.855555890 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+[rank6]:[W621 22:01:04.875952870 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+[rank2]:[W621 22:01:04.896135105 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+[rank7]:[W621 22:01:04.933639941 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
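The eight warnings above are harmless for a job that is exiting anyway, but the cleanup they request is a single call at the end of the entrypoint. A minimal sketch (illustrative, not the Megatron code):

    import torch.distributed as dist

    def main() -> None:
        dist.init_process_group(backend="nccl")
        try:
            ...  # training loop
        finally:
            # Explicit teardown before interpreter exit silences the
            # destroy_process_group() leak warning.
            dist.destroy_process_group()

    if __name__ == "__main__":
        main()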
+ set +x
|
| 2269 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 2270 |
+
+ export PROF_CTX_LENGTH=65536
|
| 2271 |
+
+ PROF_CTX_LENGTH=65536
|
| 2272 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp4.bs1.json'
|
| 2273 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp4.bs1.json' ']'
|
| 2274 |
+
+ echo 'Running ctx_length=65536, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=1'
|
| 2275 |
+
+ srun bash ./attnserver.sh
|
| 2276 |
+
+ which python3
|
| 2277 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343243 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-296:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 65536 --max-position-embeddings 65536 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 2278 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 2279 |
+
and will be removed in future. Use torchrun.
|
| 2280 |
+
Note that --use-env is set by default in torchrun.
|
| 2281 |
+
If your script expects `--local-rank` argument to be set, please
|
| 2282 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 2283 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 2284 |
+
further instructions
|
| 2285 |
+
|
| 2286 |
+
main()
|
| 2287 |
+
W0621 22:01:27.928000 1703273 site-packages/torch/distributed/run.py:766]
|
| 2288 |
+
W0621 22:01:27.928000 1703273 site-packages/torch/distributed/run.py:766] *****************************************
|
| 2289 |
+
W0621 22:01:27.928000 1703273 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 2290 |
+
W0621 22:01:27.928000 1703273 site-packages/torch/distributed/run.py:766] *****************************************
|
| 2291 |
+
[rank1]:[W621 22:01:49.427826166 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 2292 |
+
[rank7]:[W621 22:01:49.427824730 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 2293 |
+
[rank5]:[W621 22:01:49.427862574 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 2294 |
+
[rank3]:[W621 22:01:49.428141012 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 2295 |
+
[rank2]:[W621 22:01:49.433752632 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 2296 |
+
[rank4]:[W621 22:01:49.433981739 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 2297 |
+
[rank6]:[W621 22:01:49.436614926 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 2298 |
+
[rank0]:[W621 22:01:49.574653227 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 2299 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 2300 |
+
warnings.warn(
|
| 2301 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 2302 |
+
warnings.warn(
|
| 2303 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 2304 |
+
warnings.warn(
|
| 2305 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 2306 |
+
warnings.warn(
|
| 2307 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 2308 |
+
warnings.warn(
|
| 2309 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 2310 |
+
warnings.warn(
|
| 2311 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 2312 |
+
warnings.warn(
|
| 2313 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 2314 |
+
warnings.warn(
|
| 2315 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 2316 |
+
warnings.warn(
|
| 2317 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 2318 |
+
warnings.warn(
|
| 2319 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 2320 |
+
warnings.warn(
|
| 2321 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 2322 |
+
warnings.warn(
|
| 2323 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 2324 |
+
warnings.warn(
|
| 2325 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 2326 |
+
warnings.warn(
|
| 2327 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 2328 |
+
warnings.warn(
|
| 2329 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 2330 |
+
warnings.warn(
|
| 2331 | + [rank0]: Traceback (most recent call last):
| 2332 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 2333 | + [rank0]:     pretrain(
| 2334 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 2335 | + [rank0]:     save_checkpoint(
| 2336 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 2337 | + [rank0]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 2338 | + [rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 2339 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
| 2340 | + [rank0]:     common_strategy.save_common(state_dict, checkpoint_dir)
| 2341 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
| 2342 | + [rank0]:     torch.save(common_state_dict, path)
| 2343 | + [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
| 2344 | + [rank0]:     with _open_zipfile_writer(f) as opened_zipfile:
| 2345 | + [rank0]:          ^^^^^^^^^^^^^^^^^^^^^^^
| 2346 | + [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
| 2347 | + [rank0]:     return container(name_or_buffer)
| 2348 | + [rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^
| 2349 | + [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
| 2350 | + [rank0]:     torch._C.PyTorchFileWriter(
| 2351 | + [rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
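Rank 0 aborts while writing the checkpoint's common state because torch.save() does not create missing parent directories: gpt-checkpoint/iter_0000010 must exist before PyTorchFileWriter opens the file. A minimal sketch of a guard, assuming a hypothetical wrapper around the common-state write rather than Megatron's actual code:

    import os
    import torch

    def save_common_state(common_state_dict, path):
        # torch._C.PyTorchFileWriter raises "Parent directory ... does not
        # exist" when the directory is missing, so create it up front.
        os.makedirs(os.path.dirname(path), exist_ok=True)
        torch.save(common_state_dict, path)

Since every job in this sweep shares --save gpt-checkpoint, the directory may equally well have existed and then been pruned by a concurrent run between checkpoint setup and this write.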
| 2352 | + [rank0]:[W621 22:05:13.224670980 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
| 2353 | + W0621 22:05:22.565000 1703273 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1703344 closing signal SIGTERM
| 2354 | + W0621 22:05:22.567000 1703273 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1703345 closing signal SIGTERM
| 2355 | + W0621 22:05:22.571000 1703273 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1703346 closing signal SIGTERM
| 2356 | + W0621 22:05:22.574000 1703273 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1703347 closing signal SIGTERM
| 2357 | + W0621 22:05:22.577000 1703273 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1703348 closing signal SIGTERM
| 2358 | + W0621 22:05:22.607000 1703273 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1703349 closing signal SIGTERM
| 2359 | + W0621 22:05:22.630000 1703273 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1703350 closing signal SIGTERM
| 2360 | + E0621 22:05:25.694000 1703273 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 1703343) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
| 2361 | + Traceback (most recent call last):
| 2362 | +   File "<frozen runpy>", line 198, in _run_module_as_main
| 2363 | +   File "<frozen runpy>", line 88, in _run_code
| 2364 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
| 2365 | +     main()
| 2366 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
| 2367 | +     return arg(*args, **kwargs)
| 2368 | +            ^^^^^^^^^^^^^^^^^^^^
| 2369 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
| 2370 | +     launch(args)
| 2371 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
| 2372 | +     run(args)
| 2373 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
| 2374 | +     elastic_launch(
| 2375 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
| 2376 | +     return launch_agent(self._config, self._entrypoint, list(args))
| 2377 | +            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 2378 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
| 2379 | +     raise ChildFailedError(
| 2380 | + torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
| 2381 | + ============================================================
| 2382 | + ./pretrain_gpt_profile.py FAILED
| 2383 | + ------------------------------------------------------------
| 2384 | + Failures:
| 2385 | +   <NO_OTHER_FAILURES>
| 2386 | + ------------------------------------------------------------
| 2387 | + Root Cause (first observed failure):
| 2388 | + [0]:
| 2389 | +   time      : 2025-06-21_22:05:22
| 2390 | +   host      : fs-mbz-gpu-296
| 2391 | +   rank      : 0 (local_rank: 0)
| 2392 | +   exitcode  : 1 (pid: 1703343)
| 2393 | +   error_file: <N/A>
| 2394 | +   traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
| 2395 | + ============================================================
| 2396 | + + set +x
| 2397 | + + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
| 2398 | + + export PROF_CTX_LENGTH=81920
| 2399 | + + PROF_CTX_LENGTH=81920
| 2400 | + + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L81920*tp2.cp4.bs1.json'
| 2401 | + + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L81920*tp2.cp4.bs1.json' ']'
| 2402 | + + echo 'Running ctx_length=81920, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=1'
| 2403 | + + srun bash ./attnserver.sh
| 2404 | + + which python3
| 2405 | + + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343243 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-296:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 81920 --max-position-embeddings 81920 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
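One detail in the xtrace above: the existence test quotes the glob ('mytrace.L81920*tp2.cp4.bs1.json'), so '[' -f ... ']' checks for a file whose name literally contains '*', and the skip branch appears never to fire; each configuration reruns even when its trace was already collected. A glob-aware check, sketched in Python for illustration (the pattern is copied from the trace):

    import glob

    pattern = ("/mnt/sharefs/users/hao.zhang/junda/"
               "megatron-prof-data--unstable-v5/mytrace.L81920*tp2.cp4.bs1.json")
    # Unlike the quoted [ -f ... ] test, glob.glob() expands the wildcard,
    # so an already-written trace is actually detected.
    if glob.glob(pattern):
        print("trace exists, skipping this configuration")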
| 2406 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
| 2407 | + and will be removed in future. Use torchrun.
| 2408 | + Note that --use-env is set by default in torchrun.
| 2409 | + If your script expects `--local-rank` argument to be set, please
| 2410 | + change it to read from `os.environ['LOCAL_RANK']` instead. See
| 2411 | + https://pytorch.org/docs/stable/distributed.html#launch-utility for
| 2412 | + further instructions
| 2413 | +
| 2414 | +   main()
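The launcher flags its own deprecation here: torch.distributed.launch passes --local-rank as an argument, while its replacement, torchrun, exports the rank through the environment. The change the warning asks for is small; a sketch, assuming the training script currently parses --local-rank (variable names are illustrative):

    import os

    # torchrun sets LOCAL_RANK, RANK and WORLD_SIZE for every worker,
    # so read the environment instead of parsing a --local-rank flag.
    local_rank = int(os.environ["LOCAL_RANK"])

The launch line itself would then start with, roughly, torchrun --nproc_per_node 8 --nnodes 1 ... in place of python3 -m torch.distributed.launch.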
| 2415 | + W0621 22:05:36.831000 1706637 site-packages/torch/distributed/run.py:766]
| 2416 | + W0621 22:05:36.831000 1706637 site-packages/torch/distributed/run.py:766] *****************************************
| 2417 | + W0621 22:05:36.831000 1706637 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
| 2418 | + W0621 22:05:36.831000 1706637 site-packages/torch/distributed/run.py:766] *****************************************
| 2419 | + [rank7]:[W621 22:06:00.848046576 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 2420 | + [rank3]:[W621 22:06:00.848057369 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 2421 | + [rank5]:[W621 22:06:00.848088736 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 2422 | + [rank1]:[W621 22:06:00.848088743 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 2423 | + [rank4]:[W621 22:06:00.858281884 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 2424 | + [rank6]:[W621 22:06:00.864451804 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 2425 | + [rank2]:[W621 22:06:00.871896915 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 2426 | + [rank0]:[W621 22:06:00.145457891 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
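All eight ranks print this because init_process_group() runs before each process has bound itself to a GPU. As the warning text suggests, passing device_id pins the rank-to-device mapping up front; a minimal sketch, assuming LOCAL_RANK is provided by the launcher and a PyTorch recent enough (2.3+) to accept device_id:

    import os
    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    # Binding the process group to a concrete device removes the
    # "device used by this process is currently unknown" warning and
    # the hang risk it describes.
    dist.init_process_group(backend="nccl",
                            device_id=torch.device(f"cuda:{local_rank}"))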
| 2427 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 2428 | +   warnings.warn(
| 2429 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 2430 | +   warnings.warn(
| 2431 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 2432 | +   warnings.warn(
| 2433 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 2434 | +   warnings.warn(
| 2435 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 2436 | +   warnings.warn(
| 2437 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 2438 | +   warnings.warn(
| 2439 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 2440 | +   warnings.warn(
| 2441 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 2442 | +   warnings.warn(
| 2443 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 2444 | +   warnings.warn(
| 2445 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 2446 | +   warnings.warn(
| 2447 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 2448 | +   warnings.warn(
| 2449 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 2450 | +   warnings.warn(
| 2451 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 2452 | +   warnings.warn(
| 2453 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 2454 | +   warnings.warn(
| 2455 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 2456 | +   warnings.warn(
| 2457 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 2458 | +   warnings.warn(
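The fp8 and offload_weights deprecation messages repeat once per rank on every launch and add nothing after the first occurrence. If they are understood, Python's warnings filter can mute exactly these two without hiding anything new; a sketch (the message prefixes are copied from the log):

    import warnings

    warnings.filterwarnings("ignore", category=UserWarning,
                            message=r"The fp8 argument in .*")
    warnings.filterwarnings("ignore", category=DeprecationWarning,
                            message=r"Offloading weights is deprecated.*")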
attnserver.run_attnserver.slurm.sh.343243.out.log
CHANGED
The diff for this file is too large to render. See raw diff.
attnserver.run_attnserver.slurm.sh.343244.err.log
CHANGED
@@ -3506,3 +3506,433 @@ W0621 21:58:29.857000 455826 site-packages/torch/distributed/run.py:766] *******
| 3506 |   warnings.warn(
| 3507 |   /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3508 |   warnings.warn(
| 3509 | + [rank0]: Traceback (most recent call last):
| 3510 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3511 | + [rank0]:     pretrain(
| 3512 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3513 | + [rank0]:     save_checkpoint(
| 3514 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3515 | + [rank0]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3516 | + [rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3517 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
| 3518 | + [rank0]:     common_strategy.save_common(state_dict, checkpoint_dir)
| 3519 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
| 3520 | + [rank0]:     torch.save(common_state_dict, path)
| 3521 | + [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
| 3522 | + [rank0]:     with _open_zipfile_writer(f) as opened_zipfile:
| 3523 | + [rank0]:          ^^^^^^^^^^^^^^^^^^^^^^^
| 3524 | + [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
| 3525 | + [rank0]:     return container(name_or_buffer)
| 3526 | + [rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^
| 3527 | + [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
| 3528 | + [rank0]:     torch._C.PyTorchFileWriter(
| 3529 | + [rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
| 3530 | + [rank0]:[W621 22:02:15.081133392 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
| 3531 | + W0621 22:02:21.964000 455826 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 455898 closing signal SIGTERM
| 3532 | + W0621 22:02:21.972000 455826 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 455899 closing signal SIGTERM
| 3533 | + W0621 22:02:21.975000 455826 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 455900 closing signal SIGTERM
| 3534 | + W0621 22:02:21.978000 455826 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 455901 closing signal SIGTERM
| 3535 | + W0621 22:02:21.981000 455826 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 455902 closing signal SIGTERM
| 3536 | + W0621 22:02:21.999000 455826 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 455903 closing signal SIGTERM
| 3537 | + W0621 22:02:22.004000 455826 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 455904 closing signal SIGTERM
| 3538 | + E0621 22:02:26.735000 455826 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 455897) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
| 3539 | + Traceback (most recent call last):
| 3540 | +   File "<frozen runpy>", line 198, in _run_module_as_main
| 3541 | +   File "<frozen runpy>", line 88, in _run_code
| 3542 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
| 3543 | +     main()
| 3544 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
| 3545 | +     return arg(*args, **kwargs)
| 3546 | +            ^^^^^^^^^^^^^^^^^^^^
| 3547 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
| 3548 | +     launch(args)
| 3549 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
| 3550 | +     run(args)
| 3551 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
| 3552 | +     elastic_launch(
| 3553 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
| 3554 | +     return launch_agent(self._config, self._entrypoint, list(args))
| 3555 | +            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3556 | +   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
| 3557 | +     raise ChildFailedError(
| 3558 | + torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
| 3559 | + ============================================================
| 3560 | + ./pretrain_gpt_profile.py FAILED
| 3561 | + ------------------------------------------------------------
| 3562 | + Failures:
| 3563 | +   <NO_OTHER_FAILURES>
| 3564 | + ------------------------------------------------------------
| 3565 | + Root Cause (first observed failure):
| 3566 | + [0]:
| 3567 | +   time      : 2025-06-21_22:02:21
| 3568 | +   host      : fs-mbz-gpu-898
| 3569 | +   rank      : 0 (local_rank: 0)
| 3570 | +   exitcode  : 1 (pid: 455897)
| 3571 | +   error_file: <N/A>
| 3572 | +   traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
| 3573 | + ============================================================
| 3574 | + + set +x
| 3575 | + + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
| 3576 | + + export PROF_CTX_LENGTH=49152
| 3577 | + + PROF_CTX_LENGTH=49152
| 3578 | + + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L49152*tp2.cp4.bs2.json'
| 3579 | + + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L49152*tp2.cp4.bs2.json' ']'
| 3580 | + + echo 'Running ctx_length=49152, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=2'
| 3581 | + + srun bash ./attnserver.sh
| 3582 | + + which python3
| 3583 | + + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343244 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-898:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 49152 --max-position-embeddings 49152 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
| 3584 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
| 3585 | + and will be removed in future. Use torchrun.
| 3586 | + Note that --use-env is set by default in torchrun.
| 3587 | + If your script expects `--local-rank` argument to be set, please
| 3588 | + change it to read from `os.environ['LOCAL_RANK']` instead. See
| 3589 | + https://pytorch.org/docs/stable/distributed.html#launch-utility for
| 3590 | + further instructions
| 3591 | +
| 3592 | +   main()
| 3593 | + W0621 22:02:32.110000 459115 site-packages/torch/distributed/run.py:766]
| 3594 | + W0621 22:02:32.110000 459115 site-packages/torch/distributed/run.py:766] *****************************************
| 3595 | + W0621 22:02:32.110000 459115 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
| 3596 | + W0621 22:02:32.110000 459115 site-packages/torch/distributed/run.py:766] *****************************************
| 3597 | + [rank1]:[W621 22:02:54.594917000 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 3598 | + [rank3]:[W621 22:02:54.594917126 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 3599 | + [rank5]:[W621 22:02:54.595014737 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 3600 | + [rank7]:[W621 22:02:54.601411024 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 3601 | + [rank2]:[W621 22:02:54.611145419 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 3602 | + [rank6]:[W621 22:02:54.611238887 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 3603 | + [rank4]:[W621 22:02:54.614530675 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 3604 | + [rank0]:[W621 22:02:54.768257391 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
| 3605 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 3606 | +   warnings.warn(
| 3607 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 3608 | +   warnings.warn(
| 3609 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 3610 | +   warnings.warn(
| 3611 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 3612 | +   warnings.warn(
| 3613 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 3614 | +   warnings.warn(
| 3615 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 3616 | +   warnings.warn(
| 3617 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 3618 | +   warnings.warn(
| 3619 | + /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
| 3620 | +   warnings.warn(
| 3621 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3622 | +   warnings.warn(
| 3623 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3624 | +   warnings.warn(
| 3625 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3626 | +   warnings.warn(
| 3627 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3628 | +   warnings.warn(
| 3629 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3630 | +   warnings.warn(
| 3631 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3632 | +   warnings.warn(
| 3633 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3634 | +   warnings.warn(
| 3635 | + /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
| 3636 | +   warnings.warn(
| 3637 | + [rank6]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__6_0.distcp'
| 3638 | +
| 3639 | + [rank6]: The above exception was the direct cause of the following exception:
| 3640 | +
| 3641 | + [rank6]: Traceback (most recent call last):
| 3642 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3643 | + [rank6]:     pretrain(
| 3644 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3645 | + [rank6]:     save_checkpoint(
| 3646 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3647 | + [rank6]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3648 | + [rank6]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3649 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
| 3650 | + [rank6]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
| 3651 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
| 3652 | + [rank6]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
| 3653 | + [rank6]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3654 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
| 3655 | + [rank6]:     async_calls.maybe_finalize_async_calls(blocking=True)
| 3656 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
| 3657 | + [rank6]:     finalize_fn()
| 3658 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
| 3659 | + [rank6]:     save_state_dict_async_finalize(*save_state_dict_ret)
| 3660 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 229, in save_state_dict_async_finalize
| 3661 | + [rank6]:     write_results = storage_writer.retrieve_write_results()
| 3662 | + [rank6]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3663 | + [rank6]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 436, in retrieve_write_results
| 3664 | + [rank6]:     raise RuntimeError(f'Worker failure: {write_results_or_exc}') from write_results_or_exc
| 3665 | + [rank6]: RuntimeError: Worker failure: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__6_0.distcp'
| 3666 | + [rank7]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__7_0.distcp'
| 3667 | +
| 3668 | + [rank7]: The above exception was the direct cause of the following exception:
| 3669 | +
| 3670 | + [rank7]: Traceback (most recent call last):
| 3671 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3672 | + [rank7]:     pretrain(
| 3673 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3674 | + [rank7]:     save_checkpoint(
| 3675 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3676 | + [rank7]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3677 | + [rank7]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3678 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
| 3679 | + [rank7]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
| 3680 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
| 3681 | + [rank7]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
| 3682 | + [rank7]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3683 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
| 3684 | + [rank7]:     async_calls.maybe_finalize_async_calls(blocking=True)
| 3685 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
| 3686 | + [rank7]:     finalize_fn()
| 3687 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
| 3688 | + [rank7]:     save_state_dict_async_finalize(*save_state_dict_ret)
| 3689 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 229, in save_state_dict_async_finalize
| 3690 | + [rank7]:     write_results = storage_writer.retrieve_write_results()
| 3691 | + [rank7]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3692 | + [rank7]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 436, in retrieve_write_results
| 3693 | + [rank7]:     raise RuntimeError(f'Worker failure: {write_results_or_exc}') from write_results_or_exc
| 3694 | + [rank7]: RuntimeError: Worker failure: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__7_0.distcp'
| 3695 | + [rank1]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__1_0.distcp'
| 3696 | +
| 3697 | + [rank1]: The above exception was the direct cause of the following exception:
| 3698 | +
| 3699 | + [rank1]: Traceback (most recent call last):
| 3700 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3701 | + [rank1]:     pretrain(
| 3702 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3703 | + [rank1]:     save_checkpoint(
| 3704 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3705 | + [rank1]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3706 | + [rank1]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3707 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
| 3708 | + [rank1]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
| 3709 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
| 3710 | + [rank1]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
| 3711 | + [rank1]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3712 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
| 3713 | + [rank1]:     async_calls.maybe_finalize_async_calls(blocking=True)
| 3714 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
| 3715 | + [rank1]:     finalize_fn()
| 3716 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
| 3717 | + [rank1]:     save_state_dict_async_finalize(*save_state_dict_ret)
| 3718 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 229, in save_state_dict_async_finalize
| 3719 | + [rank1]:     write_results = storage_writer.retrieve_write_results()
| 3720 | + [rank1]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3721 | + [rank1]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 436, in retrieve_write_results
| 3722 | + [rank1]:     raise RuntimeError(f'Worker failure: {write_results_or_exc}') from write_results_or_exc
| 3723 | + [rank1]: RuntimeError: Worker failure: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__1_0.distcp'
| 3724 | + [rank5]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__5_0.distcp'
| 3725 | +
| 3726 | + [rank5]: The above exception was the direct cause of the following exception:
| 3727 | +
| 3728 | + [rank5]: Traceback (most recent call last):
| 3729 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3730 | + [rank5]:     pretrain(
| 3731 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3732 | + [rank5]:     save_checkpoint(
| 3733 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3734 | + [rank5]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3735 | + [rank5]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3736 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
| 3737 | + [rank5]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
| 3738 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
| 3739 | + [rank5]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
| 3740 | + [rank5]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3741 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
| 3742 | + [rank5]:     async_calls.maybe_finalize_async_calls(blocking=True)
| 3743 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
| 3744 | + [rank5]:     finalize_fn()
| 3745 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
| 3746 | + [rank5]:     save_state_dict_async_finalize(*save_state_dict_ret)
| 3747 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 229, in save_state_dict_async_finalize
| 3748 | + [rank5]:     write_results = storage_writer.retrieve_write_results()
| 3749 | + [rank5]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3750 | + [rank5]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 436, in retrieve_write_results
| 3751 | + [rank5]:     raise RuntimeError(f'Worker failure: {write_results_or_exc}') from write_results_or_exc
| 3752 | + [rank5]: RuntimeError: Worker failure: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__5_0.distcp'
| 3753 | + [rank3]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__3_0.distcp'
| 3754 | +
| 3755 | + [rank3]: The above exception was the direct cause of the following exception:
| 3756 | +
| 3757 | + [rank3]: Traceback (most recent call last):
| 3758 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3759 | + [rank3]:     pretrain(
| 3760 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3761 | + [rank3]:     save_checkpoint(
| 3762 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3763 | + [rank3]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3764 | + [rank3]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3765 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
| 3766 | + [rank3]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
| 3767 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
| 3768 | + [rank3]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
| 3769 | + [rank3]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3770 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
| 3771 | + [rank3]:     async_calls.maybe_finalize_async_calls(blocking=True)
| 3772 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
| 3773 | + [rank3]:     finalize_fn()
| 3774 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
| 3775 | + [rank3]:     save_state_dict_async_finalize(*save_state_dict_ret)
| 3776 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 229, in save_state_dict_async_finalize
| 3777 | + [rank3]:     write_results = storage_writer.retrieve_write_results()
| 3778 | + [rank3]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3779 | + [rank3]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 436, in retrieve_write_results
| 3780 | + [rank3]:     raise RuntimeError(f'Worker failure: {write_results_or_exc}') from write_results_or_exc
| 3781 | + [rank3]: RuntimeError: Worker failure: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__3_0.distcp'
| 3782 | + [rank2]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__2_0.distcp'
| 3783 | +
| 3784 | + [rank2]: The above exception was the direct cause of the following exception:
| 3785 | +
| 3786 | + [rank2]: Traceback (most recent call last):
| 3787 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3788 | + [rank2]:     pretrain(
| 3789 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3790 | + [rank2]:     save_checkpoint(
| 3791 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3792 | + [rank2]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3793 | + [rank2]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3794 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
| 3795 | + [rank2]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
| 3796 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
| 3797 | + [rank2]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
| 3798 | + [rank2]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3799 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
| 3800 | + [rank2]:     async_calls.maybe_finalize_async_calls(blocking=True)
| 3801 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
| 3802 | + [rank2]:     finalize_fn()
| 3803 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
| 3804 | + [rank2]:     save_state_dict_async_finalize(*save_state_dict_ret)
| 3805 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 229, in save_state_dict_async_finalize
| 3806 | + [rank2]:     write_results = storage_writer.retrieve_write_results()
| 3807 | + [rank2]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3808 | + [rank2]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 436, in retrieve_write_results
| 3809 | + [rank2]:     raise RuntimeError(f'Worker failure: {write_results_or_exc}') from write_results_or_exc
| 3810 | + [rank2]: RuntimeError: Worker failure: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__2_0.distcp'
| 3811 | + [rank4]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__4_0.distcp'
| 3812 | +
| 3813 | + [rank4]: The above exception was the direct cause of the following exception:
| 3814 | +
| 3815 | + [rank4]: Traceback (most recent call last):
| 3816 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3817 | + [rank4]:     pretrain(
| 3818 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3819 | + [rank4]:     save_checkpoint(
| 3820 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3821 | + [rank4]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3822 | + [rank4]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3823 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
| 3824 | + [rank4]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
| 3825 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
| 3826 | + [rank4]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
| 3827 | + [rank4]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3828 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
| 3829 | + [rank4]:     async_calls.maybe_finalize_async_calls(blocking=True)
| 3830 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
| 3831 | + [rank4]:     finalize_fn()
| 3832 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
| 3833 | + [rank4]:     save_state_dict_async_finalize(*save_state_dict_ret)
| 3834 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 229, in save_state_dict_async_finalize
| 3835 | + [rank4]:     write_results = storage_writer.retrieve_write_results()
| 3836 | + [rank4]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3837 | + [rank4]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 436, in retrieve_write_results
| 3838 | + [rank4]:     raise RuntimeError(f'Worker failure: {write_results_or_exc}') from write_results_or_exc
| 3839 | + [rank4]: RuntimeError: Worker failure: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__4_0.distcp'
| 3840 | + [rank0]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__0_0.distcp'
| 3841 | +
| 3842 | + [rank0]: The above exception was the direct cause of the following exception:
| 3843 | +
| 3844 | + [rank0]: Traceback (most recent call last):
| 3845 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
| 3846 | + [rank0]:     pretrain(
| 3847 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
| 3848 | + [rank0]:     save_checkpoint(
| 3849 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
| 3850 | + [rank0]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
| 3851 | + [rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3852 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
| 3853 | + [rank0]:     sharded_strategy.save(sharded_state_dict, checkpoint_dir)
| 3854 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
| 3855 | + [rank0]:     return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
| 3856 | + [rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3857 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
| 3858 | + [rank0]:     async_calls.maybe_finalize_async_calls(blocking=True)
| 3859 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
| 3860 | + [rank0]:     finalize_fn()
| 3861 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
| 3862 | + [rank0]:     save_state_dict_async_finalize(*save_state_dict_ret)
| 3863 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 229, in save_state_dict_async_finalize
| 3864 | + [rank0]:     write_results = storage_writer.retrieve_write_results()
| 3865 | + [rank0]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| 3866 | + [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 436, in retrieve_write_results
| 3867 | + [rank0]:     raise RuntimeError(f'Worker failure: {write_results_or_exc}') from write_results_or_exc
| 3868 | + [rank0]: RuntimeError: Worker failure: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/__0_0.distcp'
| 3869 |
+
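Both tracebacks above fail inside the fully-parallel (torch_dist) save path: each rank's async writer tries to open its own shard file (named like __<rank>_<writer>.distcp) under gpt-checkpoint/iter_0000010/, and that directory is missing by the time the workers run. A minimal defensive sketch, not Megatron's actual fix; the helper name and call site are assumptions:

import os
import torch.distributed as dist

def ensure_iter_dir(checkpoint_dir: str) -> None:
    # Create the per-iteration directory before any .distcp shard is opened;
    # exist_ok=True makes concurrent calls from several ranks safe.
    os.makedirs(checkpoint_dir, exist_ok=True)
    # Wait until the directory is visible to every rank on the shared filesystem.
    if dist.is_initialized():
        dist.barrier()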
[rank5]:[W621 22:06:47.885164319 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank1]:[W621 22:06:47.900529072 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank7]:[W621 22:06:48.379264793 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank3]:[W621 22:06:48.540156179 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
W0621 22:06:50.322000 459115 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 459186 closing signal SIGTERM
W0621 22:06:50.331000 459115 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 459187 closing signal SIGTERM
W0621 22:06:50.332000 459115 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 459188 closing signal SIGTERM
W0621 22:06:50.334000 459115 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 459189 closing signal SIGTERM
W0621 22:06:50.335000 459115 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 459190 closing signal SIGTERM
W0621 22:06:50.339000 459115 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 459192 closing signal SIGTERM
W0621 22:06:50.374000 459115 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 459193 closing signal SIGTERM
E0621 22:07:10.031000 459115 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 5 (pid: 459191) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
    main()
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
    return arg(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
    launch(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
    run(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./pretrain_gpt_profile.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-06-21_22:06:50
host : fs-mbz-gpu-898
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 459191)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
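The summary reports error_file: <N/A>, and the linked elastic errors page describes how to get a real traceback there: decorate the training entrypoint with @record from torch.distributed.elastic. A minimal sketch of that pattern (the main body is a placeholder, not the profiling script's actual code):

from torch.distributed.elastic.multiprocessing.errors import record

@record
def main() -> None:
    # pretrain(...) would run here; on failure the child's traceback is
    # written to an error file and surfaced in the ChildFailedError summary.
    ...

if __name__ == "__main__":
    main()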
+ set +x
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=65536
+ PROF_CTX_LENGTH=65536
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp4.bs2.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp4.bs2.json' ']'
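Note the skip check just above: the xtrace line shows test receiving the pattern with a literal '*', which suggests glob expansion found no existing trace file, so the run proceeds. A hedged Python equivalent of the same skip-if-exists check (the path is copied from the trace; this snippet is not part of the profiling scripts):

import glob

pattern = "/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp4.bs2.json"
if glob.glob(pattern):
    # At least one matching trace already exists; skip this configuration.
    print("trace already exists; skipping")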
+ echo 'Running ctx_length=65536, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=2'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343244 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-898:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 65536 --max-position-embeddings 65536 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
W0621 22:07:13.434000 462698 site-packages/torch/distributed/run.py:766]
W0621 22:07:13.434000 462698 site-packages/torch/distributed/run.py:766] *****************************************
W0621 22:07:13.434000 462698 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 22:07:13.434000 462698 site-packages/torch/distributed/run.py:766] *****************************************
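The FutureWarning above is the launcher asking for a migration off torch.distributed.launch. Under torchrun the local rank arrives via the environment rather than a --local-rank flag; a minimal sketch of the change the warning describes:

import os
import torch

# torchrun (and torch.distributed.launch, which sets --use-env by default)
# exports LOCAL_RANK for every worker process.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)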
attnserver.run_attnserver.slurm.sh.343244.out.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343245.err.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343245.out.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343246.err.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343246.out.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343247.err.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343247.out.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343248.err.log
ADDED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343248.out.log
ADDED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343261.err.log
ADDED
@@ -0,0 +1,202 @@
+ source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ '[' -z x ']'
++ conda activate
++ local cmd=activate
++ case "$cmd" in
++ __conda_activate activate
++ '[' -n '' ']'
++ local ask_conda
+++ PS1=
+++ __conda_exe shell.posix activate
+++ '[' -n '' ']'
+++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
++ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+++ unset _CE_M
+++ unset _CE_CONDA
+++ PS1='(base) '
+++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ export CONDA_SHLVL=1
+++ CONDA_SHLVL=1
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
+++ CONDA_PROMPT_MODIFIER='(base) '
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ __conda_hashr
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
+ conda activate junda-attnserver
+ local cmd=activate
+ case "$cmd" in
+ __conda_activate activate junda-attnserver
+ '[' -n '' ']'
+ local ask_conda
++ PS1='(base) '
++ __conda_exe shell.posix activate junda-attnserver
++ '[' -n '' ']'
++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ unset _CE_M
++ unset _CE_CONDA
++ PS1='(junda-attnserver) '
++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ export CONDA_SHLVL=2
++ CONDA_SHLVL=2
++ export CONDA_DEFAULT_ENV=junda-attnserver
++ CONDA_DEFAULT_ENV=junda-attnserver
++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ __conda_hashr
+ '[' -n '' ']'
+ '[' -n '' ']'
+ hash -r
+ export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ export PROF_TP_SIZE=1
+ PROF_TP_SIZE=1
+ export PROF_CP_SIZE=8
+ PROF_CP_SIZE=8
+ export PROF_BS=1
+ PROF_BS=1
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=1024
+ PROF_CTX_LENGTH=1024
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp1.cp8.bs1.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp1.cp8.bs1.json' ']'
+ echo 'Running ctx_length=1024, TP_SIZE=1, CP_SIZE=8, BATCH_SIZE=1'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343261 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-830:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 1 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
W0621 22:06:13.082000 2070539 site-packages/torch/distributed/run.py:766]
W0621 22:06:13.082000 2070539 site-packages/torch/distributed/run.py:766] *****************************************
W0621 22:06:13.082000 2070539 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 22:06:13.082000 2070539 site-packages/torch/distributed/run.py:766] *****************************************
[rank2]:[W621 22:06:35.957817612 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank5]:[W621 22:06:35.957825745 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank1]:[W621 22:06:35.957834527 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank3]:[W621 22:06:35.958544022 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank4]:[W621 22:06:35.960944235 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank6]:[W621 22:06:35.963661455 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank7]:[W621 22:06:35.963839061 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank0]:[W621 22:06:35.101598532 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
[rank0]: Traceback (most recent call last):
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank0]: pretrain(
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
[rank0]: save_checkpoint(
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
[rank0]: async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
[rank0]: common_strategy.save_common(state_dict, checkpoint_dir)
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
[rank0]: torch.save(common_state_dict, path)
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
[rank0]: with _open_zipfile_writer(f) as opened_zipfile:
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
[rank0]: return container(name_or_buffer)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
[rank0]: torch._C.PyTorchFileWriter(
[rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
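This run dies one step earlier than the 343244 one: rank 0's common-state torch.save fails because torch._C.PyTorchFileWriter does not create missing parent directories. A hedged guard sketch for that failure mode (the wrapper name is an assumption, not Megatron's API):

import os
import torch

def save_with_parents(obj, path: str) -> None:
    # torch.save will not mkdir; create e.g. gpt-checkpoint/iter_0000010 first.
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    torch.save(obj, path)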
attnserver.run_attnserver.slurm.sh.343261.out.log
ADDED
@@ -0,0 +1,1507 @@
Running ctx_length=1024, TP_SIZE=1, CP_SIZE=8, BATCH_SIZE=1
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 1024
TP_SIZE: 1
CP_SIZE: 8
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
INFO:megatron.training.initialize:Setting logging level to 0
using world size: 8, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: None, tensor-model-parallel size: 1, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
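The parallel-state line above has to satisfy world_size = data-parallel x context-parallel x tensor-model-parallel x pipeline-model-parallel; with 8 GPUs and CP_SIZE=8 every other factor is 1. A one-line sanity check using the values from this log:

world_size, dp, cp, tp, pp = 8, 1, 8, 1, 1
assert world_size == dp * cp * tp * pp  # 8 == 1 * 8 * 1 * 1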
Number of virtual stages per pipeline stage: None
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
using torch.float16 for parameters ...
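The warning above notes that per-step NaN checking is redundant under dynamic fp16 loss scaling: an overflow already causes the step to be skipped and the scale to shrink. A generic sketch of that scheme, not Megatron's exact scaler; the constants mirror the initial_loss_scale and loss_scale_window arguments printed below:

class DynamicLossScaler:
    def __init__(self, init_scale=4294967296.0, window=1000, min_scale=1.0):
        self.scale = init_scale
        self.window = window
        self.min_scale = min_scale
        self.good_steps = 0

    def update(self, found_overflow: bool) -> None:
        if found_overflow:
            # Skip the optimizer step for this iteration and back off the scale.
            self.scale = max(self.scale / 2.0, self.min_scale)
            self.good_steps = 0
        else:
            self.good_steps += 1
            if self.good_steps % self.window == 0:
                # After a full window of clean steps, grow the scale again.
                self.scale *= 2.0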
------------------------ arguments ------------------------
account_for_embedding_in_pipeline_split ......... False
account_for_loss_in_pipeline_split .............. False
accumulate_allreduce_grads_in_fp32 .............. False
adam_beta1 ...................................... 0.9
adam_beta2 ...................................... 0.999
adam_eps ........................................ 1e-08
add_bias_linear ................................. True
add_position_embedding .......................... True
add_qkv_bias .................................... True
adlr_autoresume ................................. False
adlr_autoresume_interval ........................ 1000
align_grad_reduce ............................... True
align_param_gather .............................. False
app_tag_run_name ................................ None
app_tag_run_version ............................. 0.0.0
apply_layernorm_1p .............................. False
apply_query_key_layer_scaling ................... False
apply_residual_connection_post_layernorm ........ False
apply_rope_fusion ............................... False
async_save ...................................... None
async_tensor_model_parallel_allreduce ........... True
attention_backend ............................... AttnBackend.auto
attention_dropout ............................... 0.1
attention_softmax_in_fp32 ....................... False
auto_detect_ckpt_format ......................... False
barrier_with_L1_time ............................ True
bert_binary_head ................................ True
bert_embedder_type .............................. megatron
bert_load ....................................... None
bf16 ............................................ False
bias_dropout_fusion ............................. True
bias_gelu_fusion ................................ True
bias_swiglu_fusion .............................. True
biencoder_projection_dim ........................ 0
biencoder_shared_query_context_model ............ False
block_data_path ................................. None
calc_ft_timeouts ................................ False
calculate_per_token_loss ........................ False
check_for_large_grads ........................... False
check_for_nan_in_loss_and_grad .................. False
check_for_spiky_loss ............................ False
check_weight_hash_across_dp_replicas_interval ... None
ckpt_assume_constant_structure .................. False
ckpt_convert_format ............................. None
ckpt_convert_save ............................... None
ckpt_convert_update_legacy_dist_opt_format ...... False
ckpt_format ..................................... torch_dist
ckpt_fully_parallel_load ........................ False
ckpt_fully_parallel_save ........................ True
ckpt_fully_parallel_save_deprecated ............. False
ckpt_step ....................................... None
classes_fraction ................................ 1.0
clip_grad ....................................... 1.0
clone_scatter_output_in_embedding ............... True
config_logger_dir ...............................
consumed_train_samples .......................... 0
consumed_valid_samples .......................... 0
context_parallel_size ........................... 8
cp_comm_type .................................... ['p2p']
create_attention_mask_in_dataloader ............. True
cross_entropy_fusion_impl ....................... native
cross_entropy_loss_fusion ....................... False
cuda_graph_scope ................................ full
cuda_graph_warmup_steps ......................... 3
data_args_path .................................. None
data_cache_path ................................. None
data_parallel_random_init ....................... False
data_parallel_sharding_strategy ................. no_shard
data_parallel_size .............................. 1
data_path ....................................... None
data_per_class_fraction ......................... 1.0
data_sharding ................................... True
dataloader_type ................................. single
ddp_average_in_collective ....................... False
ddp_bucket_size ................................. None
ddp_num_buckets ................................. None
ddp_pad_buckets_for_high_nccl_busbw ............. False
decoder_first_pipeline_num_layers ............... None
decoder_last_pipeline_num_layers ................ None
decoder_num_layers .............................. None
decoder_seq_length .............................. None
decoupled_lr .................................... None
decoupled_min_lr ................................ None
decrease_batch_size_if_needed ................... False
defer_embedding_wgrad_compute ................... False
deprecated_use_mcore_models ..................... False
deterministic_mode .............................. False
dino_bottleneck_size ............................ 256
dino_freeze_last_layer .......................... 1
dino_head_hidden_size ........................... 2048
dino_local_crops_number ......................... 10
dino_local_img_size ............................. 96
dino_norm_last_layer ............................ False
dino_teacher_temp ............................... 0.07
dino_warmup_teacher_temp ........................ 0.04
dino_warmup_teacher_temp_epochs ................. 30
disable_bf16_reduced_precision_matmul ........... False
disable_mamba_mem_eff_path ...................... False
disable_straggler_on_startup .................... False
dist_ckpt_format_deprecated ..................... None
dist_ckpt_strictness ............................ assume_ok_unexpected
distribute_saved_activations .................... False
distributed_backend ............................. nccl
distributed_timeout_minutes ..................... 10
embedding_path .................................. None
empty_unused_memory_level ....................... 0
enable_cuda_graph ............................... False
enable_ft_package ............................... False
enable_gloo_process_groups ...................... True
enable_msc ...................................... True
enable_one_logger ............................... True
encoder_num_layers .............................. 2
encoder_pipeline_model_parallel_size ............ 0
encoder_seq_length .............................. 1024
encoder_tensor_model_parallel_size .............. 0
end_weight_decay ................................ 0.1
eod_mask_loss ................................... False
error_injection_rate ............................ 0
error_injection_type ............................ transient_error
eval_interval ................................... 16
eval_iters ...................................... 1
evidence_data_path .............................. None
exit_duration_in_mins ........................... None
exit_interval ................................... None
exit_on_missing_checkpoint ...................... False
exit_signal_handler ............................. False
exp_avg_dtype ................................... torch.float32
exp_avg_sq_dtype ................................ torch.float32
expert_model_parallel_size ...................... 1
expert_tensor_parallel_size ..................... 1
external_cuda_graph ............................. False
ffn_hidden_size ................................. 16384
finetune ........................................ False
first_last_layers_bf16 .......................... False
flash_decode .................................... False
fp16 ............................................ True
fp16_lm_cross_entropy ........................... False
fp32_residual_connection ........................ False
fp8 ............................................. None
fp8_amax_compute_algo ........................... most_recent
fp8_amax_history_len ............................ 1
fp8_interval .................................... 1
fp8_margin ...................................... 0
fp8_param_gather ................................ False
fp8_recipe ...................................... delayed
fp8_wgrad ....................................... True
fsdp_double_buffer .............................. False
global_batch_size ............................... 1
grad_reduce_in_bf16 ............................. False
gradient_accumulation_fusion .................... True
gradient_reduce_div_fusion ...................... True
group_query_attention ........................... True
head_lr_mult .................................... 1.0
heterogeneous_layers_config_encoded_json ........ None
heterogeneous_layers_config_path ................ None
hidden_dropout .................................. 0.1
hidden_size ..................................... 4096
hierarchical_context_parallel_sizes ............. None
high_priority_stream_groups ..................... []
hybrid_attention_ratio .......................... 0.0
hybrid_mlp_ratio ................................ 0.0
hybrid_override_pattern ......................... None
hysteresis ...................................... 2
ict_head_size ................................... None
ict_load ........................................ None
img_h ........................................... 224
img_w ........................................... 224
indexer_batch_size .............................. 128
indexer_log_interval ............................ 1000
inference_batch_times_seqlen_threshold .......... -1
inference_dynamic_batching ...................... False
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
inference_dynamic_batching_buffer_overflow_factor None
inference_dynamic_batching_buffer_size_gb ....... 40.0
inference_dynamic_batching_chunk_size ........... 256
inference_dynamic_batching_max_requests_override None
inference_dynamic_batching_max_tokens_override .. None
inference_max_batch_size ........................ 8
inference_max_seq_length ........................ 2560
inference_rng_tracker ........................... False
init_method_std ................................. 0.02
init_method_xavier_uniform ...................... False
init_model_with_meta_device ..................... False
initial_loss_scale .............................. 4294967296
inprocess_active_world_size ..................... 8
inprocess_barrier_timeout ....................... 120
inprocess_completion_timeout .................... 120
inprocess_empty_cuda_cache ...................... False
inprocess_granularity ........................... node
inprocess_hard_timeout .......................... 90
inprocess_heartbeat_interval .................... 30
inprocess_heartbeat_timeout ..................... 60
inprocess_last_call_wait ........................ 1
inprocess_max_iterations ........................ None
inprocess_monitor_process_interval .............. 1.0
inprocess_monitor_thread_interval ............... 1.0
inprocess_progress_watchdog_interval ............ 1.0
inprocess_restart ............................... False
inprocess_soft_timeout .......................... 60
inprocess_termination_grace_time ................ 1
is_hybrid_model ................................. False
iter_per_epoch .................................. 1250
iterations_to_skip .............................. []
keep_fp8_transpose_cache_when_using_custom_fsdp . False
kv_channels ..................................... 64
kv_lora_rank .................................... 32
lazy_mpu_init ................................... None
load ............................................ gpt-checkpoint
load_model_opt_format ........................... False
local_rank ...................................... 0
log_interval .................................... 1
log_loss_scale_to_tensorboard ................... True
log_memory_to_tensorboard ....................... False
log_num_zeros_in_grad ........................... False
log_params_norm ................................. False
log_progress .................................... False
log_straggler ................................... False
log_throughput .................................. False
log_timers_to_tensorboard ....................... False
log_validation_ppl_to_tensorboard ............... False
log_world_size_to_tensorboard ................... False
logging_level ................................... 0
loss_scale ...................................... None
loss_scale_window ............................... 1000
lr .............................................. 0.0005
lr_decay_iters .................................. 150000
lr_decay_samples ................................ None
lr_decay_style .................................. cosine
lr_warmup_fraction .............................. None
lr_warmup_init .................................. 0.0
lr_warmup_iters ................................. 2
lr_warmup_samples ............................... 0
lr_wsd_decay_iters .............................. None
lr_wsd_decay_samples ............................ None
lr_wsd_decay_style .............................. exponential
main_grads_dtype ................................ torch.float32
main_params_dtype ............................... torch.float32
make_vocab_size_divisible_by .................... 128
mamba_head_dim .................................. 64
mamba_num_groups ................................ 8
mamba_num_heads ................................. None
mamba_state_dim ................................. 128
manual_gc ....................................... False
manual_gc_eval .................................. True
manual_gc_interval .............................. 0
mask_factor ..................................... 1.0
mask_prob ....................................... 0.15
mask_type ....................................... random
masked_softmax_fusion ........................... True
max_position_embeddings ......................... 1024
max_tokens_to_oom ............................... 12000
memory_snapshot_path ............................ snapshot.pickle
merge_file ...................................... merges.txt
micro_batch_size ................................ 1
microbatch_group_size_per_vp_stage .............. None
mid_level_dataset_surplus ....................... 0.005
min_loss_scale .................................. 1.0
min_lr .......................................... 0.0
mlp_chunks_for_prefill .......................... 1
mmap_bin_files .................................. True
mock_data ....................................... True
moe_apply_probs_on_input ........................ False
moe_aux_loss_coeff .............................. 0.0
moe_enable_deepep ............................... False
moe_expert_capacity_factor ...................... None
moe_extended_tp ................................. False
moe_ffn_hidden_size ............................. None
|
| 284 |
+
moe_grouped_gemm ................................ False
|
| 285 |
+
moe_input_jitter_eps ............................ None
|
| 286 |
+
moe_layer_freq .................................. 1
|
| 287 |
+
moe_layer_recompute ............................. False
|
| 288 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 289 |
+
moe_per_layer_logging ........................... False
|
| 290 |
+
moe_permute_fusion .............................. False
|
| 291 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 292 |
+
moe_router_dtype ................................ None
|
| 293 |
+
moe_router_enable_expert_bias ................... False
|
| 294 |
+
moe_router_force_load_balancing ................. False
|
| 295 |
+
moe_router_group_topk ........................... None
|
| 296 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 297 |
+
moe_router_num_groups ........................... None
|
| 298 |
+
moe_router_padding_for_fp8 ...................... False
|
| 299 |
+
moe_router_pre_softmax .......................... False
|
| 300 |
+
moe_router_score_function ....................... softmax
|
| 301 |
+
moe_router_topk ................................. 2
|
| 302 |
+
moe_router_topk_scaling_factor .................. None
|
| 303 |
+
moe_shared_expert_intermediate_size ............. None
|
| 304 |
+
moe_shared_expert_overlap ....................... False
|
| 305 |
+
moe_token_dispatcher_type ....................... allgather
|
| 306 |
+
moe_token_drop_policy ........................... probs
|
| 307 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 308 |
+
moe_use_upcycling ............................... False
|
| 309 |
+
moe_z_loss_coeff ................................ None
|
| 310 |
+
mrope_section ................................... None
|
| 311 |
+
mscale .......................................... 1.0
|
| 312 |
+
mscale_all_dim .................................. 1.0
|
| 313 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 314 |
+
mtp_num_layers .................................. None
|
| 315 |
+
multi_latent_attention .......................... False
|
| 316 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 317 |
+
nccl_communicator_config_path ................... None
|
| 318 |
+
nccl_ub ......................................... False
|
| 319 |
+
no_load_optim ................................... None
|
| 320 |
+
no_load_rng ..................................... None
|
| 321 |
+
no_persist_layer_norm ........................... False
|
| 322 |
+
no_rope_freq .................................... None
|
| 323 |
+
no_save_optim ................................... None
|
| 324 |
+
no_save_rng ..................................... None
|
| 325 |
+
non_persistent_ckpt_type ........................ None
|
| 326 |
+
non_persistent_global_ckpt_dir .................. None
|
| 327 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 328 |
+
non_persistent_local_ckpt_dir ................... None
|
| 329 |
+
non_persistent_save_interval .................... None
|
| 330 |
+
norm_epsilon .................................... 1e-05
|
| 331 |
+
normalization ................................... LayerNorm
|
| 332 |
+
num_attention_heads ............................. 64
|
| 333 |
+
num_channels .................................... 3
|
| 334 |
+
num_classes ..................................... 1000
|
| 335 |
+
num_dataset_builder_threads ..................... 1
|
| 336 |
+
num_distributed_optimizer_instances ............. 1
|
| 337 |
+
num_experts ..................................... None
|
| 338 |
+
num_layers ...................................... 2
|
| 339 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 340 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 341 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 342 |
+
num_query_groups ................................ 16
|
| 343 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 344 |
+
num_workers ..................................... 2
|
| 345 |
+
object_storage_cache_path ....................... None
|
| 346 |
+
one_logger_async ................................ False
|
| 347 |
+
one_logger_project .............................. megatron-lm
|
| 348 |
+
one_logger_run_name ............................. None
|
| 349 |
+
onnx_safe ....................................... None
|
| 350 |
+
openai_gelu ..................................... False
|
| 351 |
+
optimizer ....................................... adam
|
| 352 |
+
optimizer_cpu_offload ........................... False
|
| 353 |
+
optimizer_offload_fraction ...................... 1.0
|
| 354 |
+
output_bert_embeddings .......................... False
|
| 355 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 356 |
+
overlap_grad_reduce ............................. False
|
| 357 |
+
overlap_p2p_comm ................................ False
|
| 358 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 359 |
+
overlap_param_gather ............................ False
|
| 360 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 361 |
+
override_opt_param_scheduler .................... False
|
| 362 |
+
params_dtype .................................... torch.float16
|
| 363 |
+
patch_dim ....................................... 16
|
| 364 |
+
per_split_data_args_path ........................ None
|
| 365 |
+
perform_initialization .......................... True
|
| 366 |
+
pin_cpu_grads ................................... True
|
| 367 |
+
pin_cpu_params .................................. True
|
| 368 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 369 |
+
pipeline_model_parallel_size .................... 1
|
| 370 |
+
pipeline_model_parallel_split_rank .............. None
|
| 371 |
+
position_embedding_type ......................... learned_absolute
|
| 372 |
+
pretrained_checkpoint ........................... None
|
| 373 |
+
profile ......................................... False
|
| 374 |
+
profile_ranks ................................... [0]
|
| 375 |
+
profile_step_end ................................ 12
|
| 376 |
+
profile_step_start .............................. 10
|
| 377 |
+
q_lora_rank ..................................... None
|
| 378 |
+
qk_head_dim ..................................... 128
|
| 379 |
+
qk_l2_norm ...................................... False
|
| 380 |
+
qk_layernorm .................................... False
|
| 381 |
+
qk_pos_emb_head_dim ............................. 64
|
| 382 |
+
query_in_block_prob ............................. 0.1
|
| 383 |
+
rampup_batch_size ............................... None
|
| 384 |
+
rank ............................................ 0
|
| 385 |
+
recompute_granularity ........................... None
|
| 386 |
+
recompute_method ................................ None
|
| 387 |
+
recompute_modules ............................... None
|
| 388 |
+
recompute_num_layers ............................ None
|
| 389 |
+
record_memory_history ........................... False
|
| 390 |
+
relative_attention_max_distance ................. 128
|
| 391 |
+
relative_attention_num_buckets .................. 32
|
| 392 |
+
replication ..................................... False
|
| 393 |
+
replication_factor .............................. 2
|
| 394 |
+
replication_jump ................................ None
|
| 395 |
+
rerun_mode ...................................... disabled
|
| 396 |
+
reset_attention_mask ............................ False
|
| 397 |
+
reset_position_ids .............................. False
|
| 398 |
+
result_rejected_tracker_filename ................ None
|
| 399 |
+
retriever_report_topk_accuracies ................ []
|
| 400 |
+
retriever_score_scaling ......................... False
|
| 401 |
+
retriever_seq_length ............................ 256
|
| 402 |
+
retro_add_retriever ............................. False
|
| 403 |
+
retro_attention_gate ............................ 1
|
| 404 |
+
retro_cyclic_train_iters ........................ None
|
| 405 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 406 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 407 |
+
retro_encoder_layers ............................ 2
|
| 408 |
+
retro_num_neighbors ............................. 2
|
| 409 |
+
retro_num_retrieved_chunks ...................... 2
|
| 410 |
+
retro_project_dir ............................... None
|
| 411 |
+
retro_verify_neighbor_count ..................... True
|
| 412 |
+
rope_scaling_factor ............................. 8.0
|
| 413 |
+
rotary_base ..................................... 10000
|
| 414 |
+
rotary_interleaved .............................. False
|
| 415 |
+
rotary_percent .................................. 1.0
|
| 416 |
+
rotary_scaling_factor ........................... 1.0
|
| 417 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 418 |
+
run_workload_inspector_server ................... False
|
| 419 |
+
sample_rate ..................................... 1.0
|
| 420 |
+
save ............................................ gpt-checkpoint
|
| 421 |
+
save_interval ................................... 16
|
| 422 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 423 |
+
seed ............................................ 1234
|
| 424 |
+
seq_length ...................................... 1024
|
| 425 |
+
sequence_parallel ............................... False
|
| 426 |
+
sgd_momentum .................................... 0.9
|
| 427 |
+
short_seq_prob .................................. 0.1
|
| 428 |
+
skip_train ...................................... False
|
| 429 |
+
skipped_train_samples ........................... 0
|
| 430 |
+
spec ............................................ None
|
| 431 |
+
split ........................................... None
|
| 432 |
+
squared_relu .................................... False
|
| 433 |
+
start_weight_decay .............................. 0.1
|
| 434 |
+
straggler_ctrlr_port ............................ 65535
|
| 435 |
+
straggler_minmax_count .......................... 1
|
| 436 |
+
suggested_communication_unit_size ............... None
|
| 437 |
+
swiglu .......................................... False
|
| 438 |
+
swin_backbone_type .............................. tiny
|
| 439 |
+
symmetric_ar_type ............................... None
|
| 440 |
+
te_rng_tracker .................................. False
|
| 441 |
+
tensor_model_parallel_size ...................... 1
|
| 442 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 443 |
+
tensorboard_log_interval ........................ 1
|
| 444 |
+
tensorboard_queue_size .......................... 1000
|
| 445 |
+
test_data_path .................................. None
|
| 446 |
+
test_mode ....................................... False
|
| 447 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 448 |
+
tiktoken_pattern ................................ None
|
| 449 |
+
tiktoken_special_tokens ......................... None
|
| 450 |
+
timing_log_level ................................ 0
|
| 451 |
+
timing_log_option ............................... minmax
|
| 452 |
+
titles_data_path ................................ None
|
| 453 |
+
tokenizer_model ................................. None
|
| 454 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 455 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 456 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 457 |
+
tp_comm_bulk_dgrad .............................. True
|
| 458 |
+
tp_comm_bulk_wgrad .............................. True
|
| 459 |
+
tp_comm_overlap ................................. False
|
| 460 |
+
tp_comm_overlap_ag .............................. True
|
| 461 |
+
tp_comm_overlap_cfg ............................. None
|
| 462 |
+
tp_comm_overlap_rs .............................. True
|
| 463 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 464 |
+
tp_comm_split_ag ................................ True
|
| 465 |
+
tp_comm_split_rs ................................ True
|
| 466 |
+
train_data_path ................................. None
|
| 467 |
+
train_iters ..................................... 10
|
| 468 |
+
train_samples ................................... None
|
| 469 |
+
train_sync_interval ............................. None
|
| 470 |
+
transformer_impl ................................ transformer_engine
|
| 471 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 472 |
+
untie_embeddings_and_output_weights ............. False
|
| 473 |
+
use_checkpoint_args ............................. False
|
| 474 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 475 |
+
use_cpu_initialization .......................... None
|
| 476 |
+
use_custom_fsdp ................................. False
|
| 477 |
+
use_dist_ckpt ................................... True
|
| 478 |
+
use_dist_ckpt_deprecated ........................ False
|
| 479 |
+
use_distributed_optimizer ....................... False
|
| 480 |
+
use_flash_attn .................................. False
|
| 481 |
+
use_legacy_models ............................... False
|
| 482 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 483 |
+
use_one_sent_docs ............................... False
|
| 484 |
+
use_persistent_ckpt_worker ...................... False
|
| 485 |
+
use_precision_aware_optimizer ................... False
|
| 486 |
+
use_pytorch_profiler ............................ False
|
| 487 |
+
use_ring_exchange_p2p ........................... False
|
| 488 |
+
use_rope_scaling ................................ False
|
| 489 |
+
use_rotary_position_embeddings .................. False
|
| 490 |
+
use_sharp ....................................... False
|
| 491 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 492 |
+
use_torch_fsdp2 ................................. False
|
| 493 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 494 |
+
use_tp_pp_dp_mapping ............................ False
|
| 495 |
+
v_head_dim ...................................... 128
|
| 496 |
+
valid_data_path ................................. None
|
| 497 |
+
variable_seq_lengths ............................ False
|
| 498 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 499 |
+
vision_backbone_type ............................ vit
|
| 500 |
+
vision_pretraining .............................. False
|
| 501 |
+
vision_pretraining_type ......................... classify
|
| 502 |
+
vocab_extra_ids ................................. 0
|
| 503 |
+
vocab_file ...................................... vocab.json
|
| 504 |
+
vocab_size ...................................... None
|
| 505 |
+
wandb_exp_name ..................................
|
| 506 |
+
wandb_project ...................................
|
| 507 |
+
wandb_save_dir ..................................
|
| 508 |
+
weight_decay .................................... 0.1
|
| 509 |
+
weight_decay_incr_style ......................... constant
|
| 510 |
+
wgrad_deferral_limit ............................ 0
|
| 511 |
+
world_size ...................................... 8
|
| 512 |
+
yaml_cfg ........................................ None
|
| 513 |
+
-------------------- end of arguments ---------------------
|
| 514 |
+
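[Annotation: the dump above is the alphabetically sorted argparse namespace that Megatron prints at startup. A minimal sketch of how such a dump can be produced from any Namespace; dump_args is a hypothetical helper, not Megatron's exact code:]

    def dump_args(args, width=48):
        # Sort keys alphabetically and pad each with dot leaders to a fixed column,
        # mirroring the "key ..... value" layout seen above.
        for key in sorted(vars(args)):
            print(f"  {key} {'.' * max(1, width - len(key))} {getattr(args, key)}")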
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
> building GPT2BPETokenizer tokenizer ...
> padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)
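[Annotation: the padded-vocab line follows from the arguments above. A sketch of the arithmetic, assuming the usual Megatron rule of padding to a multiple of make_vocab_size_divisible_by * tensor_model_parallel_size (here 128 * 1):]

    orig_vocab_size = 50257               # GPT-2 BPE vocabulary
    multiple = 128 * 1                    # make_vocab_size_divisible_by * TP size
    padded = ((orig_vocab_size + multiple - 1) // multiple) * multiple
    assert padded == 50304 and padded - orig_vocab_size == 47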
INFO:megatron.training.initialize:Setting logging level to 0
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
> initializing torch distributed ...
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> initialized tensor model parallel with size 1
> initialized pipeline model parallel with size 1
> setting random seeds to 1234 ...
> compiling dataset index builder ...
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
make: Nothing to be done for 'default'.
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.059 seconds
> compiling and loading fused kernels ...
>>> done with compiling and loading fused kernels. Compilation time: 2.515 seconds
time to initialize megatron (seconds): 7.924
[after megatron is initialized] datetime: 2025-06-21 22:06:42
building GPT model ...
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 562663424
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 562663424
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 562663424
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 562663424
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 562663424
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 562663424
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 562663424
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 562663424
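[Annotation: the 562,663,424 figure can be reproduced from the arguments above, assuming hidden size h = num_attention_heads * kv_channels = 64 * 64 = 4096 and the default FFN size 4h (swiglu is False); with 16 query groups of head dim 64, the fused QKV output width is (64 + 2*16) * 64 = 6144. A back-of-envelope check:]

    h, ffn, qkv_out, L, vocab, seq = 4096, 16384, 6144, 2, 50304, 1024
    embed = vocab * h + seq * h                    # word + learned position embeddings
    per_layer = (h * qkv_out + qkv_out             # fused QKV weight + bias
                 + h * h + h                       # attention output projection
                 + h * ffn + ffn                   # MLP fc1 weight + bias
                 + ffn * h + h                     # MLP fc2 weight + bias
                 + 4 * h)                          # two LayerNorms (weight + bias each)
    total = embed + L * per_layer + 2 * h          # plus the final LayerNorm
    assert total == 562663424                      # matches the log exactly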
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
Params for bucket 1 (562663424 elements, 562663424 padded size):
	module.decoder.final_layernorm.weight
	module.decoder.layers.1.self_attention.linear_qkv.weight
	module.decoder.layers.1.self_attention.linear_proj.weight
	module.decoder.layers.0.self_attention.linear_qkv.weight
	module.decoder.layers.1.mlp.linear_fc2.weight
	module.decoder.layers.1.self_attention.linear_proj.bias
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.0.mlp.linear_fc2.weight
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_proj.weight
	module.embedding.word_embeddings.weight
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.1.self_attention.linear_qkv.bias
	module.decoder.layers.0.mlp.linear_fc2.bias
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_qkv.bias
	module.decoder.layers.0.self_attention.linear_proj.bias
	module.decoder.layers.1.mlp.linear_fc1.weight
	module.decoder.layers.0.mlp.linear_fc1.weight
	module.embedding.position_embeddings.weight
	module.decoder.layers.1.mlp.linear_fc2.bias
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
	module.decoder.final_layernorm.bias
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
	module.decoder.layers.1.mlp.linear_fc1.bias
	module.decoder.layers.0.mlp.linear_fc1.bias
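[Annotation: with overlap_grad_reduce=False and no distributed optimizer, all 562,663,424 gradient elements land in a single flat bucket that is all-reduced across data-parallel ranks after backward. A minimal sketch of what that amounts to, not Megatron's implementation; it assumes an initialized torch.distributed process group:]

    import torch
    import torch.distributed as dist

    def allreduce_single_bucket(params):
        grads = [p.grad for p in params if p.grad is not None]
        buf = torch._utils._flatten_dense_tensors(grads)   # one contiguous buffer
        dist.all_reduce(buf)                               # sum across DP ranks
        buf /= dist.get_world_size()                       # average explicitly
        for g, synced in zip(grads, torch._utils._unflatten_dense_tensors(buf, grads)):
            g.copy_(synced)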
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14972775eea0>, config_logger_dir='')
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
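[Annotation: the schedule implied by the config is linear warmup from lr_warmup_init=0.0 over lr_warmup_iters=2, then cosine decay to min_lr=0.0 over lr_decay_iters=150000. At step 0 the rate is exactly 0, and skipped overflow steps do not advance the optimizer, which is why the early iteration lines below all report "learning rate: 0.000000E+00". A sketch of that schedule:]

    import math

    def lr_at(it, lr=5e-4, warmup=2, decay=150000, min_lr=0.0, init=0.0):
        if it < warmup:                                  # linear warmup from 0.0
            return init + (lr - init) * it / warmup
        frac = min((it - warmup) / (decay - warmup), 1.0)
        return min_lr + (lr - min_lr) * 0.5 * (1 + math.cos(math.pi * frac))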
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
will not load any checkpoints and will start from random
(min, max) time across ranks (ms):
    load-checkpoint ................................: (3.32, 3.38)
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 22:06:42
> building train, validation, and test datasets ...
> datasets target sizes (minimum size):
    train:      10
    validation: 1
    test:       1
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
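[Annotation: the split_matrix above is the "1,1,1" split normalized into cumulative [start, end) bounds over the corpus. A sketch of that normalization:]

    weights = [1, 1, 1]                      # the "1,1,1" train/valid/test split
    total = sum(weights)
    bounds, acc = [], 0.0
    for w in weights:
        bounds.append((acc, acc + w / total))
        acc += w / total
    # bounds -> [(0.0, 0.333...), (0.333..., 0.666...), (0.666..., 1.0)]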
> building train, validation, and test datasets for GPT ...
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=1024, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14972781f770>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:> time elapsed: 0.005874 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66592
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:> time elapsed: 0.003374 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66562
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:> time elapsed: 0.003366 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66686
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
> finished creating GPT datasets ...
[after dataloaders are built] datetime: 2025-06-21 22:06:42
done with setup ...
training ...(min, max) time across ranks (ms):
    model-and-optimizer-setup ......................: (271.13, 289.44)
    train/valid/test-data-iterators-setup ..........: (168.82, 202.81)

Setting rerun_state_machine.current_iteration to 0...
[before the start of training step] datetime: 2025-06-21 22:06:43
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
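[Annotation: the "after cp" shapes show each rank keeping a 1024 / 8 = 128-token slice of the sequence while the attention mask keeps all 1024 key positions, hence [1, 1, 128, 1024]. This is consistent with a context-parallel size of 8 (an assumption; tensor and pipeline parallel are both 1 here). A minimal illustration of such slicing, assuming a simple contiguous split rather than the script's exact scheme:]

    def slice_for_cp(batch, cp_rank, cp_size=8):
        seq = batch["tokens"].shape[1]                 # 1024
        chunk = seq // cp_size                         # 128 tokens per rank
        sl = slice(cp_rank * chunk, (cp_rank + 1) * chunk)
        out = {k: v[:, sl] for k, v in batch.items() if k != "attention_mask"}
        # Queries are sliced; keys stay whole, so the mask becomes [1, 1, 128, 1024].
        out["attention_mask"] = batch["attention_mask"][:, :, sl, :]
        return out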
Start exporting trace 0
Done exporting trace 0
Number of parameters in transformer block in billions:  0.35
[2025-06-21 22:06:58] iteration        1/      10 | consumed samples: 1 | elapsed time per iteration (ms): 15326.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
Number of parameters in embedding layers in billions: 0.21
Total number of parameters in billions: 0.56
Number of parameters in most loaded shard in billions: 0.5584
Theoretical memory footprints: weight and optimizer=9585.70 MB
[Rank 2] (after 1 iterations) memory (MB) | allocated: 6874.138671875 | max allocated: 6874.1396484375 | reserved: 7222.0 | max reserved: 7222.0
[Rank 5] (after 1 iterations) memory (MB) | allocated: 6874.138671875 | max allocated: 6874.1396484375 | reserved: 7224.0 | max reserved: 7224.0
[Rank 4] (after 1 iterations) memory (MB) | allocated: 6874.138671875 | max allocated: 6874.1396484375 | reserved: 7222.0 | max reserved: 7222.0
[Rank 0] (after 1 iterations) memory (MB) | allocated: 6874.138671875 | max allocated: 6874.1396484375 | reserved: 7222.0 | max reserved: 7222.0
[Rank 1] (after 1 iterations) memory (MB) | allocated: 6874.138671875 | max allocated: 6874.1396484375 | reserved: 7222.0 | max reserved: 7222.0
[Rank 3] (after 1 iterations) memory (MB) | allocated: 6874.138671875 | max allocated: 6874.1396484375 | reserved: 7222.0 | max reserved: 7222.0
[Rank 7] (after 1 iterations) memory (MB) | allocated: 6874.138671875 | max allocated: 6874.1396484375 | reserved: 7224.0 | max reserved: 7224.0
[Rank 6] (after 1 iterations) memory (MB) | allocated: 6874.138671875 | max allocated: 6874.1396484375 | reserved: 7222.0 | max reserved: 7222.0
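[Annotation: a rough check of the theoretical footprint above, assuming the usual fp16 + Adam accounting of 18 bytes per parameter (2 fp16 param + 4 fp32 grad + 4 fp32 main param + 4 + 4 Adam moments), applied to the most loaded shard:]

    bytes_per_param = 2 + 4 + 4 + 4 + 4            # = 18
    mb = 0.5584e9 * bytes_per_param / 2**20        # ~9585.9 MB, matching "9585.70 MB"

[The ~6.9 GB actually allocated per rank is lower largely because activations for a single [1, 128] context-parallel slice are small at this model size.]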
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
Start exporting trace 1
Done exporting trace 1
[2025-06-21 22:06:58] iteration        2/      10 | consumed samples: 2 | elapsed time per iteration (ms): 75.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
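[Annotation: the loss scale visible in the iteration lines starts at initial_loss_scale = 4294967296 (2**32) and halves on every skipped overflow step: 2**32, 2**31, 2**30, ... A minimal sketch of that behavior, simplified relative to Megatron's DynamicGradScaler (the hysteresis=2 setting is not modeled here):]

    class SimpleLossScaler:
        """Halve on overflow, double after a window of clean steps."""
        def __init__(self, scale=2.0**32, window=1000, min_scale=1.0):
            self.scale, self.window, self.min_scale, self.good = scale, window, min_scale, 0

        def update(self, found_inf: bool):
            if found_inf:                                  # skip step, back off
                self.scale = max(self.scale / 2, self.min_scale)
                self.good = 0
            elif (self.good := self.good + 1) == self.window:
                self.scale *= 2                            # grow back cautiously
                self.good = 0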
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
[interleaved stdout from the remaining ranks: each prints the same five "batch tensor" lines as above]
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
[interleaved stdout from the remaining ranks: each prints the same five "batch tensor after cp" lines as above]
Start exporting trace 2
Done exporting trace 2
[2025-06-21 22:06:58] iteration        3/      10 | consumed samples: 3 | elapsed time per iteration (ms): 48.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
[interleaved "batch tensor" ([1, 1024]) and "batch tensor after cp" ([1, 128]) stdout from several ranks]
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
Start exporting trace 3
Done exporting trace 3
[2025-06-21 22:06:58] iteration        4/      10 | consumed samples: 4 | elapsed time per iteration (ms): 45.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
Start exporting trace 4
Done exporting trace 4
[2025-06-21 22:06:58] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 44.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
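The shape pairs above show what the context-parallel (CP) split does: each rank keeps a 128-token slice of the 1024-token sequence, so the CP size is 1024/128 = 8, and only the query dimension of the attention mask is sharded ([1, 1, 1024, 1024] to [1, 1, 128, 1024]) while the key dimension stays full. Below is a minimal sketch of such a slicing step, assuming a plain contiguous split; Megatron-LM's real CP split is load-balanced for causal attention, and the helper name is hypothetical.

```python
import torch

def slice_batch_for_cp(batch: dict, cp_rank: int, cp_size: int) -> dict:
    """Shard the sequence dimension of a batch across context-parallel ranks."""
    seq_len = batch["tokens"].size(1)            # 1024 in this run
    chunk = seq_len // cp_size                   # 128 for cp_size=8
    sl = slice(cp_rank * chunk, (cp_rank + 1) * chunk)
    out = {}
    for key, t in batch.items():
        if key == "attention_mask":
            # Query dim is sharded, key dim stays full:
            # [1, 1, 1024, 1024] -> [1, 1, 128, 1024]
            out[key] = t[:, :, sl, :]
        else:
            # [1, 1024] -> [1, 128]
            out[key] = t[:, sl]
    return out
```

With cp_size=8 this reproduces exactly the "batch tensor after cp" shapes printed above.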
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
Start exporting trace 5
Done exporting trace 5
[2025-06-21 22:06:58] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 45.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
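The "Start exporting trace N" / "Done exporting trace N" pair brackets a per-iteration profiler trace dump. A plausible reconstruction using torch.profiler is sketched below; the actual script may use its own tracer, and the function name and output path are assumptions.

```python
from torch.profiler import profile, ProfilerActivity

def train_step_with_trace(step_fn, iteration: int, out_dir: str = "traces"):
    """Run one training step under the profiler, then export a Chrome trace."""
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        step_fn()
    print(f"Start exporting trace {iteration}")
    prof.export_chrome_trace(f"{out_dir}/trace_{iteration}.json")
    print(f"Done exporting trace {iteration}")
```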
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
Start exporting trace 6
Done exporting trace 6
[2025-06-21 22:06:58] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 43.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
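Each of the eight ranks prints the same five shape lines before and after the CP split, and because all ranks share one stdout the raw log interleaves them mid-line; the blocks are untangled above. A hypothetical helper that would produce exactly this format:

```python
def log_batch_shapes(batch: dict, prefix: str = "batch tensor") -> None:
    # Hypothetical reconstruction of the logging helper; every rank calls it,
    # so eight unsynchronized prints interleave in the combined stdout.
    for key in ("tokens", "labels", "loss_mask", "attention_mask", "position_ids"):
        print(f"{prefix}: {key} {batch[key].shape}")

# log_batch_shapes(batch)                                # before the CP split
# log_batch_shapes(cp_batch, "batch tensor after cp")    # after the CP split
```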
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
Start exporting trace 7
Done exporting trace 7
[2025-06-21 22:06:58] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 43.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
Start exporting trace 8
Done exporting trace 8
[2025-06-21 22:06:58] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 41.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([1, 1024])
batch tensor: labels torch.Size([1, 1024])
batch tensor: loss_mask torch.Size([1, 1024])
batch tensor: attention_mask torch.Size([1, 1, 1024, 1024])
batch tensor: position_ids torch.Size([1, 1024])
batch tensor after cp: tokens torch.Size([1, 128])
batch tensor after cp: labels torch.Size([1, 128])
batch tensor after cp: loss_mask torch.Size([1, 128])
batch tensor after cp: attention_mask torch.Size([1, 1, 128, 1024])
batch tensor after cp: position_ids torch.Size([1, 128])
Start exporting trace 9
Done exporting trace 9
[2025-06-21 22:06:58] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 45.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
[after training is done] datetime: 2025-06-21 22:06:58
saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
DEBUG:megatron.training.checkpointing:rank: 5, takes 0.03761911392211914 to prepare state dict for ckpt
DEBUG:megatron.training.checkpointing:rank: 0, takes 0.03771018981933594 to prepare state dict for ckpt
DEBUG:megatron.training.checkpointing:rank: 2, takes 0.03774237632751465 to prepare state dict for ckpt
DEBUG:megatron.training.checkpointing:rank: 1, takes 0.037798404693603516 to prepare state dict for ckpt
DEBUG:megatron.training.checkpointing:rank: 3, takes 0.03775763511657715 to prepare state dict for ckpt
DEBUG:megatron.training.checkpointing:rank: 6, takes 0.037831783294677734 to prepare state dict for ckpt
DEBUG:megatron.training.checkpointing:rank: 4, takes 0.03794074058532715 to prepare state dict for ckpt
DEBUG:megatron.training.checkpointing:rank: 7, takes 0.037969350814819336 to prepare state dict for ckpt
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
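The tail of the log shows the torch_dist checkpoint path: each of the eight ranks spends roughly 0.038 s building its sharded state dict, after which the fully-parallel strategy distributes the write work (the "Apply save parallelization" line, emitted once per rank). A sketch of that flow follows, assuming megatron.core's dist_checkpointing API; exact signatures and the choice of base strategy vary across Megatron-LM versions.

```python
# Sketch of the save path implied by the DEBUG lines above (assumed API).
import time
from megatron.core import dist_checkpointing
from megatron.core.dist_checkpointing.strategies.fully_parallel import (
    FullyParallelSaveStrategyWrapper,
)

def save_checkpoint(model, base_save_strategy, rank: int,
                    ckpt_dir: str = "gpt-checkpoint"):
    t0 = time.time()
    sharded_sd = model.sharded_state_dict()  # per-rank description of each shard
    print(f"rank: {rank}, takes {time.time() - t0} to prepare state dict for ckpt")

    # "Apply save parallelization": wrap the base strategy so shards are
    # redistributed and every rank writes a roughly equal part of the checkpoint.
    strategy = FullyParallelSaveStrategyWrapper(base_save_strategy)
    dist_checkpointing.save(sharded_sd, ckpt_dir, strategy)
```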