GindaChen committed
Commit 5766c7e · verified · 1 Parent(s): 28276c9

Upload folder using huggingface_hub

Files changed (45)
  1. .gitattributes +1 -0
  2. attnserver.run_attnserver.slurm.sh.343188.err.log +0 -0
  3. attnserver.run_attnserver.slurm.sh.343188.out.log +198 -0
  4. attnserver.run_attnserver.slurm.sh.343195.err.log +0 -0
  5. attnserver.run_attnserver.slurm.sh.343195.out.log +772 -0
  6. attnserver.run_attnserver.slurm.sh.343196.err.log +2 -2
  7. attnserver.run_attnserver.slurm.sh.343196.out.log +1531 -0
  8. attnserver.run_attnserver.slurm.sh.343202.err.log +0 -0
  9. attnserver.run_attnserver.slurm.sh.343202.out.log +0 -0
  10. attnserver.run_attnserver.slurm.sh.343203.err.log +0 -0
  11. attnserver.run_attnserver.slurm.sh.343203.out.log +0 -0
  12. attnserver.run_attnserver.slurm.sh.343204.err.log +0 -0
  13. attnserver.run_attnserver.slurm.sh.343204.out.log +0 -0
  14. attnserver.run_attnserver.slurm.sh.343205.err.log +0 -0
  15. attnserver.run_attnserver.slurm.sh.343205.out.log +0 -0
  16. attnserver.run_attnserver.slurm.sh.343206.err.log +0 -0
  17. attnserver.run_attnserver.slurm.sh.343206.out.log +0 -0
  18. attnserver.run_attnserver.slurm.sh.343207.err.log +0 -0
  19. attnserver.run_attnserver.slurm.sh.343207.out.log +0 -0
  20. attnserver.run_attnserver.slurm.sh.343208.err.log +0 -0
  21. attnserver.run_attnserver.slurm.sh.343208.out.log +0 -0
  22. attnserver.run_attnserver.slurm.sh.343209.err.log +0 -0
  23. attnserver.run_attnserver.slurm.sh.343209.out.log +0 -0
  24. attnserver.run_attnserver.slurm.sh.343210.err.log +0 -0
  25. attnserver.run_attnserver.slurm.sh.343210.out.log +0 -0
  26. attnserver.run_attnserver.slurm.sh.343211.err.log +0 -0
  27. attnserver.run_attnserver.slurm.sh.343211.out.log +0 -0
  28. attnserver.run_attnserver.slurm.sh.343212.err.log +0 -0
  29. attnserver.run_attnserver.slurm.sh.343212.out.log +0 -0
  30. attnserver.run_attnserver.slurm.sh.343213.err.log +0 -0
  31. attnserver.run_attnserver.slurm.sh.343213.out.log +0 -0
  32. attnserver.run_attnserver.slurm.sh.343214.err.log +0 -0
  33. attnserver.run_attnserver.slurm.sh.343214.out.log +0 -0
  34. attnserver.run_attnserver.slurm.sh.343219.err.log +0 -0
  35. attnserver.run_attnserver.slurm.sh.343219.out.log +0 -0
  36. attnserver.run_attnserver.slurm.sh.343220.err.log +0 -0
  37. attnserver.run_attnserver.slurm.sh.343220.out.log +0 -0
  38. attnserver.run_attnserver.slurm.sh.343221.err.log +0 -0
  39. attnserver.run_attnserver.slurm.sh.343221.out.log +0 -0
  40. attnserver.run_attnserver.slurm.sh.343222.err.log +543 -0
  41. attnserver.run_attnserver.slurm.sh.343222.out.log +0 -0
  42. attnserver.run_attnserver.slurm.sh.343223.err.log +156 -0
  43. attnserver.run_attnserver.slurm.sh.343223.out.log +19 -0
  44. attnserver.run_attnserver.slurm.sh.343225.err.log +199 -0
  45. attnserver.run_attnserver.slurm.sh.343225.out.log +0 -0
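
The commit message above indicates the folder was pushed with huggingface_hub's upload_folder, which packs all 45 files into a single commit. A minimal sketch of such a call, assuming a hypothetical local folder path and repo id (the actual target repo and token handling are not shown on this page):

from huggingface_hub import HfApi

api = HfApi()  # authentication comes from HF_TOKEN or a prior huggingface-cli login

# Upload every file under folder_path as one commit on the target repo.
# folder_path and repo_id below are placeholders for illustration.
api.upload_folder(
    folder_path="./attnserver-logs",
    repo_id="GindaChen/attnserver-logs",
    repo_type="dataset",
    commit_message="Upload folder using huggingface_hub",
)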
.gitattributes CHANGED
@@ -62,3 +62,4 @@ attnserver.run_attnserver.slurm.sh.343191.err.log filter=lfs diff=lfs merge=lfs
62   attnserver.run_attnserver.slurm.sh.343192.err.log filter=lfs diff=lfs merge=lfs -text
63   attnserver.run_attnserver.slurm.sh.343194.err.log filter=lfs diff=lfs merge=lfs -text
64   attnserver.run_attnserver.slurm.sh.343196.err.log filter=lfs diff=lfs merge=lfs -text
65 + attnserver.run_attnserver.slurm.sh.343205.err.log filter=lfs diff=lfs merge=lfs -text
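
Each line in this diff binds one log file to Git LFS: filter=lfs stores a small LFS pointer in place of the payload, diff=lfs and merge=lfs route diffing and merging through the LFS driver, and -text disables text normalization. As a rough sketch of how such per-path rules resolve (a hypothetical helper using fnmatch, which here matches the literal-path patterns above; not code from this repo):

from fnmatch import fnmatch

# Patterns copied from the .gitattributes entries above.
LFS_PATTERNS = [
    "attnserver.run_attnserver.slurm.sh.343196.err.log",
    "attnserver.run_attnserver.slurm.sh.343205.err.log",
]

def is_lfs_tracked(path: str) -> bool:
    # True if the path matches one of the LFS-tracked patterns.
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("attnserver.run_attnserver.slurm.sh.343205.err.log"))  # True
print(is_lfs_tracked("attnserver.run_attnserver.slurm.sh.343205.out.log"))  # False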
attnserver.run_attnserver.slurm.sh.343188.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343188.out.log CHANGED
@@ -125448,3 +125448,201 @@ batch tensor after cp: labels torch.Size([1, 16384])
125448   batch tensor after cp: loss_mask torch.Size([1, 16384])
125449   batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
125450   batch tensor after cp: position_ids torch.Size([1, 16384])
125451 + batch tensor: tokens torch.Size([1, 131072])
125452 + batch tensor: labels torch.Size([1, 131072])
125453 + batch tensor: loss_mask torch.Size([1, 131072])
125454 + batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
125455 + batch tensor: position_ids torch.Size([1, 131072])
125456 + batch tensor after cp: tokens torch.Size([1, 16384])
125457 + batch tensor after cp: labels torch.Size([1, 16384])
125458 + batch tensor after cp: loss_mask torch.Size([1, 16384])
125459 + batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
125460 + batch tensor after cp: position_ids torch.Size([1, 16384])
125461 + Start exporting trace 9
125462 + Done exporting trace 9
125463 + [2025-06-21 21:20:32] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 131186.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
125464 + [after training is done] datetime: 2025-06-21 21:20:32
125465 + saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
125466 + DEBUG:megatron.training.checkpointing:rank: 21, takes 0.03559446334838867 to prepare state dict for ckpt
125467 + DEBUG:megatron.training.checkpointing:rank: 18, takes 0.035654306411743164 to prepare state dict for ckpt
125468 + DEBUG:megatron.training.checkpointing:rank: 23, takes 0.03560662269592285 to prepare state dict for ckpt
125469 + DEBUG:megatron.training.checkpointing:rank: 19, takes 0.03567671775817871 to prepare state dict for ckpt
125470 + DEBUG:megatron.training.checkpointing:rank: 20, takes 0.035619258880615234 to prepare state dict for ckpt
125471 + DEBUG:megatron.training.checkpointing:rank: 17, takes 0.03568840026855469 to prepare state dict for ckpt
125472 + DEBUG:megatron.training.checkpointing:rank: 22, takes 0.035652875900268555 to prepare state dict for ckpt
125473 + DEBUG:megatron.training.checkpointing:rank: 1, takes 0.038015127182006836 to prepare state dict for ckpt
125474 + DEBUG:megatron.training.checkpointing:rank: 6, takes 0.03799891471862793 to prepare state dict for ckpt
125475 + DEBUG:megatron.training.checkpointing:rank: 7, takes 0.038033485412597656 to prepare state dict for ckpt
125476 + DEBUG:megatron.training.checkpointing:rank: 5, takes 0.038045644760131836 to prepare state dict for ckpt
125477 + DEBUG:megatron.training.checkpointing:rank: 4, takes 0.03802990913391113 to prepare state dict for ckpt
125478 + DEBUG:megatron.training.checkpointing:rank: 2, takes 0.0380549430847168 to prepare state dict for ckpt
125479 + DEBUG:megatron.training.checkpointing:rank: 0, takes 0.039293766021728516 to prepare state dict for ckpt
125480 + DEBUG:megatron.training.checkpointing:rank: 60, takes 0.03842616081237793 to prepare state dict for ckpt
125481 + DEBUG:megatron.training.checkpointing:rank: 62, takes 0.03842592239379883 to prepare state dict for ckpt
125482 + DEBUG:megatron.training.checkpointing:rank: 38, takes 0.03788471221923828 to prepare state dict for ckpt
125483 + DEBUG:megatron.training.checkpointing:rank: 39, takes 0.03789353370666504 to prepare state dict for ckpt
125484 + DEBUG:megatron.training.checkpointing:rank: 54, takes 0.0380856990814209 to prepare state dict for ckpt
125485 + DEBUG:megatron.training.checkpointing:rank: 59, takes 0.03845810890197754 to prepare state dict for ckpt
125486 + DEBUG:megatron.training.checkpointing:rank: 61, takes 0.03849649429321289 to prepare state dict for ckpt
125487 + DEBUG:megatron.training.checkpointing:rank: 63, takes 0.03849530220031738 to prepare state dict for ckpt
125488 + DEBUG:megatron.training.checkpointing:rank: 58, takes 0.03854060173034668 to prepare state dict for ckpt
125489 + DEBUG:megatron.training.checkpointing:rank: 33, takes 0.0381929874420166 to prepare state dict for ckpt
125490 + DEBUG:megatron.training.checkpointing:rank: 53, takes 0.0381319522857666 to prepare state dict for ckpt
125491 + DEBUG:megatron.training.checkpointing:rank: 56, takes 0.038945674896240234 to prepare state dict for ckpt
125492 + DEBUG:megatron.training.checkpointing:rank: 36, takes 0.038127899169921875 to prepare state dict for ckpt
125493 + DEBUG:megatron.training.checkpointing:rank: 51, takes 0.03824138641357422 to prepare state dict for ckpt
125494 + DEBUG:megatron.training.checkpointing:rank: 49, takes 0.03826737403869629 to prepare state dict for ckpt
125495 + DEBUG:megatron.training.checkpointing:rank: 37, takes 0.038047075271606445 to prepare state dict for ckpt
125496 + DEBUG:megatron.training.checkpointing:rank: 55, takes 0.038230180740356445 to prepare state dict for ckpt
125497 + DEBUG:megatron.training.checkpointing:rank: 32, takes 0.03902602195739746 to prepare state dict for ckpt
125498 + DEBUG:megatron.training.checkpointing:rank: 52, takes 0.03822898864746094 to prepare state dict for ckpt
125499 + DEBUG:megatron.training.checkpointing:rank: 48, takes 0.039357662200927734 to prepare state dict for ckpt
125500 + DEBUG:megatron.training.checkpointing:rank: 46, takes 0.04223155975341797 to prepare state dict for ckpt
125501 + DEBUG:megatron.training.checkpointing:rank: 47, takes 0.04227733612060547 to prepare state dict for ckpt
125502 + DEBUG:megatron.training.checkpointing:rank: 42, takes 0.04228615760803223 to prepare state dict for ckpt
125503 + DEBUG:megatron.training.checkpointing:rank: 44, takes 0.04235696792602539 to prepare state dict for ckpt
125504 + DEBUG:megatron.training.checkpointing:rank: 13, takes 0.0433039665222168 to prepare state dict for ckpt
125505 + DEBUG:megatron.training.checkpointing:rank: 15, takes 0.043365478515625 to prepare state dict for ckpt
125506 + DEBUG:megatron.training.checkpointing:rank: 12, takes 0.043390750885009766 to prepare state dict for ckpt
125507 + DEBUG:megatron.training.checkpointing:rank: 11, takes 0.04373908042907715 to prepare state dict for ckpt
125508 + DEBUG:megatron.training.checkpointing:rank: 14, takes 0.04376220703125 to prepare state dict for ckpt
125509 + DEBUG:megatron.training.checkpointing:rank: 8, takes 0.04438281059265137 to prepare state dict for ckpt
125510 + DEBUG:megatron.training.checkpointing:rank: 34, takes 0.04465961456298828 to prepare state dict for ckpt
125511 + DEBUG:megatron.training.checkpointing:rank: 35, takes 0.04468655586242676 to prepare state dict for ckpt
125512 + DEBUG:megatron.training.checkpointing:rank: 45, takes 0.04526782035827637 to prepare state dict for ckpt
125513 + DEBUG:megatron.training.checkpointing:rank: 57, takes 0.04595685005187988 to prepare state dict for ckpt
125514 + DEBUG:megatron.training.checkpointing:rank: 3, takes 0.046442270278930664 to prepare state dict for ckpt
125515 + DEBUG:megatron.training.checkpointing:rank: 28, takes 0.046813249588012695 to prepare state dict for ckpt
125516 + DEBUG:megatron.training.checkpointing:rank: 31, takes 0.046875715255737305 to prepare state dict for ckpt
125517 + DEBUG:megatron.training.checkpointing:rank: 27, takes 0.04690742492675781 to prepare state dict for ckpt
125518 + DEBUG:megatron.training.checkpointing:rank: 30, takes 0.0469052791595459 to prepare state dict for ckpt
125519 + DEBUG:megatron.training.checkpointing:rank: 50, takes 0.046746253967285156 to prepare state dict for ckpt
125520 + DEBUG:megatron.training.checkpointing:rank: 41, takes 0.04684638977050781 to prepare state dict for ckpt
125521 + DEBUG:megatron.training.checkpointing:rank: 40, takes 0.05038762092590332 to prepare state dict for ckpt
125522 + DEBUG:megatron.training.checkpointing:rank: 43, takes 0.050742149353027344 to prepare state dict for ckpt
125523 + DEBUG:megatron.training.checkpointing:rank: 10, takes 0.05088043212890625 to prepare state dict for ckpt
125524 + DEBUG:megatron.training.checkpointing:rank: 26, takes 0.050818681716918945 to prepare state dict for ckpt
125525 + DEBUG:megatron.training.checkpointing:rank: 9, takes 0.05148434638977051 to prepare state dict for ckpt
125526 + DEBUG:megatron.training.checkpointing:rank: 29, takes 0.052930355072021484 to prepare state dict for ckpt
125527 + DEBUG:megatron.training.checkpointing:rank: 25, takes 0.053748369216918945 to prepare state dict for ckpt
125528 + DEBUG:megatron.training.checkpointing:rank: 24, takes 0.05582380294799805 to prepare state dict for ckpt
125529 + DEBUG:megatron.training.checkpointing:rank: 16, takes 0.1492319107055664 to prepare state dict for ckpt
125530 + DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
125531-125592 + (the same "Apply save parallelization" DEBUG line, repeated 62 more times)
125593 + DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(52428800), 1), (np.int64(46137344), 2), (np.int64(46137344), 3), (np.int64(41959936), 4), (np.int64(41959936), 5), (np.int64(44040192), 6), (np.int64(44040192), 7)]
125594-125648 + (the same distribute_shards_to_ranks distribution line, repeated 55 more times)
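
The distribute_shards_to_ranks lines above are emitted once per participating rank: the fully parallel save strategy splits the checkpoint into shards and assigns each one a (size, rank) pair so the write load is spread across the 8 ranks of the group. A minimal sketch of one way such an assignment can be computed (greedy, largest shard to the least-loaded rank); this is an illustration, not necessarily Megatron's exact algorithm:

import heapq

def distribute_shards_to_ranks(shard_sizes, num_ranks):
    # Greedy bin packing: give each shard (largest first) to the rank
    # with the fewest bytes assigned so far; returns (size, rank)
    # pairs in the spirit of the DEBUG lines above.
    load = [(0, rank) for rank in range(num_ranks)]
    heapq.heapify(load)
    assignment = []
    for size in sorted(shard_sizes, reverse=True):
        assigned, rank = heapq.heappop(load)
        assignment.append((size, rank))
        heapq.heappush(load, (assigned + size, rank))
    return assignment

# Shard sizes on the order of those logged above (bytes).
sizes = [104857600, 52428800, 46137344, 46137344,
         41959936, 41959936, 44040192, 44040192]
print(distribute_shards_to_ranks(sizes, num_ranks=8))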
attnserver.run_attnserver.slurm.sh.343195.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343195.out.log CHANGED
@@ -68053,3 +68053,775 @@ batch tensor after cp: labels torch.Size([1, 32768])
68053   batch tensor after cp: loss_mask torch.Size([1, 32768])
68054   batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68055   batch tensor after cp: position_ids torch.Size([1, 32768])
68056
+ batch tensor: tokens torch.Size([1, 131072])
68057
+ batch tensor: labels torch.Size([1, 131072])
68058
+ batch tensor: loss_mask torch.Size([1, 131072])
68059
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68060
+ batch tensor: position_ids torch.Size([1, 131072])
68061
+ batch tensor after cp: tokens torch.Size([1, 32768])
68062
+ batch tensor after cp: labels torch.Size([1, 32768])
68063
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68064
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68065
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68066
+ batch tensor: tokens torch.Size([1, 131072])
68067
+ batch tensor: labels torch.Size([1, 131072])
68068
+ batch tensor: loss_mask torch.Size([1, 131072])
68069
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68070
+ batch tensor: position_ids torch.Size([1, 131072])
68071
+ batch tensor after cp: tokens torch.Size([1, 32768])
68072
+ batch tensor after cp: labels torch.Size([1, 32768])
68073
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68074
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68075
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68076
+ batch tensor: tokens torch.Size([1, 131072])
68077
+ batch tensor: labels torch.Size([1, 131072])
68078
+ batch tensor: loss_mask torch.Size([1, 131072])
68079
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68080
+ batch tensor: position_ids torch.Size([1, 131072])
68081
+ batch tensor after cp: tokens torch.Size([1, 32768])
68082
+ batch tensor after cp: labels torch.Size([1, 32768])
68083
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68084
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68085
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68086
+ Start exporting trace 7
68087
+ Done exporting trace 7
68088
+ [2025-06-21 21:20:47] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 139675.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
68089
+ batch tensor: tokens torch.Size([1, 131072])
68090
+ batch tensor: labels torch.Size([1, 131072])
68091
+ batch tensor: loss_mask torch.Size([1, 131072])
68092
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68093
+ batch tensor: position_ids torch.Size([1, 131072])
68094
+ batch tensor after cp: tokens torch.Size([1, 32768])
68095
+ batch tensor after cp: labels torch.Size([1, 32768])
68096
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68097
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68098
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68099
+ batch tensor: tokens torch.Size([1, 131072])
68100
+ batch tensor: labels torch.Size([1, 131072])
68101
+ batch tensor: loss_mask torch.Size([1, 131072])
68102
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68103
+ batch tensor: position_ids torch.Size([1, 131072])
68104
+ batch tensor after cp: tokens torch.Size([1, 32768])
68105
+ batch tensor after cp: labels torch.Size([1, 32768])
68106
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68107
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68108
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68109
+ batch tensor: tokens torch.Size([1, 131072])
68110
+ batch tensor: labels torch.Size([1, 131072])
68111
+ batch tensor: loss_mask torch.Size([1, 131072])
68112
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68113
+ batch tensor: position_ids torch.Size([1, 131072])
68114
+ batch tensor after cp: tokens torch.Size([1, 32768])
68115
+ batch tensor after cp: labels torch.Size([1, 32768])
68116
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68117
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68118
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68119
+ batch tensor: tokens torch.Size([1, 131072])
68120
+ batch tensor: labels torch.Size([1, 131072])
68121
+ batch tensor: loss_mask torch.Size([1, 131072])
68122
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68123
+ batch tensor: position_ids torch.Size([1, 131072])
68124
+ batch tensor after cp: tokens torch.Size([1, 32768])
68125
+ batch tensor after cp: labels torch.Size([1, 32768])
68126
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68127
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68128
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68129
+ batch tensor: tokens torch.Size([1, 131072])
68130
+ batch tensor: labels torch.Size([1, 131072])
68131
+ batch tensor: loss_mask torch.Size([1, 131072])
68132
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68133
+ batch tensor: position_ids torch.Size([1, 131072])
68134
+ batch tensor after cp: tokens torch.Size([1, 32768])
68135
+ batch tensor after cp: labels torch.Size([1, 32768])
68136
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68137
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68138
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68139
+ batch tensor: tokens torch.Size([1, 131072])
68140
+ batch tensor: labels torch.Size([1, 131072])
68141
+ batch tensor: loss_mask torch.Size([1, 131072])
68142
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68143
+ batch tensor: position_ids torch.Size([1, 131072])
68144
+ batch tensor after cp: tokens torch.Size([1, 32768])
68145
+ batch tensor after cp: labels torch.Size([1, 32768])
68146
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68147
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68148
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68149
+ batch tensor: tokens torch.Size([1, 131072])
68150
+ batch tensor: labels torch.Size([1, 131072])
68151
+ batch tensor: loss_mask torch.Size([1, 131072])
68152
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68153
+ batch tensor: position_ids torch.Size([1, 131072])
68154
+ batch tensor after cp: tokens torch.Size([1, 32768])
68155
+ batch tensor after cp: labels torch.Size([1, 32768])
68156
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68157
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68158
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68159
+ batch tensor: tokens torch.Size([1, 131072])
68160
+ batch tensor: labels torch.Size([1, 131072])
68161
+ batch tensor: loss_mask torch.Size([1, 131072])
68162
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68163
+ batch tensor: position_ids torch.Size([1, 131072])
68164
+ batch tensor after cp: tokens torch.Size([1, 32768])
68165
+ batch tensor after cp: labels torch.Size([1, 32768])
68166
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68167
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68168
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68169
+ batch tensor: tokens torch.Size([1, 131072])
68170
+ batch tensor: labels torch.Size([1, 131072])
68171
+ batch tensor: loss_mask torch.Size([1, 131072])
68172
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68173
+ batch tensor: position_ids torch.Size([1, 131072])
68174
+ batch tensor after cp: tokens torch.Size([1, 32768])
68175
+ batch tensor after cp: labels torch.Size([1, 32768])
68176
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68177
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68178
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68179
+ batch tensor: tokens torch.Size([1, 131072])
68180
+ batch tensor: labels torch.Size([1, 131072])
68181
+ batch tensor: loss_mask torch.Size([1, 131072])
68182
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68183
+ batch tensor: position_ids torch.Size([1, 131072])
68184
+ batch tensor after cp: tokens torch.Size([1, 32768])
68185
+ batch tensor after cp: labels torch.Size([1, 32768])
68186
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68187
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68188
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68189
+ batch tensor: tokens torch.Size([1, 131072])
68190
+ batch tensor: labels torch.Size([1, 131072])
68191
+ batch tensor: loss_mask torch.Size([1, 131072])
68192
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68193
+ batch tensor: position_ids torch.Size([1, 131072])
68194
+ batch tensor after cp: tokens torch.Size([1, 32768])
68195
+ batch tensor after cp: labels torch.Size([1, 32768])
68196
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68197
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68198
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68199
+ batch tensor: tokens torch.Size([1, 131072])
68200
+ batch tensor: labels torch.Size([1, 131072])
68201
+ batch tensor: loss_mask torch.Size([1, 131072])
68202
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68203
+ batch tensor: position_ids torch.Size([1, 131072])
68204
+ batch tensor after cp: tokens torch.Size([1, 32768])
68205
+ batch tensor after cp: labels torch.Size([1, 32768])
68206
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68207
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68208
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68209
+ batch tensor: tokens torch.Size([1, 131072])
68210
+ batch tensor: labels torch.Size([1, 131072])
68211
+ batch tensor: loss_mask torch.Size([1, 131072])
68212
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68213
+ batch tensor: position_ids torch.Size([1, 131072])
68214
+ batch tensor after cp: tokens torch.Size([1, 32768])
68215
+ batch tensor after cp: labels torch.Size([1, 32768])
68216
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68217
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68218
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68219
+ batch tensor: tokens torch.Size([1, 131072])
68220
+ batch tensor: labels torch.Size([1, 131072])
68221
+ batch tensor: loss_mask torch.Size([1, 131072])
68222
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68223
+ batch tensor: position_ids torch.Size([1, 131072])
68224
+ batch tensor after cp: tokens torch.Size([1, 32768])
68225
+ batch tensor after cp: labels torch.Size([1, 32768])
68226
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68227
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68228
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68229
+ batch tensor: tokens torch.Size([1, 131072])
68230
+ batch tensor: labels torch.Size([1, 131072])
68231
+ batch tensor: loss_mask torch.Size([1, 131072])
68232
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68233
+ batch tensor: position_ids torch.Size([1, 131072])
68234
+ batch tensor after cp: tokens torch.Size([1, 32768])
68235
+ batch tensor after cp: labels torch.Size([1, 32768])
68236
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68237
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68238
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68239
+ batch tensor: tokens torch.Size([1, 131072])
68240
+ batch tensor: labels torch.Size([1, 131072])
68241
+ batch tensor: loss_mask torch.Size([1, 131072])
68242
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68243
+ batch tensor: position_ids torch.Size([1, 131072])
68244
+ batch tensor after cp: tokens torch.Size([1, 32768])
68245
+ batch tensor after cp: labels torch.Size([1, 32768])
68246
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68247
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68248
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68249
+ batch tensor: tokens torch.Size([1, 131072])
68250
+ batch tensor: labels torch.Size([1, 131072])
68251
+ batch tensor: loss_mask torch.Size([1, 131072])
68252
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68253
+ batch tensor: position_ids torch.Size([1, 131072])
68254
+ batch tensor after cp: tokens torch.Size([1, 32768])
68255
+ batch tensor after cp: labels torch.Size([1, 32768])
68256
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68257
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68258
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68259
+ batch tensor: tokens torch.Size([1, 131072])
68260
+ batch tensor: labels torch.Size([1, 131072])
68261
+ batch tensor: loss_mask torch.Size([1, 131072])
68262
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68263
+ batch tensor: position_ids torch.Size([1, 131072])
68264
+ batch tensor after cp: tokens torch.Size([1, 32768])
68265
+ batch tensor after cp: labels torch.Size([1, 32768])
68266
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68267
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68268
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68269
+ batch tensor: tokens torch.Size([1, 131072])
68270
+ batch tensor: labels torch.Size([1, 131072])
68271
+ batch tensor: loss_mask torch.Size([1, 131072])
68272
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68273
+ batch tensor: position_ids torch.Size([1, 131072])
68274
+ batch tensor after cp: tokens torch.Size([1, 32768])
68275
+ batch tensor after cp: labels torch.Size([1, 32768])
68276
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68277
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68278
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68279
+ batch tensor: tokens torch.Size([1, 131072])
68280
+ batch tensor: labels torch.Size([1, 131072])
68281
+ batch tensor: loss_mask torch.Size([1, 131072])
68282
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
68283
+ batch tensor: position_ids torch.Size([1, 131072])
68284
+ batch tensor after cp: tokens torch.Size([1, 32768])
68285
+ batch tensor after cp: labels torch.Size([1, 32768])
68286
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
68287
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
68288
+ batch tensor after cp: position_ids torch.Size([1, 32768])
68289
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ Start exporting trace 8
+ Done exporting trace 8
+ [2025-06-21 21:22:53] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 126399.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ Start exporting trace 9
+ Done exporting trace 9
+ [2025-06-21 21:24:50] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 116167.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ [after training is done] datetime: 2025-06-21 21:24:50
+ saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
+ DEBUG:megatron.training.checkpointing:rank: 12, takes 0.03408384323120117 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 14, takes 0.03418087959289551 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 13, takes 0.03429102897644043 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 15, takes 0.03522133827209473 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 11, takes 0.0352175235748291 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 10, takes 0.035242557525634766 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 1, takes 0.03671622276306152 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 3, takes 0.03668403625488281 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 7, takes 0.03663802146911621 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 6, takes 0.036659955978393555 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 4, takes 0.03672170639038086 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 5, takes 0.03667259216308594 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 9, takes 0.038300275802612305 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 2, takes 0.03907895088195801 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 0, takes 0.04081249237060547 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 8, takes 0.046274662017822266 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 22, takes 0.04995894432067871 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 20, takes 0.049971818923950195 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 18, takes 0.050069570541381836 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 17, takes 0.05010414123535156 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 21, takes 0.05001044273376465 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 23, takes 0.05006670951843262 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 19, takes 0.050194501876831055 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 16, takes 0.05068063735961914 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 26, takes 0.05485033988952637 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 31, takes 0.05485200881958008 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 28, takes 0.054932594299316406 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 29, takes 0.05492997169494629 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 30, takes 0.05497312545776367 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 24, takes 0.05541348457336426 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 27, takes 0.05557417869567871 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 25, takes 0.055478572845458984 to prepare state dict for ckpt
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
attnserver.run_attnserver.slurm.sh.343196.err.log CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7636e0b47b03376a86b6f6fb4aa841d0e498d0b1e970066f497cddbccfef4104
- size 30665081
+ oid sha256:7657f5e07c19df76364b583a8feaf52138819e9c54084ad692adda6be46a46e6
+ size 60333717
attnserver.run_attnserver.slurm.sh.343196.out.log CHANGED
@@ -53067,3 +53067,1534 @@ batch tensor after cp: position_ids torch.Size([2, 24576])
  Start exporting trace 8
  Done exporting trace 8
  [2025-06-21 21:19:45] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 12817.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([2, 98304])
+ batch tensor after cp: tokens torch.Size([2, 24576])
+ batch tensor after cp: labels torch.Size([2, 24576])
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
+ batch tensor after cp: position_ids torch.Size([2, 24576])
+ batch tensor: tokens torch.Size([2, 98304])
+ batch tensor: labels torch.Size([2, 98304])
+ batch tensor: loss_mask torch.Size([2, 98304])
53373
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
53374
+ batch tensor: position_ids torch.Size([2, 98304])
53375
+ batch tensor after cp: tokens torch.Size([2, 24576])
53376
+ batch tensor after cp: labels torch.Size([2, 24576])
53377
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
53378
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
53379
+ batch tensor after cp: position_ids torch.Size([2, 24576])
53380
+ batch tensor: tokens torch.Size([2, 98304])
53381
+ batch tensor: labels torch.Size([2, 98304])
53382
+ batch tensor: loss_mask torch.Size([2, 98304])
53383
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
53384
+ batch tensor: position_ids torch.Size([2, 98304])
53385
+ batch tensor after cp: tokens torch.Size([2, 24576])
53386
+ batch tensor after cp: labels torch.Size([2, 24576])
53387
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
53388
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
53389
+ batch tensor after cp: position_ids torch.Size([2, 24576])
53390
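The "batch tensor" / "batch tensor after cp" pairs above show context parallelism at work: the full 98304-token sequence is split across CP_SIZE=4 ranks, leaving each rank 98304/4 = 24576 positions, while the attention mask keeps the full key dimension (at [2, 1, 98304, 98304] a boolean mask is about 18 GiB, which is why create_attention_mask_in_dataloader matters at these lengths). A minimal sketch of the shape bookkeeping, assuming a simple contiguous split; Megatron's actual get_batch_on_this_cp_rank uses a two-chunk load-balanced split, so this is illustrative only:

    import torch

    def slice_batch_for_cp(batch, cp_size, cp_rank):
        # Shard every per-token tensor along the sequence dim; for the
        # [b, 1, q, k] attention mask, shard queries and keep all keys.
        out = {}
        for name, t in batch.items():
            if name == "attention_mask":
                chunk = t.shape[2] // cp_size
                out[name] = t[:, :, cp_rank * chunk:(cp_rank + 1) * chunk, :]
            else:
                chunk = t.shape[1] // cp_size
                out[name] = t.narrow(1, cp_rank * chunk, chunk)
        return out

    b, s, cp = 2, 98304, 4
    # meta tensors carry shapes without allocating ~18 GiB for the mask
    batch = {
        "tokens": torch.empty(b, s, dtype=torch.long, device="meta"),
        "labels": torch.empty(b, s, dtype=torch.long, device="meta"),
        "loss_mask": torch.empty(b, s, device="meta"),
        "attention_mask": torch.empty(b, 1, s, s, dtype=torch.bool, device="meta"),
        "position_ids": torch.empty(b, s, dtype=torch.long, device="meta"),
    }
    sharded = slice_batch_for_cp(batch, cp_size=cp, cp_rank=0)
    assert sharded["tokens"].shape == (b, s // cp)                # [2, 24576]
    assert sharded["attention_mask"].shape == (b, 1, s // cp, s)  # [2, 1, 24576, 98304]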
+ Start exporting trace 9
+ Done exporting trace 9
+ [2025-06-21 21:19:57] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 11742.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
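The logged loss scale is consistent with fp16 dynamic loss scaling: starting from initial_loss_scale 4294967296 (2**32, per the argument dump below) and backing off on overflow, nine halvings over ten consecutive skipped iterations land exactly on 2**23. A worked check, assuming the standard halve-on-overflow rule (with hysteresis 2, the first overflow in a run only arms the backoff and each further overflow halves):

    scale = 4294967296          # initial_loss_scale == 2**32
    for _ in range(9):          # 10 consecutive skipped iterations -> 9 halvings
        scale //= 2
    print(scale)                # 8388608 == 2**23, matching "loss scale: 8388608.0"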
+ [after training is done] datetime: 2025-06-21 21:19:57
+ saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
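"torch_dist format" here means Megatron's distributed (sharded) checkpoint: every rank serializes only the shards it owns instead of rank 0 gathering a monolithic file. As an illustration of the same idea using PyTorch's built-in torch.distributed.checkpoint rather than the megatron.core.dist_checkpointing code path this log comes from (assumes torch >= 2.2; names here are stand-ins):

    import torch.nn as nn
    import torch.distributed.checkpoint as dcp

    # Single-process sketch; under torch.distributed each rank would write
    # its own shards into the checkpoint directory in parallel.
    model = nn.Linear(8, 8)   # hypothetical stand-in for the real GPT model
    dcp.save({"model": model.state_dict(), "iteration": 10},
             checkpoint_id="gpt-checkpoint-sketch")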
+ DEBUG:megatron.training.checkpointing:rank: 6, takes 0.034978628158569336 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 5, takes 0.03509259223937988 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 3, takes 0.035161495208740234 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 7, takes 0.03516721725463867 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 1, takes 0.03517961502075195 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 4, takes 0.035202741622924805 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 2, takes 0.036451101303100586 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 0, takes 0.03753018379211426 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 15, takes 0.03772926330566406 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 13, takes 0.03908133506774902 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 10, takes 0.039129018783569336 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 11, takes 0.039137840270996094 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 25, takes 0.03958582878112793 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 28, takes 0.03963923454284668 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 27, takes 0.03962898254394531 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 26, takes 0.03968334197998047 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 31, takes 0.03962564468383789 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 29, takes 0.03988933563232422 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 24, takes 0.04006528854370117 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 21, takes 0.040782928466796875 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 17, takes 0.04078507423400879 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 19, takes 0.04078793525695801 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 20, takes 0.040799617767333984 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 23, takes 0.04079556465148926 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 18, takes 0.040815114974975586 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 22, takes 0.040851593017578125 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 16, takes 0.04122138023376465 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 14, takes 0.04608941078186035 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 12, takes 0.04615354537963867 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 30, takes 0.05424642562866211 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 8, takes 0.05540108680725098 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 9, takes 0.05667376518249512 to prepare state dict for ckpt
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(104857600), 0), (np.int64(106954752), 1), (np.int64(106954752), 2), (np.int64(102794240), 3)]
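The distribution printed above sums to ~421.6M elements spread over the 4 ranks of each save group, with per-rank loads within ~4% of each other. A sketch of the kind of greedy size balancing that yields such an assignment (illustrative only; the real distribute_shards_to_ranks also has to respect which ranks actually hold which shards):

    import heapq

    def distribute_shards(shard_sizes, num_ranks):
        # Largest shard first, always onto the currently least-loaded rank.
        loads = [(0, r) for r in range(num_ranks)]
        heapq.heapify(loads)
        per_rank = {r: 0 for r in range(num_ranks)}
        for size in sorted(shard_sizes, reverse=True):
            load, r = heapq.heappop(loads)
            per_rank[r] += size
            heapq.heappush(loads, (load + size, r))
        return per_rank

    # Hypothetical shard sizes (element counts), not read from this log:
    print(distribute_shards([104857600, 67108864, 53477376, 53477376,
                             53477376, 53477376, 39845888], num_ranks=4))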
+ Running ctx_length=65536, TP_SIZE=8, CP_SIZE=4, BATCH_SIZE=2
+ Cleaning up checkpoint directory: gpt-checkpoint
+ --------------------------------
+ CTX_LENGTH: 65536
+ TP_SIZE: 8
+ CP_SIZE: 4
+ CHECKPOINT_PATH: gpt-checkpoint
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+ --------------------------------
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+ INFO:megatron.training.initialize:Setting logging level to 0
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
+ INFO:megatron.training.initialize:Setting logging level to 0
+ using world size: 32, data-parallel size: 1, context-parallel size: 4, hierarchical context-parallel sizes: None, tensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
+ Number of virtual stages per pipeline stage: None
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
+ using torch.float16 for parameters ...
+ ------------------------ arguments ------------------------
+ account_for_embedding_in_pipeline_split ......... False
+ account_for_loss_in_pipeline_split .............. False
+ accumulate_allreduce_grads_in_fp32 .............. False
+ adam_beta1 ...................................... 0.9
+ adam_beta2 ...................................... 0.999
+ adam_eps ........................................ 1e-08
+ add_bias_linear ................................. True
+ add_position_embedding .......................... True
+ add_qkv_bias .................................... True
+ adlr_autoresume ................................. False
+ adlr_autoresume_interval ........................ 1000
+ align_grad_reduce ............................... True
+ align_param_gather .............................. False
+ app_tag_run_name ................................ None
+ app_tag_run_version ............................. 0.0.0
+ apply_layernorm_1p .............................. False
+ apply_query_key_layer_scaling ................... False
+ apply_residual_connection_post_layernorm ........ False
+ apply_rope_fusion ............................... False
+ async_save ...................................... None
+ async_tensor_model_parallel_allreduce ........... True
+ attention_backend ............................... AttnBackend.auto
+ attention_dropout ............................... 0.1
+ attention_softmax_in_fp32 ....................... False
+ auto_detect_ckpt_format ......................... False
+ barrier_with_L1_time ............................ True
+ bert_binary_head ................................ True
+ bert_embedder_type .............................. megatron
+ bert_load ....................................... None
+ bf16 ............................................ False
+ bias_dropout_fusion ............................. True
+ bias_gelu_fusion ................................ True
+ bias_swiglu_fusion .............................. True
+ biencoder_projection_dim ........................ 0
+ biencoder_shared_query_context_model ............ False
+ block_data_path ................................. None
+ calc_ft_timeouts ................................ False
+ calculate_per_token_loss ........................ False
+ check_for_large_grads ........................... False
+ check_for_nan_in_loss_and_grad .................. False
+ check_for_spiky_loss ............................ False
+ check_weight_hash_across_dp_replicas_interval ... None
+ ckpt_assume_constant_structure .................. False
+ ckpt_convert_format ............................. None
+ ckpt_convert_save ............................... None
+ ckpt_convert_update_legacy_dist_opt_format ...... False
+ ckpt_format ..................................... torch_dist
+ ckpt_fully_parallel_load ........................ False
+ ckpt_fully_parallel_save ........................ True
+ ckpt_fully_parallel_save_deprecated ............. False
+ ckpt_step ....................................... None
+ classes_fraction ................................ 1.0
+ clip_grad ....................................... 1.0
+ clone_scatter_output_in_embedding ............... True
+ config_logger_dir ...............................
+ consumed_train_samples .......................... 0
+ consumed_valid_samples .......................... 0
+ context_parallel_size ........................... 4
+ cp_comm_type .................................... ['p2p']
+ create_attention_mask_in_dataloader ............. True
+ cross_entropy_fusion_impl ....................... native
+ cross_entropy_loss_fusion ....................... False
+ cuda_graph_scope ................................ full
+ cuda_graph_warmup_steps ......................... 3
+ data_args_path .................................. None
+ data_cache_path ................................. None
+ data_parallel_random_init ....................... False
+ data_parallel_sharding_strategy ................. no_shard
+ data_parallel_size .............................. 1
+ data_path ....................................... None
+ data_per_class_fraction ......................... 1.0
+ data_sharding ................................... True
+ dataloader_type ................................. single
+ ddp_average_in_collective ....................... False
+ ddp_bucket_size ................................. None
+ ddp_num_buckets ................................. None
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
+ decoder_first_pipeline_num_layers ............... None
+ decoder_last_pipeline_num_layers ................ None
+ decoder_num_layers .............................. None
+ decoder_seq_length .............................. None
+ decoupled_lr .................................... None
+ decoupled_min_lr ................................ None
+ decrease_batch_size_if_needed ................... False
+ defer_embedding_wgrad_compute ................... False
+ deprecated_use_mcore_models ..................... False
+ deterministic_mode .............................. False
+ dino_bottleneck_size ............................ 256
+ dino_freeze_last_layer .......................... 1
+ dino_head_hidden_size ........................... 2048
+ dino_local_crops_number ......................... 10
+ dino_local_img_size ............................. 96
+ dino_norm_last_layer ............................ False
+ dino_teacher_temp ............................... 0.07
+ dino_warmup_teacher_temp ........................ 0.04
+ dino_warmup_teacher_temp_epochs ................. 30
+ disable_bf16_reduced_precision_matmul ........... False
+ disable_mamba_mem_eff_path ...................... False
+ disable_straggler_on_startup .................... False
+ dist_ckpt_format_deprecated ..................... None
+ dist_ckpt_strictness ............................ assume_ok_unexpected
+ distribute_saved_activations .................... False
+ distributed_backend ............................. nccl
+ distributed_timeout_minutes ..................... 10
+ embedding_path .................................. None
+ empty_unused_memory_level ....................... 0
+ enable_cuda_graph ............................... False
+ enable_ft_package ............................... False
+ enable_gloo_process_groups ...................... True
+ enable_msc ...................................... True
+ enable_one_logger ............................... True
+ encoder_num_layers .............................. 2
+ encoder_pipeline_model_parallel_size ............ 0
+ encoder_seq_length .............................. 65536
+ encoder_tensor_model_parallel_size .............. 0
+ end_weight_decay ................................ 0.1
+ eod_mask_loss ................................... False
+ error_injection_rate ............................ 0
+ error_injection_type ............................ transient_error
+ eval_interval ................................... 16
+ eval_iters ...................................... 1
+ evidence_data_path .............................. None
+ exit_duration_in_mins ........................... None
+ exit_interval ................................... None
+ exit_on_missing_checkpoint ...................... False
+ exit_signal_handler ............................. False
+ exp_avg_dtype ................................... torch.float32
+ exp_avg_sq_dtype ................................ torch.float32
+ expert_model_parallel_size ...................... 1
+ expert_tensor_parallel_size ..................... 8
+ external_cuda_graph ............................. False
+ ffn_hidden_size ................................. 16384
+ finetune ........................................ False
+ first_last_layers_bf16 .......................... False
+ flash_decode .................................... False
+ fp16 ............................................ True
+ fp16_lm_cross_entropy ........................... False
+ fp32_residual_connection ........................ False
+ fp8 ............................................. None
+ fp8_amax_compute_algo ........................... most_recent
+ fp8_amax_history_len ............................ 1
+ fp8_interval .................................... 1
+ fp8_margin ...................................... 0
+ fp8_param_gather ................................ False
+ fp8_recipe ...................................... delayed
+ fp8_wgrad ....................................... True
+ fsdp_double_buffer .............................. False
+ global_batch_size ............................... 1
+ grad_reduce_in_bf16 ............................. False
+ gradient_accumulation_fusion .................... True
+ gradient_reduce_div_fusion ...................... True
+ group_query_attention ........................... True
+ head_lr_mult .................................... 1.0
+ heterogeneous_layers_config_encoded_json ........ None
+ heterogeneous_layers_config_path ................ None
+ hidden_dropout .................................. 0.1
+ hidden_size ..................................... 4096
+ hierarchical_context_parallel_sizes ............. None
+ high_priority_stream_groups ..................... []
+ hybrid_attention_ratio .......................... 0.0
+ hybrid_mlp_ratio ................................ 0.0
+ hybrid_override_pattern ......................... None
+ hysteresis ...................................... 2
+ ict_head_size ................................... None
+ ict_load ........................................ None
+ img_h ........................................... 224
+ img_w ........................................... 224
+ indexer_batch_size .............................. 128
+ indexer_log_interval ............................ 1000
+ inference_batch_times_seqlen_threshold .......... -1
+ inference_dynamic_batching ...................... False
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
+ inference_dynamic_batching_buffer_overflow_factor None
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
+ inference_dynamic_batching_chunk_size ........... 256
+ inference_dynamic_batching_max_requests_override None
+ inference_dynamic_batching_max_tokens_override .. None
+ inference_max_batch_size ........................ 8
+ inference_max_seq_length ........................ 2560
+ inference_rng_tracker ........................... False
+ init_method_std ................................. 0.02
+ init_method_xavier_uniform ...................... False
+ init_model_with_meta_device ..................... False
+ initial_loss_scale .............................. 4294967296
+ inprocess_active_world_size ..................... 32
+ inprocess_barrier_timeout ....................... 120
+ inprocess_completion_timeout .................... 120
+ inprocess_empty_cuda_cache ...................... False
+ inprocess_granularity ........................... node
+ inprocess_hard_timeout .......................... 90
+ inprocess_heartbeat_interval .................... 30
+ inprocess_heartbeat_timeout ..................... 60
+ inprocess_last_call_wait ........................ 1
+ inprocess_max_iterations ........................ None
+ inprocess_monitor_process_interval .............. 1.0
+ inprocess_monitor_thread_interval ............... 1.0
+ inprocess_progress_watchdog_interval ............ 1.0
+ inprocess_restart ............................... False
+ inprocess_soft_timeout .......................... 60
+ inprocess_termination_grace_time ................ 1
+ is_hybrid_model ................................. False
+ iter_per_epoch .................................. 1250
+ iterations_to_skip .............................. []
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
+ kv_channels ..................................... 64
+ kv_lora_rank .................................... 32
+ lazy_mpu_init ................................... None
+ load ............................................ gpt-checkpoint
+ load_model_opt_format ........................... False
+ local_rank ...................................... 0
+ log_interval .................................... 1
+ log_loss_scale_to_tensorboard ................... True
+ log_memory_to_tensorboard ....................... False
+ log_num_zeros_in_grad ........................... False
+ log_params_norm ................................. False
+ log_progress .................................... False
+ log_straggler ................................... False
+ log_throughput .................................. False
+ log_timers_to_tensorboard ....................... False
+ log_validation_ppl_to_tensorboard ............... False
+ log_world_size_to_tensorboard ................... False
+ logging_level ................................... 0
+ loss_scale ...................................... None
+ loss_scale_window ............................... 1000
+ lr .............................................. 0.0005
+ lr_decay_iters .................................. 150000
+ lr_decay_samples ................................ None
+ lr_decay_style .................................. cosine
+ lr_warmup_fraction .............................. None
+ lr_warmup_init .................................. 0.0
+ lr_warmup_iters ................................. 2
+ lr_warmup_samples ............................... 0
+ lr_wsd_decay_iters .............................. None
+ lr_wsd_decay_samples ............................ None
+ lr_wsd_decay_style .............................. exponential
+ main_grads_dtype ................................ torch.float32
+ main_params_dtype ............................... torch.float32
+ make_vocab_size_divisible_by .................... 128
+ mamba_head_dim .................................. 64
+ mamba_num_groups ................................ 8
+ mamba_num_heads ................................. None
+ mamba_state_dim ................................. 128
+ manual_gc ....................................... False
+ manual_gc_eval .................................. True
+ manual_gc_interval .............................. 0
+ mask_factor ..................................... 1.0
+ mask_prob ....................................... 0.15
+ mask_type ....................................... random
+ masked_softmax_fusion ........................... True
+ max_position_embeddings ......................... 65536
+ max_tokens_to_oom ............................... 12000
+ memory_snapshot_path ............................ snapshot.pickle
+ merge_file ...................................... merges.txt
+ micro_batch_size ................................ 1
+ microbatch_group_size_per_vp_stage .............. None
+ mid_level_dataset_surplus ....................... 0.005
+ min_loss_scale .................................. 1.0
+ min_lr .......................................... 0.0
+ mlp_chunks_for_prefill .......................... 1
+ mmap_bin_files .................................. True
+ mock_data ....................................... True
+ moe_apply_probs_on_input ........................ False
+ moe_aux_loss_coeff .............................. 0.0
+ moe_enable_deepep ............................... False
+ moe_expert_capacity_factor ...................... None
+ moe_extended_tp ................................. False
+ moe_ffn_hidden_size ............................. None
+ moe_grouped_gemm ................................ False
+ moe_input_jitter_eps ............................ None
+ moe_layer_freq .................................. 1
+ moe_layer_recompute ............................. False
+ moe_pad_expert_input_to_capacity ................ False
+ moe_per_layer_logging ........................... False
+ moe_permute_fusion .............................. False
+ moe_router_bias_update_rate ..................... 0.001
+ moe_router_dtype ................................ None
+ moe_router_enable_expert_bias ................... False
+ moe_router_force_load_balancing ................. False
+ moe_router_group_topk ........................... None
+ moe_router_load_balancing_type .................. aux_loss
+ moe_router_num_groups ........................... None
+ moe_router_padding_for_fp8 ...................... False
+ moe_router_pre_softmax .......................... False
+ moe_router_score_function ....................... softmax
+ moe_router_topk ................................. 2
+ moe_router_topk_scaling_factor .................. None
+ moe_shared_expert_intermediate_size ............. None
+ moe_shared_expert_overlap ....................... False
+ moe_token_dispatcher_type ....................... allgather
+ moe_token_drop_policy ........................... probs
+ moe_use_legacy_grouped_gemm ..................... False
+ moe_use_upcycling ............................... False
+ moe_z_loss_coeff ................................ None
+ mrope_section ................................... None
+ mscale .......................................... 1.0
+ mscale_all_dim .................................. 1.0
+ mtp_loss_scaling_factor ......................... 0.1
+ mtp_num_layers .................................. None
+ multi_latent_attention .......................... False
+ nccl_all_reduce_for_prefill ..................... False
+ nccl_communicator_config_path ................... None
+ nccl_ub ......................................... False
+ no_load_optim ................................... None
+ no_load_rng ..................................... None
+ no_persist_layer_norm ........................... False
+ no_rope_freq .................................... None
+ no_save_optim ................................... None
+ no_save_rng ..................................... None
+ non_persistent_ckpt_type ........................ None
+ non_persistent_global_ckpt_dir .................. None
+ non_persistent_local_ckpt_algo .................. fully_parallel
+ non_persistent_local_ckpt_dir ................... None
+ non_persistent_save_interval .................... None
+ norm_epsilon .................................... 1e-05
+ normalization ................................... LayerNorm
+ num_attention_heads ............................. 64
+ num_channels .................................... 3
+ num_classes ..................................... 1000
+ num_dataset_builder_threads ..................... 1
+ num_distributed_optimizer_instances ............. 1
+ num_experts ..................................... None
+ num_layers ...................................... 2
+ num_layers_at_end_in_bf16 ....................... 1
+ num_layers_at_start_in_bf16 ..................... 1
+ num_layers_per_virtual_pipeline_stage ........... None
+ num_query_groups ................................ 16
+ num_virtual_stages_per_pipeline_rank ............ None
+ num_workers ..................................... 2
+ object_storage_cache_path ....................... None
+ one_logger_async ................................ False
+ one_logger_project .............................. megatron-lm
+ one_logger_run_name ............................. None
+ onnx_safe ....................................... None
+ openai_gelu ..................................... False
+ optimizer ....................................... adam
+ optimizer_cpu_offload ........................... False
+ optimizer_offload_fraction ...................... 1.0
+ output_bert_embeddings .......................... False
+ overlap_cpu_optimizer_d2h_h2d ................... False
+ overlap_grad_reduce ............................. False
+ overlap_p2p_comm ................................ False
+ overlap_p2p_comm_warmup_flush ................... False
+ overlap_param_gather ............................ False
+ overlap_param_gather_with_optimizer_step ........ False
+ override_opt_param_scheduler .................... False
+ params_dtype .................................... torch.float16
+ patch_dim ....................................... 16
+ per_split_data_args_path ........................ None
+ perform_initialization .......................... True
+ pin_cpu_grads ................................... True
+ pin_cpu_params .................................. True
+ pipeline_model_parallel_comm_backend ............ None
+ pipeline_model_parallel_size .................... 1
+ pipeline_model_parallel_split_rank .............. None
+ position_embedding_type ......................... learned_absolute
+ pretrained_checkpoint ........................... None
+ profile ......................................... False
+ profile_ranks ................................... [0]
+ profile_step_end ................................ 12
+ profile_step_start .............................. 10
+ q_lora_rank ..................................... None
+ qk_head_dim ..................................... 128
+ qk_l2_norm ...................................... False
+ qk_layernorm .................................... False
+ qk_pos_emb_head_dim ............................. 64
+ query_in_block_prob ............................. 0.1
+ rampup_batch_size ............................... None
+ rank ............................................ 0
+ recompute_granularity ........................... None
+ recompute_method ................................ None
+ recompute_modules ............................... None
+ recompute_num_layers ............................ None
+ record_memory_history ........................... False
+ relative_attention_max_distance ................. 128
+ relative_attention_num_buckets .................. 32
+ replication ..................................... False
+ replication_factor .............................. 2
+ replication_jump ................................ None
+ rerun_mode ...................................... disabled
+ reset_attention_mask ............................ False
+ reset_position_ids .............................. False
+ result_rejected_tracker_filename ................ None
+ retriever_report_topk_accuracies ................ []
+ retriever_score_scaling ......................... False
+ retriever_seq_length ............................ 256
+ retro_add_retriever ............................. False
+ retro_attention_gate ............................ 1
+ retro_cyclic_train_iters ........................ None
+ retro_encoder_attention_dropout ................. 0.1
+ retro_encoder_hidden_dropout .................... 0.1
+ retro_encoder_layers ............................ 2
+ retro_num_neighbors ............................. 2
+ retro_num_retrieved_chunks ...................... 2
+ retro_project_dir ............................... None
+ retro_verify_neighbor_count ..................... True
+ rope_scaling_factor ............................. 8.0
+ rotary_base ..................................... 10000
+ rotary_interleaved .............................. False
+ rotary_percent .................................. 1.0
+ rotary_scaling_factor ........................... 1.0
+ rotary_seq_len_interpolation_factor ............. None
+ run_workload_inspector_server ................... False
+ sample_rate ..................................... 1.0
+ save ............................................ gpt-checkpoint
+ save_interval ................................... 16
+ scatter_gather_tensors_in_pipeline .............. True
+ seed ............................................ 1234
+ seq_length ...................................... 65536
+ sequence_parallel ............................... False
+ sgd_momentum .................................... 0.9
+ short_seq_prob .................................. 0.1
+ skip_train ...................................... False
+ skipped_train_samples ........................... 0
+ spec ............................................ None
+ split ........................................... None
+ squared_relu .................................... False
+ start_weight_decay .............................. 0.1
+ straggler_ctrlr_port ............................ 65535
+ straggler_minmax_count .......................... 1
+ suggested_communication_unit_size ............... None
+ swiglu .......................................... False
+ swin_backbone_type .............................. tiny
+ symmetric_ar_type ............................... None
+ te_rng_tracker .................................. False
+ tensor_model_parallel_size ...................... 8
+ tensorboard_dir ................................. tensorboard-logs/
+ tensorboard_log_interval ........................ 1
+ tensorboard_queue_size .......................... 1000
+ test_data_path .................................. None
+ test_mode ....................................... False
+ tiktoken_num_special_tokens ..................... 1000
+ tiktoken_pattern ................................ None
+ tiktoken_special_tokens ......................... None
+ timing_log_level ................................ 0
+ timing_log_option ............................... minmax
+ titles_data_path ................................ None
+ tokenizer_model ................................. None
+ tokenizer_type .................................. GPT2BPETokenizer
+ torch_fsdp2_reshard_after_forward ............... True
+ tp_comm_bootstrap_backend ....................... nccl
+ tp_comm_bulk_dgrad .............................. True
+ tp_comm_bulk_wgrad .............................. True
+ tp_comm_overlap ................................. False
+ tp_comm_overlap_ag .............................. True
+ tp_comm_overlap_cfg ............................. None
+ tp_comm_overlap_rs .............................. True
+ tp_comm_overlap_rs_dgrad ........................ False
+ tp_comm_split_ag ................................ True
+ tp_comm_split_rs ................................ True
+ train_data_path ................................. None
+ train_iters ..................................... 10
+ train_samples ................................... None
+ train_sync_interval ............................. None
+ transformer_impl ................................ transformer_engine
+ transformer_pipeline_model_parallel_size ........ 1
+ untie_embeddings_and_output_weights ............. False
+ use_checkpoint_args ............................. False
+ use_checkpoint_opt_param_scheduler .............. False
+ use_cpu_initialization .......................... None
+ use_custom_fsdp ................................. False
+ use_dist_ckpt ................................... True
+ use_dist_ckpt_deprecated ........................ False
+ use_distributed_optimizer ....................... False
+ use_flash_attn .................................. False
+ use_legacy_models ............................... False
+ use_mp_args_from_checkpoint_args ................ False
+ use_one_sent_docs ............................... False
+ use_persistent_ckpt_worker ...................... False
+ use_precision_aware_optimizer ................... False
+ use_pytorch_profiler ............................ False
+ use_ring_exchange_p2p ........................... False
+ use_rope_scaling ................................ False
+ use_rotary_position_embeddings .................. False
+ use_sharp ....................................... False
+ use_tokenizer_model_from_checkpoint_args ........ True
+ use_torch_fsdp2 ................................. False
+ use_torch_optimizer_for_cpu_offload ............. False
+ use_tp_pp_dp_mapping ............................ False
+ v_head_dim ...................................... 128
+ valid_data_path ................................. None
+ variable_seq_lengths ............................ False
+ virtual_pipeline_model_parallel_size ............ None
+ vision_backbone_type ............................ vit
+ vision_pretraining .............................. False
+ vision_pretraining_type ......................... classify
+ vocab_extra_ids ................................. 0
+ vocab_file ...................................... vocab.json
+ vocab_size ...................................... None
+ wandb_exp_name ..................................
+ wandb_project ...................................
+ wandb_save_dir ..................................
+ weight_decay .................................... 0.1
+ weight_decay_incr_style ......................... constant
+ wgrad_deferral_limit ............................ 0
+ world_size ...................................... 32
+ yaml_cfg ........................................ None
+ -------------------- end of arguments ---------------------
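A quick consistency check on the dump above: the parallel sizes multiply out to the world size, and with data_parallel_size 1 and a single microbatch per step the global batch size of 1 follows directly.

    # Consistency of the argument dump above:
    tp, cp, pp, dp = 8, 4, 1, 1
    assert tp * cp * pp * dp == 32                 # world_size
    micro_batch_size, num_microbatches = 1, 1
    assert micro_batch_size * num_microbatches * dp == 1   # global_batch_size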
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
+ > building GPT2BPETokenizer tokenizer ...
+ > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
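The 943 dummy tokens follow from make_vocab_size_divisible_by=128 and tensor_model_parallel_size=8: the vocab is rounded up to the next multiple of 128 * 8 = 1024 so every TP rank gets an equal slice.

    # How the padded vocab is derived from the arguments above:
    orig_vocab, divisor, tp = 50257, 128, 8
    multiple = divisor * tp                          # 1024
    padded = -(-orig_vocab // multiple) * multiple   # ceil to 51200
    print(padded, padded - orig_vocab)               # 51200 943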
+ INFO:megatron.training.initialize:Setting logging level to 0
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
+ > initializing torch distributed ...
+ INFO:megatron.training.initialize:Setting logging level to 0
+ > initialized tensor model parallel with size 8
+ > initialized pipeline model parallel with size 1
+ > setting random seeds to 1234 ...
+ > compiling dataset index builder ...
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
+ INFO:megatron.training.initialize:Setting logging level to 0
+ make: Nothing to be done for 'default'.
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
+ >>> done with dataset index builder. Compilation time: 0.040 seconds
+ WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
+ > compiling and loading fused kernels ...
+ INFO:megatron.training.initialize:Setting logging level to 0
+ >>> done with compiling and loading fused kernels. Compilation time: 2.964 seconds
+ time to initialize megatron (seconds): 8.266
+ [after megatron is initialized] datetime: 2025-06-21 21:30:39
+ building GPT model ...
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 338753024
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 338753024
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
+ Params for bucket 1 (338753024 elements, 338753024 padded size):
+ module.decoder.layers.1.mlp.linear_fc1.bias
+ module.decoder.layers.0.mlp.linear_fc1.bias
+ module.decoder.final_layernorm.bias
+ module.decoder.layers.1.self_attention.linear_qkv.weight
+ module.decoder.layers.1.self_attention.linear_proj.weight
+ module.decoder.layers.0.self_attention.linear_qkv.weight
+ module.embedding.word_embeddings.weight
+ module.decoder.layers.1.mlp.linear_fc2.weight
+ module.decoder.layers.1.self_attention.linear_proj.bias
+ module.decoder.final_layernorm.weight
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.mlp.linear_fc2.weight
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
+ module.embedding.position_embeddings.weight
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.bias
+ module.decoder.layers.0.mlp.linear_fc2.bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.bias
+ module.decoder.layers.0.self_attention.linear_proj.weight
+ module.decoder.layers.1.mlp.linear_fc1.weight
+ module.decoder.layers.0.mlp.linear_fc1.weight
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.1.mlp.linear_fc2.bias
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_proj.bias
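
Note: the single bucket above exists so that all 338,753,024 gradients can be reduced in one collective instead of one call per parameter. A schematic of bucketed all-reduce (illustrative only, not Megatron's actual buffer code):

    # Illustrative sketch of a one-bucket gradient all-reduce.
    import torch.distributed as dist
    from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors

    def allreduce_bucket(params):
        grads = [p.grad for p in params if p.grad is not None]
        flat = _flatten_dense_tensors(grads)   # one contiguous buffer for the bucket
        dist.all_reduce(flat)                  # single NCCL call
        flat.div_(dist.get_world_size())       # average_in_collective=False in the config above
        for g, synced in zip(grads, _unflatten_dense_tensors(flat, grads)):
            g.copy_(synced)
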
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14d5acf2e7e0>, config_logger_dir='')
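
Note: with fp16=True and loss_scale=None, the config above selects dynamic loss scaling. A compact sketch of the usual scheme behind those fields (initial_loss_scale=4294967296, loss_scale_window=1000, hysteresis=2, min_loss_scale=1.0); Megatron's implementation differs in detail:

    # Sketch of dynamic fp16 loss scaling driven by the config fields above.
    class DynamicLossScaler:
        def __init__(self, scale=4294967296.0, window=1000, hysteresis=2, min_scale=1.0):
            self.scale = scale
            self.window = window          # good steps required before growing the scale
            self.hysteresis = hysteresis  # overflows tolerated before shrinking it
            self.min_scale = min_scale
            self.good_steps = 0
            self.overflows = 0

        def update(self, found_inf: bool):
            if found_inf:
                self.good_steps = 0
                self.overflows += 1
                if self.overflows >= self.hysteresis:
                    self.scale = max(self.scale / 2.0, self.min_scale)
                    self.overflows = 0
            else:
                self.good_steps += 1
                if self.good_steps >= self.window:
                    self.scale *= 2.0
                    self.good_steps = 0
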
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 338753024
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 338753024
+ INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 338753024
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 338753024
+ WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
+ will not load any checkpoints and will start from random
+ (min, max) time across ranks (ms):
+ load-checkpoint ................................: (2.98, 3.60)
+ [after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:30:42
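
Note: the warning above is Megatron's tracker-file probe: the load directory is checked for latest_checkpointed_iteration.txt, which holds either an iteration number or the string "release". A sketch of that check:

    # Sketch of the tracker-file lookup behind the checkpoint warning above.
    import os

    def read_latest_iteration(load_dir="gpt-checkpoint"):
        tracker = os.path.join(load_dir, "latest_checkpointed_iteration.txt")
        if not os.path.isfile(tracker):
            return None  # -> "will not load any checkpoints and will start from random"
        with open(tracker) as f:
            value = f.read().strip()
        return value if value == "release" else int(value)
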
+ > building train, validation, and test datasets ...
+ > datasets target sizes (minimum size):
+ train: 10
+ validation: 1
+ test: 1
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
+ > building train, validation, and test datasets for GPT ...
+ INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=65536, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14d5ad6fae70>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
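
Note: the split_matrix above is just the cumulative, normalized form of split = 1,1,1. A tiny reproduction:

    # Reproduces the split_matrix logged above.
    def split_matrix(split="1,1,1"):
        weights = [float(w) for w in split.split(",")]
        total = sum(weights)
        spans, start = [], 0.0
        for w in weights:
            end = start + w / total
            spans.append((start, end))
            start = end
        return spans

    # split_matrix() -> [(0.0, 0.333...), (0.333..., 0.666...), (0.666..., 1.0)]
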
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.006050 seconds
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1040
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001598 seconds
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1040
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001329 seconds
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1041
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+ > finished creating GPT datasets ...
+ [after dataloaders are built] datetime: 2025-06-21 21:30:42
+ done with setup ...
+ training ...
+ (min, max) time across ranks (ms):
+ model-and-optimizer-setup ......................: (3324.90, 3349.59)
+ train/valid/test-data-iterators-setup ..........: (17.58, 139.05)
+ Setting rerun_state_machine.current_iteration to 0...
+ [before the start of training step] datetime: 2025-06-21 21:30:42
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
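
Note: each "after cp" block shows the per-rank view under context parallelism: 131072 tokens divided across 4 CP ranks gives 32768 per rank, and the attention mask is sliced only along the query axis, so [2, 1, 131072, 131072] becomes [2, 1, 32768, 131072]. A minimal sketch of that slicing (contiguous chunks for clarity; Megatron's real CP sharding interleaves chunks to balance causal-attention load):

    # Sketch: shard one batch along the sequence dimension for context parallelism.
    def shard_batch_for_cp(batch, cp_rank, cp_size=4):
        seq_len = batch["tokens"].size(1)        # 131072 in the log above
        chunk = seq_len // cp_size               # 32768
        sl = slice(cp_rank * chunk, (cp_rank + 1) * chunk)
        out = {k: v[:, sl] for k, v in batch.items() if k != "attention_mask"}
        # queries are sharded, keys/values stay full-length:
        out["attention_mask"] = batch["attention_mask"][:, :, sl, :]  # [2,1,32768,131072]
        return out
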
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
+ batch tensor after cp: tokens torch.Size([2, 32768])
+ batch tensor after cp: labels torch.Size([2, 32768])
+ batch tensor after cp: loss_mask torch.Size([2, 32768])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 32768])
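
Note: these shapes also explain the memory pressure at this context length: a materialized [2, 1, 131072, 131072] mask holds about 3.4e10 elements (roughly 32 GiB at one byte each), and even the CP-sliced [2, 1, 32768, 131072] view is about 8.6e9 elements (~8 GiB per rank), which is why create_attention_mask=True dominates memory in these runs.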
attnserver.run_attnserver.slurm.sh.343202.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343202.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343203.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343203.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343204.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343204.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343205.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343205.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343206.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343206.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343207.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343207.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343208.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343208.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343209.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343209.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343210.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343210.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343211.err.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343211.out.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343212.err.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343212.out.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343213.err.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343213.out.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343214.err.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343214.out.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343219.err.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343219.out.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343220.err.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343220.out.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343221.err.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343221.out.log ADDED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343222.err.log ADDED
@@ -0,0 +1,543 @@
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ export _CE_M=
+ +++ _CE_M=
+ +++ export _CE_CONDA=
+ +++ _CE_CONDA=
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ '[' -z x ']'
+ ++ conda activate
+ ++ local cmd=activate
+ ++ case "$cmd" in
+ ++ __conda_activate activate
+ ++ '[' -n '' ']'
+ ++ local ask_conda
+ +++ PS1=
+ +++ __conda_exe shell.posix activate
+ +++ '[' -n '' ']'
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
+ ++ ask_conda='unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(base) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_SHLVL='\''1'\''
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ ++ eval 'unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(base) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_SHLVL='\''1'\''
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ +++ unset _CE_M
+ +++ unset _CE_CONDA
+ +++ PS1='(base) '
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ +++ export CONDA_SHLVL=1
+ +++ CONDA_SHLVL=1
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
+ +++ CONDA_PROMPT_MODIFIER='(base) '
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ ++ __conda_hashr
+ ++ '[' -n '' ']'
+ ++ '[' -n '' ']'
+ ++ hash -r
+ + conda activate junda-attnserver
+ + local cmd=activate
+ + case "$cmd" in
+ + __conda_activate activate junda-attnserver
+ + '[' -n '' ']'
+ + local ask_conda
+ ++ PS1='(base) '
+ ++ __conda_exe shell.posix activate junda-attnserver
+ ++ '[' -n '' ']'
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ + ask_conda='unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(junda-attnserver) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
+ export CONDA_SHLVL='\''2'\''
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ + eval 'unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(junda-attnserver) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
+ export CONDA_SHLVL='\''2'\''
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ ++ unset _CE_M
+ ++ unset _CE_CONDA
+ ++ PS1='(junda-attnserver) '
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
+ ++ export CONDA_SHLVL=2
+ ++ CONDA_SHLVL=2
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ + __conda_hashr
+ + '[' -n '' ']'
+ + '[' -n '' ']'
+ + hash -r
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + export PROF_TP_SIZE=4
+ + PROF_TP_SIZE=4
+ + export PROF_CP_SIZE=4
+ + PROF_CP_SIZE=4
+ + export PROF_BS=8
+ + PROF_BS=8
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ + export PROF_CTX_LENGTH=1024
+ + PROF_CTX_LENGTH=1024
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp4.bs8.json'
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp4.bs8.json' ']'
+ + echo 'Running ctx_length=1024, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=8'
+ + srun bash ./attnserver.sh
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343222 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343222 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
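
Note: the migration the FutureWarning asks for is mechanical: torchrun does not pass --local-rank, so the script reads it from the environment instead. A minimal sketch of the suggested change:

    # Sketch of the change the FutureWarning above suggests.
    import os
    import torch

    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun for every worker
    torch.cuda.set_device(local_rank)
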
+
+ main()
+ W0621 21:27:55.960000 780910 site-packages/torch/distributed/run.py:766]
+ W0621 21:27:55.960000 780910 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:27:55.960000 780910 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:27:55.960000 780910 site-packages/torch/distributed/run.py:766] *****************************************
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
+
+ main()
+ W0621 21:27:56.018000 1057581 site-packages/torch/distributed/run.py:766]
+ W0621 21:27:56.018000 1057581 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:27:56.018000 1057581 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:27:56.018000 1057581 site-packages/torch/distributed/run.py:766] *****************************************
+ [rank4]:[W621 21:28:18.488983782 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank12]:[W621 21:28:18.165170645 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank0]:[W621 21:28:18.692977863 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank3]:[W621 21:28:18.742549478 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank7]:[W621 21:28:18.742563333 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank15]:[W621 21:28:18.404100316 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank11]:[W621 21:28:18.408209011 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank8]:[W621 21:28:18.431697385 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank13]:[W621 21:28:18.438935936 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank9]:[W621 21:28:18.439050221 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank1]:[W621 21:28:18.779115590 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank5]:[W621 21:28:18.779394863 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank10]:[W621 21:28:18.452279569 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank14]:[W621 21:28:18.453194189 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank2]:[W621 21:28:18.795358898 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank6]:[W621 21:28:18.796874636 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
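
Note: each rank's warning above can be avoided by binding the rank to its GPU at init time, as the message itself suggests; a sketch (the device_id argument exists in recent PyTorch releases):

    # Sketch: give init_process_group an explicit device, per the NCCL warning above.
    import os
    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(
        backend="nccl",
        device_id=torch.device(f"cuda:{local_rank}"),
    )
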
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
174
+ warnings.warn(
175
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
176
+ warnings.warn(
177
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
178
+ warnings.warn(
179
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
180
+ warnings.warn(
181
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
182
+ warnings.warn(
183
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
184
+ warnings.warn(
185
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
186
+ warnings.warn(
187
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
188
+ warnings.warn(
189
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
190
+ warnings.warn(
191
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
192
+ warnings.warn(
193
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
194
+ warnings.warn(
195
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
196
+ warnings.warn(
197
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
198
+ warnings.warn(
199
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
200
+ warnings.warn(
201
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
202
+ warnings.warn(
203
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
204
+ warnings.warn(
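Editor's note: every rank prints the same Megatron-LM deprecation once. A hedged sketch of the suggested update, assuming the fp8 choice moves out of this helper and into the surrounding model/transformer config (the exact destination is an assumption based on the message):

```python
from megatron.core.models.gpt.gpt_layer_specs import (
    get_gpt_layer_with_transformer_engine_spec,
)

# Deprecated style, per the warning above:
# spec = get_gpt_layer_with_transformer_engine_spec(fp8="hybrid")

# Updated style: build the layer spec without the fp8 argument and keep
# the fp8 recipe in the config that instantiates the model instead.
spec = get_gpt_layer_with_transformer_engine_spec()
```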
205
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
206
+ warnings.warn(
207
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
208
+ warnings.warn(
209
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
210
+ warnings.warn(
211
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
212
+ warnings.warn(
213
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
214
+ warnings.warn(
215
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
216
+ warnings.warn(
217
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
218
+ warnings.warn(
219
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
220
+ warnings.warn(
221
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
222
+ warnings.warn(
223
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
224
+ warnings.warn(
225
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
226
+ warnings.warn(
227
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
228
+ warnings.warn(
229
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
230
+ warnings.warn(
231
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
232
+ warnings.warn(
233
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
234
+ warnings.warn(
235
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
236
+ warnings.warn(
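Editor's note: likewise, Transformer Engine reports that weight offloading is now a no-op. A sketch of requesting activation offloading only through get_cpu_offload_context; the keyword names below are assumptions about TE's public helper and may differ across versions:

```python
import transformer_engine.pytorch as te

# offload_weights=True is ignored now, so request activation offloading
# only (keyword names assumed; check your TE version's signature).
offload_context, sync_before_use = te.get_cpu_offload_context(
    enabled=True,
    num_layers=2,             # matches --num-layers in the run above
    offload_activations=True,
    offload_weights=False,
)
```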
237
+ [rank2]:[W621 21:28:53.283357408 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
238
+ [rank0]:[W621 21:28:53.339847314 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
239
+ [rank3]:[W621 21:28:53.367327493 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
240
+ [rank1]:[W621 21:28:53.379592757 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
241
+ [rank11]:[W621 21:28:53.040643498 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
242
+ [rank12]:[W621 21:28:53.046568042 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
243
+ [rank10]:[W621 21:28:53.070338273 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
244
+ [rank5]:[W621 21:28:53.544322551 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
245
+ [rank13]:[W621 21:28:53.527195182 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
246
+ [rank8]:[W621 21:28:53.560102383 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
247
+ [rank14]:[W621 21:28:53.583807130 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
248
+ [rank15]:[W621 21:28:53.584039250 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
249
+ [rank7]:[W621 21:28:53.933387505 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
250
+ [rank6]:[W621 21:28:53.953506997 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
251
+ [rank4]:[W621 21:28:53.980101630 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
252
+ [rank9]:[W621 21:28:53.729086335 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
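Editor's note: these shutdown warnings are benign here but avoidable, since PyTorch only asks that the process group be torn down explicitly before exit. A minimal sketch:

```python
import torch.distributed as dist

# At the end of training, destroy the default process group explicitly
# so NCCL resources are released before interpreter exit.
if dist.is_initialized():
    dist.destroy_process_group()
```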
253
+ + set +x
254
+ + set +x
255
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
256
+ + export PROF_CTX_LENGTH=2048
257
+ + PROF_CTX_LENGTH=2048
258
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp4.cp4.bs8.json'
259
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp4.cp4.bs8.json' ']'
260
+ + echo 'Running ctx_length=2048, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=8'
261
+ + srun bash ./attnserver.sh
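Editor's note: the `[ -f ... ]` guard a few lines up tests against the pattern string itself (the `*` appears quoted in the trace, so no file matched), which means the skip-if-trace-exists check apparently never fires. A hedged sketch of the intended check, written in Python to match the other sketches here, with the path copied from the trace:

```python
import glob

pattern = ("/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/"
           "mytrace.L2048*tp4.cp4.bs8.json")
# glob.glob expands the wildcard, which a quoted `[ -f pattern ]` cannot;
# skip the run only when a matching trace file already exists.
if glob.glob(pattern):
    print("trace already exists, skipping this ctx_length")
```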
262
+ srun: Step created for StepId=343222.1
263
+ + which python3
264
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343222 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
265
+ + which python3
266
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343222 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
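Editor's note: for quick reference, the attention geometry implied by the flags above, worked out with simple arithmetic on the logged values:

```python
# Values taken from the launch command above.
hidden_size, n_heads, n_kv_groups, tp = 4096, 64, 16, 4

head_dim = hidden_size // n_heads        # 64
q_per_kv = n_heads // n_kv_groups        # 4 query heads share each KV group
q_heads_per_rank = n_heads // tp         # 16 query heads per TP rank
kv_groups_per_rank = n_kv_groups // tp   # 4 KV groups per TP rank
print(head_dim, q_per_kv, q_heads_per_rank, kv_groups_per_rank)
```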
267
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
268
+ and will be removed in future. Use torchrun.
269
+ Note that --use-env is set by default in torchrun.
270
+ If your script expects `--local-rank` argument to be set, please
271
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
272
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
273
+ further instructions
274
+
275
+ main()
276
+ W0621 21:29:00.085000 1061231 site-packages/torch/distributed/run.py:766]
277
+ W0621 21:29:00.085000 1061231 site-packages/torch/distributed/run.py:766] *****************************************
278
+ W0621 21:29:00.085000 1061231 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
279
+ W0621 21:29:00.085000 1061231 site-packages/torch/distributed/run.py:766] *****************************************
280
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
281
+ and will be removed in future. Use torchrun.
282
+ Note that --use-env is set by default in torchrun.
283
+ If your script expects `--local-rank` argument to be set, please
284
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
285
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
286
+ further instructions
287
+
288
+ main()
289
+ W0621 21:29:00.476000 784631 site-packages/torch/distributed/run.py:766]
290
+ W0621 21:29:00.476000 784631 site-packages/torch/distributed/run.py:766] *****************************************
291
+ W0621 21:29:00.476000 784631 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
292
+ W0621 21:29:00.476000 784631 site-packages/torch/distributed/run.py:766] *****************************************
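Editor's note: the launcher FutureWarning repeats once per node. torch.distributed.launch is being retired in favor of torchrun, which exports LOCAL_RANK in the environment instead of passing a --local-rank argument. A minimal sketch of the script-side change the warning asks for:

```python
import os

# torchrun sets LOCAL_RANK for every worker, so the script no longer
# needs to accept (or parse) a --local-rank CLI argument.
local_rank = int(os.environ["LOCAL_RANK"])
```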
293
+ [rank4]:[W621 21:29:23.728649624 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
294
+ [rank2]:[W621 21:29:23.728656059 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
295
+ [rank6]:[W621 21:29:23.728674260 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
296
+ [rank12]:[W621 21:29:23.395815047 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
297
+ [rank3]:[W621 21:29:23.735183297 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
298
+ [rank7]:[W621 21:29:23.735304313 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
299
+ [rank5]:[W621 21:29:23.735329425 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
300
+ [rank10]:[W621 21:29:23.396762277 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
301
+ [rank14]:[W621 21:29:23.396983837 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
302
+ [rank11]:[W621 21:29:23.397081087 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
303
+ [rank15]:[W621 21:29:23.397194686 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
304
+ [rank13]:[W621 21:29:23.397278175 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
305
+ [rank1]:[W621 21:29:23.737226675 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
306
+ [rank9]:[W621 21:29:23.397314848 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
307
+ [rank8]:[W621 21:29:23.475848265 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
308
+ [rank0]:[W621 21:29:23.866789812 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
309
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
310
+ warnings.warn(
311
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
312
+ warnings.warn(
313
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
314
+ warnings.warn(
315
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
316
+ warnings.warn(
317
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
318
+ warnings.warn(
319
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
320
+ warnings.warn(
321
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
322
+ warnings.warn(
323
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
324
+ warnings.warn(
325
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
326
+ warnings.warn(
327
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
328
+ warnings.warn(
329
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
330
+ warnings.warn(
331
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
332
+ warnings.warn(
333
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
334
+ warnings.warn(
335
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
336
+ warnings.warn(
337
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
338
+ warnings.warn(
339
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
340
+ warnings.warn(
341
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
342
+ warnings.warn(
343
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
344
+ warnings.warn(
345
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
346
+ warnings.warn(
347
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
348
+ warnings.warn(
349
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
350
+ warnings.warn(
351
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
352
+ warnings.warn(
353
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
354
+ warnings.warn(
355
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
356
+ warnings.warn(
357
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
358
+ warnings.warn(
359
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
360
+ warnings.warn(
361
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
362
+ warnings.warn(
363
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
364
+ warnings.warn(
365
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
366
+ warnings.warn(
367
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
368
+ warnings.warn(
369
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
370
+ warnings.warn(
371
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
372
+ warnings.warn(
373
+ [rank1]:[W621 21:29:54.021955365 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
374
+ [rank2]:[W621 21:29:54.131913908 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
375
+ [rank12]:[W621 21:29:54.877377294 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
376
+ [rank15]:[W621 21:29:54.879924554 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
377
+ [rank10]:[W621 21:29:54.883810401 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
378
+ [rank3]:[W621 21:29:54.254421677 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
379
+ [rank0]:[W621 21:29:54.262802713 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
380
+ [rank7]:[W621 21:29:55.571396648 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
381
+ [rank4]:[W621 21:29:55.589664540 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
382
+ [rank9]:[W621 21:29:55.290141605 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
383
+ [rank11]:[W621 21:29:55.302862537 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
384
+ [rank5]:[W621 21:29:55.659768499 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
385
+ [rank8]:[W621 21:29:55.372452319 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
386
+ [rank14]:[W621 21:29:55.377542808 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
387
+ [rank6]:[W621 21:29:55.726896148 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
388
+ [rank13]:[W621 21:29:55.573446127 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
389
+ + set +x
390
+ + set +x
391
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
392
+ + export PROF_CTX_LENGTH=4096
393
+ + PROF_CTX_LENGTH=4096
394
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L4096*tp4.cp4.bs8.json'
395
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L4096*tp4.cp4.bs8.json' ']'
396
+ + echo 'Running ctx_length=4096, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=8'
397
+ + srun bash ./attnserver.sh
398
+ + which python3
399
+ + which python3
400
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343222 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 4096 --max-position-embeddings 4096 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
401
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343222 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 4096 --max-position-embeddings 4096 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
402
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
403
+ and will be removed in future. Use torchrun.
404
+ Note that --use-env is set by default in torchrun.
405
+ If your script expects `--local-rank` argument to be set, please
406
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
407
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
408
+ further instructions
409
+
410
+ main()
411
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
412
+ and will be removed in future. Use torchrun.
413
+ Note that --use-env is set by default in torchrun.
414
+ If your script expects `--local-rank` argument to be set, please
415
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
416
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
417
+ further instructions
418
+
419
+ main()
420
+ W0621 21:30:03.967000 1064462 site-packages/torch/distributed/run.py:766]
421
+ W0621 21:30:03.967000 1064462 site-packages/torch/distributed/run.py:766] *****************************************
422
+ W0621 21:30:03.967000 1064462 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
423
+ W0621 21:30:03.967000 1064462 site-packages/torch/distributed/run.py:766] *****************************************
424
+ W0621 21:30:03.966000 787936 site-packages/torch/distributed/run.py:766]
425
+ W0621 21:30:03.966000 787936 site-packages/torch/distributed/run.py:766] *****************************************
426
+ W0621 21:30:03.966000 787936 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
427
+ W0621 21:30:03.966000 787936 site-packages/torch/distributed/run.py:766] *****************************************
428
+ [rank6]:[W621 21:30:28.432790823 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
429
+ [rank2]:[W621 21:30:28.432795530 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
430
+ [rank3]:[W621 21:30:28.432795406 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
431
+ [rank7]:[W621 21:30:28.432826350 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
432
+ [rank4]:[W621 21:30:28.432845905 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
433
+ [rank5]:[W621 21:30:28.432887468 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
434
+ [rank1]:[W621 21:30:28.432912494 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
435
+ [rank11]:[W621 21:30:28.114796837 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
436
+ [rank15]:[W621 21:30:28.115073416 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
437
+ [rank14]:[W621 21:30:28.115197633 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
438
+ [rank10]:[W621 21:30:28.115367230 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
439
+ [rank13]:[W621 21:30:28.115434657 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
440
+ [rank9]:[W621 21:30:28.116139482 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
441
+ [rank12]:[W621 21:30:28.116387434 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
442
+ [rank8]:[W621 21:30:28.205125133 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
443
+ [rank0]:[W621 21:30:28.561089388 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
444
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
445
+ warnings.warn(
446
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
447
+ warnings.warn(
448
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
449
+ warnings.warn(
450
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
451
+ warnings.warn(
452
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
453
+ warnings.warn(
454
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
455
+ warnings.warn(
456
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
457
+ warnings.warn(
458
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
459
+ warnings.warn(
460
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
461
+ warnings.warn(
462
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
463
+ warnings.warn(
464
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
465
+ warnings.warn(
466
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
467
+ warnings.warn(
468
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
469
+ warnings.warn(
470
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
471
+ warnings.warn(
472
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
473
+ warnings.warn(
474
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
475
+ warnings.warn(
476
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
477
+ warnings.warn(
478
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
479
+ warnings.warn(
480
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
481
+ warnings.warn(
482
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
483
+ warnings.warn(
484
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
485
+ warnings.warn(
486
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
487
+ warnings.warn(
488
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
489
+ warnings.warn(
490
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
491
+ warnings.warn(
492
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
493
+ warnings.warn(
494
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
495
+ warnings.warn(
496
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
497
+ warnings.warn(
498
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
499
+ warnings.warn(
500
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
501
+ warnings.warn(
502
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
503
+ warnings.warn(
504
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
505
+ warnings.warn(
506
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
507
+ warnings.warn(
508
+ [rank0]: Traceback (most recent call last):
509
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
510
+ [rank0]: pretrain(
511
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
512
+ [rank0]: save_checkpoint(
513
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
514
+ [rank0]: async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
515
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
516
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
517
+ [rank0]: sharded_strategy.save(sharded_state_dict, checkpoint_dir)
518
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
519
+ [rank0]: return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
520
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
521
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
522
+ [rank0]: async_calls.maybe_finalize_async_calls(blocking=True)
523
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
524
+ [rank0]: finalize_fn()
525
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
526
+ [rank0]: save_state_dict_async_finalize(*save_state_dict_ret)
527
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 243, in save_state_dict_async_finalize
528
+ [rank0]: storage_writer.finish(global_metadata, all_results)
529
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 483, in finish
530
+ [rank0]: super().finish(metadata, results)
531
+ [rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py", line 697, in finish
532
+ [rank0]: with self.fs.create_stream(tmp_path, "wb") as metadata_file:
533
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
534
+ [rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/contextlib.py", line 137, in __enter__
535
+ [rank0]: return next(self.gen)
536
+ [rank0]: ^^^^^^^^^^^^^^
537
+ [rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py", line 476, in create_stream
538
+ [rank0]: with path.open(mode) as stream:
539
+ [rank0]: ^^^^^^^^^^^^^^^
540
+ [rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/pathlib.py", line 1013, in open
541
+ [rank0]: return io.open(self, mode, buffering, encoding, errors, newline)
542
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
543
+ [rank0]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/.metadata.tmp'
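Editor's note: the run dies while finalizing the distributed checkpoint. Rank 0 tries to open gpt-checkpoint/iter_0000010/.metadata.tmp and the directory is not there, which usually points at a checkpoint directory that was never created on this node or was cleaned up underneath the writer on the shared filesystem. One plausible guard, not a confirmed root-cause fix, assuming the path from the traceback:

```python
import os

# Path taken from the traceback above. Create the iteration directory
# idempotently before finalization; exist_ok avoids failing when another
# rank (or an earlier step) created it first.
os.makedirs("gpt-checkpoint/iter_0000010", exist_ok=True)
```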
attnserver.run_attnserver.slurm.sh.343222.out.log ADDED
 
attnserver.run_attnserver.slurm.sh.343223.err.log ADDED
@@ -0,0 +1,156 @@
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ export _CE_M=
+ +++ _CE_M=
+ +++ export _CE_CONDA=
+ +++ _CE_CONDA=
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ '[' -z x ']'
+ ++ conda activate
+ ++ local cmd=activate
+ ++ case "$cmd" in
+ ++ __conda_activate activate
+ ++ '[' -n '' ']'
+ ++ local ask_conda
+ +++ PS1=
+ +++ __conda_exe shell.posix activate
+ +++ '[' -n '' ']'
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
+ ++ ask_conda='unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(base) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_SHLVL='\''1'\''
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ ++ eval 'unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(base) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_SHLVL='\''1'\''
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ +++ unset _CE_M
+ +++ unset _CE_CONDA
+ +++ PS1='(base) '
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ +++ export CONDA_SHLVL=1
+ +++ CONDA_SHLVL=1
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
+ +++ CONDA_PROMPT_MODIFIER='(base) '
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ ++ __conda_hashr
+ ++ '[' -n '' ']'
+ ++ '[' -n '' ']'
+ ++ hash -r
+ + conda activate junda-attnserver
+ + local cmd=activate
+ + case "$cmd" in
+ + __conda_activate activate junda-attnserver
+ + '[' -n '' ']'
+ + local ask_conda
+ ++ PS1='(base) '
+ ++ __conda_exe shell.posix activate junda-attnserver
+ ++ '[' -n '' ']'
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ + ask_conda='unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(junda-attnserver) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
+ export CONDA_SHLVL='\''2'\''
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ + eval 'unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(junda-attnserver) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
+ export CONDA_SHLVL='\''2'\''
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ ++ unset _CE_M
+ ++ unset _CE_CONDA
+ ++ PS1='(junda-attnserver) '
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
+ ++ export CONDA_SHLVL=2
+ ++ CONDA_SHLVL=2
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ + __conda_hashr
+ + '[' -n '' ']'
+ + '[' -n '' ']'
+ + hash -r
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + export PROF_TP_SIZE=4
+ + PROF_TP_SIZE=4
+ + export PROF_CP_SIZE=4
+ + PROF_CP_SIZE=4
+ + export PROF_BS=16
+ + PROF_BS=16
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ + export PROF_CTX_LENGTH=1024
+ + PROF_CTX_LENGTH=1024
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp4.bs16.json'
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp4.bs16.json' ']'
+ + echo 'Running ctx_length=1024, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=16'
+ + srun bash ./attnserver.sh
+ + which python3
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343223 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-703:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343223 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-703:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
+
+ main()
+ W0621 21:31:20.472000 2514785 site-packages/torch/distributed/run.py:766]
+ W0621 21:31:20.472000 2514785 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:31:20.472000 2514785 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:31:20.472000 2514785 site-packages/torch/distributed/run.py:766] *****************************************
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
+
+ main()
+ W0621 21:31:20.473000 2471192 site-packages/torch/distributed/run.py:766]
+ W0621 21:31:20.473000 2471192 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:31:20.473000 2471192 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:31:20.473000 2471192 site-packages/torch/distributed/run.py:766] *****************************************
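
Both node-rank launchers in this job emit the same FutureWarning: torch.distributed.launch is deprecated in favor of torchrun, which enables --use-env behavior by default. Following the warning's own instruction, a hedged sketch of what the launched script would do instead of parsing a --local-rank argument (variable names illustrative):

```python
import os

import torch

# Under torchrun (and torch.distributed.launch with --use-env), rank info
# arrives via environment variables rather than a --local-rank CLI argument.
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
rank = int(os.environ.get("RANK", "0"))
world_size = int(os.environ.get("WORLD_SIZE", "1"))

torch.cuda.set_device(local_rank)  # bind this process to its local GPU
print(f"rank {rank}/{world_size} using cuda:{local_rank}")
```

The launch command would change only at the entry point: `torchrun --nproc_per_node 8 --nnodes 2 ...` replaces `python3 -m torch.distributed.launch` with the same rendezvous flags.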
attnserver.run_attnserver.slurm.sh.343223.out.log ADDED
@@ -0,0 +1,19 @@
+ Running ctx_length=1024, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=16
+ Cleaning up checkpoint directory: gpt-checkpoint
+ Cleaning up checkpoint directory: gpt-checkpoint
+ --------------------------------
+ CTX_LENGTH: 1024
+ TP_SIZE: 4
+ CP_SIZE: 4
+ CHECKPOINT_PATH: gpt-checkpoint
+ --------------------------------
+ CTX_LENGTH: 1024
+ TP_SIZE: 4
+ CP_SIZE: 4
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+ --------------------------------
+ CHECKPOINT_PATH: gpt-checkpoint
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+ --------------------------------
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
attnserver.run_attnserver.slurm.sh.343225.err.log ADDED
@@ -0,0 +1,199 @@
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ export _CE_M=
+ +++ _CE_M=
+ +++ export _CE_CONDA=
+ +++ _CE_CONDA=
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ '[' -z x ']'
+ ++ conda activate
+ ++ local cmd=activate
+ ++ case "$cmd" in
+ ++ __conda_activate activate
+ ++ '[' -n '' ']'
+ ++ local ask_conda
+ +++ PS1=
+ +++ __conda_exe shell.posix activate
+ +++ '[' -n '' ']'
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
+ ++ ask_conda='unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(base) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_SHLVL='\''1'\''
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ ++ eval 'unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(base) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_SHLVL='\''1'\''
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ +++ unset _CE_M
+ +++ unset _CE_CONDA
+ +++ PS1='(base) '
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ +++ export CONDA_SHLVL=1
+ +++ CONDA_SHLVL=1
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
+ +++ CONDA_PROMPT_MODIFIER='(base) '
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ ++ __conda_hashr
+ ++ '[' -n '' ']'
+ ++ '[' -n '' ']'
+ ++ hash -r
+ + conda activate junda-attnserver
+ + local cmd=activate
+ + case "$cmd" in
+ + __conda_activate activate junda-attnserver
+ + '[' -n '' ']'
+ + local ask_conda
+ ++ PS1='(base) '
+ ++ __conda_exe shell.posix activate junda-attnserver
+ ++ '[' -n '' ']'
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ + ask_conda='unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(junda-attnserver) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
+ export CONDA_SHLVL='\''2'\''
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ + eval 'unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(junda-attnserver) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
+ export CONDA_SHLVL='\''2'\''
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ ++ unset _CE_M
+ ++ unset _CE_CONDA
+ ++ PS1='(junda-attnserver) '
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
+ ++ export CONDA_SHLVL=2
+ ++ CONDA_SHLVL=2
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ + __conda_hashr
+ + '[' -n '' ']'
+ + '[' -n '' ']'
+ + hash -r
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + export PROF_TP_SIZE=4
+ + PROF_TP_SIZE=4
+ + export PROF_CP_SIZE=2
+ + PROF_CP_SIZE=2
+ + export PROF_BS=1
+ + PROF_BS=1
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ + export PROF_CTX_LENGTH=1024
+ + PROF_CTX_LENGTH=1024
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp2.bs1.json'
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp2.bs1.json' ']'
+ + echo 'Running ctx_length=1024, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=1'
+ + srun bash ./attnserver.sh
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343225 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-768:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
+
+ main()
+ W0621 21:30:27.433000 2205289 site-packages/torch/distributed/run.py:766]
+ W0621 21:30:27.433000 2205289 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:30:27.433000 2205289 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:30:27.433000 2205289 site-packages/torch/distributed/run.py:766] *****************************************
+ [rank4]:[W621 21:30:48.113363797 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank5]:[W621 21:30:48.114231562 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank1]:[W621 21:30:48.114733794 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank6]:[W621 21:30:48.121925951 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank2]:[W621 21:30:48.121938989 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank7]:[W621 21:30:48.122303951 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank3]:[W621 21:30:48.122651388 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank0]:[W621 21:30:48.248971000 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ [rank3]:[W621 21:31:17.833792023 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank1]:[W621 21:31:17.852292424 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank0]:[W621 21:31:17.871077319 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank4]:[W621 21:31:17.013619287 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank2]:[W621 21:31:17.056249802 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank6]:[W621 21:31:17.096016915 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank7]:[W621 21:31:17.414145172 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank5]:[W621 21:31:18.759104793 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ + set +x
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ + export PROF_CTX_LENGTH=2048
+ + PROF_CTX_LENGTH=2048
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp4.cp2.bs1.json'
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp4.cp2.bs1.json' ']'
+ + echo 'Running ctx_length=2048, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=1'
+ + srun bash ./attnserver.sh
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343225 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-768:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
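
Two warnings recur through this run: NCCL cannot verify the rank-to-GPU mapping because no device was bound when the process group was created, and destroy_process_group() is never called before exit. A hedged sketch of init/teardown that would address both, assuming torchrun-style environment variables and a PyTorch version whose init_process_group accepts device_id (2.3+); this is illustrative, not the profiling script's actual code:

```python
import os

import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Passing device_id pins the rank->GPU mapping up front, which silences the
# "device used by this process is currently unknown" NCCL warning.
dist.init_process_group(
    backend="nccl",
    device_id=torch.device(f"cuda:{local_rank}"),
)

try:
    ...  # training / profiling work would go here
finally:
    dist.destroy_process_group()  # avoids the shutdown resource-leak warning
```

Neither warning is fatal here; the runs proceed and relaunch at the next context length, so they are noise rather than errors in these logs.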
attnserver.run_attnserver.slurm.sh.343225.out.log ADDED
The diff for this file is too large to render. See raw diff