GindaChen committed on
Commit bdc3d82 · verified · 1 Parent(s): 95992b5

Upload folder using huggingface_hub

attnserver.run_attnserver.slurm.sh.343207.out.log CHANGED
@@ -19193,3 +19193,182 @@ done with setup ...
 training ...
 Setting rerun_state_machine.current_iteration to 0...
 [before the start of training step] datetime: 2025-06-21 21:57:50
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ Start exporting trace 0
+ Done exporting trace 0
+ [2025-06-21 21:58:40] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 49759.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ Number of parameters in transformer block in billions: 0.35
+ Number of parameters in embedding layers in billions: 0.21
+ Total number of parameters in billions: 0.56
+ Number of parameters in most loaded shard in billions: 0.0703
+ Theoretical memory footprints: weight and optimizer=1206.09 MB
+ [Rank 3] (after 1 iterations) memory (MB) | allocated: 23474.22607421875 | max allocated: 41389.73681640625 | reserved: 43860.0 | max reserved: 43860.0
+ [Rank 6] (after 1 iterations) memory (MB) | allocated: 23474.22607421875 | max allocated: 41389.73681640625 | reserved: 43860.0 | max reserved: 43860.0
+ [Rank 7] (after 1 iterations) memory (MB) | allocated: 23474.22607421875 | max allocated: 41389.73681640625 | reserved: 43860.0 | max reserved: 43860.0
+ [Rank 5] (after 1 iterations) memory (MB) | allocated: 23474.22607421875 | max allocated: 41389.73681640625 | reserved: 43860.0 | max reserved: 43860.0
+ [Rank 2] (after 1 iterations) memory (MB) | allocated: 23474.22607421875 | max allocated: 41389.73681640625 | reserved: 43988.0 | max reserved: 43988.0
+ [Rank 0] (after 1 iterations) memory (MB) | allocated: 23474.22607421875 | max allocated: 41389.73681640625 | reserved: 43860.0 | max reserved: 43860.0
+ [Rank 4] (after 1 iterations) memory (MB) | allocated: 23474.22607421875 | max allocated: 41389.73681640625 | reserved: 43860.0 | max reserved: 43860.0
+ [Rank 1] (after 1 iterations) memory (MB) | allocated: 23474.22607421875 | max allocated: 41389.73681640625 | reserved: 44660.0 | max reserved: 44660.0
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 131072])
+ batch tensor after cp: labels torch.Size([1, 131072])
+ batch tensor after cp: loss_mask torch.Size([1, 131072])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 131072])
+ Start exporting trace 1
+ Done exporting trace 1
+ [2025-06-21 21:59:22] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 41689.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
attnserver.run_attnserver.slurm.sh.343213.out.log CHANGED
@@ -53447,3 +53447,336 @@ batch tensor after cp: labels torch.Size([1, 12288])
 batch tensor after cp: loss_mask torch.Size([1, 12288])
 batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
 batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ Start exporting trace 5
+ Done exporting trace 5
+ [2025-06-21 21:58:24] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 80453.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 12288])
+ batch tensor after cp: labels torch.Size([1, 12288])
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 12288])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
53765
+ batch tensor: loss_mask torch.Size([1, 98304])
53766
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
53767
+ batch tensor: position_ids torch.Size([1, 98304])
53768
+ batch tensor after cp: tokens torch.Size([1, 12288])
53769
+ batch tensor after cp: labels torch.Size([1, 12288])
53770
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
53771
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
53772
+ batch tensor after cp: position_ids torch.Size([1, 12288])
53773
+ batch tensor: tokens torch.Size([1, 98304])
53774
+ batch tensor: labels torch.Size([1, 98304])
53775
+ batch tensor: loss_mask torch.Size([1, 98304])
53776
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
53777
+ batch tensor: position_ids torch.Size([1, 98304])
53778
+ batch tensor after cp: tokens torch.Size([1, 12288])
53779
+ batch tensor after cp: labels torch.Size([1, 12288])
53780
+ batch tensor after cp: loss_mask torch.Size([1, 12288])
53781
+ batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 98304])
53782
+ batch tensor after cp: position_ids torch.Size([1, 12288])
attnserver.run_attnserver.slurm.sh.343214.out.log CHANGED
@@ -39716,3 +39716,586 @@ batch tensor after cp: position_ids torch.Size([2, 20480])
  Start exporting trace 5
  Done exporting trace 5
  [2025-06-21 21:58:04] iteration        6/      10 | consumed samples:           6 | elapsed time per iteration (ms): 57588.2 | learning rate: 0.000000E+00 | global batch size:     1 | loss scale: 134217728.0 | number of skipped iterations:   1 | number of nan iterations:   0 |
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ Start exporting trace 6
+ Done exporting trace 6
+ [2025-06-21 21:59:02] iteration        7/      10 | consumed samples:           7 | elapsed time per iteration (ms): 58097.7 | learning rate: 0.000000E+00 | global batch size:     1 | loss scale: 67108864.0 | number of skipped iterations:   1 | number of nan iterations:   0 |
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
+ batch tensor after cp: position_ids torch.Size([2, 20480])
+ batch tensor: tokens torch.Size([2, 163840])
+ batch tensor: labels torch.Size([2, 163840])
+ batch tensor: loss_mask torch.Size([2, 163840])
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
+ batch tensor: position_ids torch.Size([2, 163840])
+ batch tensor after cp: tokens torch.Size([2, 20480])
+ batch tensor after cp: labels torch.Size([2, 20480])
40179
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40180
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40181
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40182
+ batch tensor: tokens torch.Size([2, 163840])
40183
+ batch tensor: labels torch.Size([2, 163840])
40184
+ batch tensor: loss_mask torch.Size([2, 163840])
40185
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40186
+ batch tensor: position_ids torch.Size([2, 163840])
40187
+ batch tensor after cp: tokens torch.Size([2, 20480])
40188
+ batch tensor after cp: labels torch.Size([2, 20480])
40189
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40190
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40191
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40192
+ batch tensor: tokens torch.Size([2, 163840])
40193
+ batch tensor: labels torch.Size([2, 163840])
40194
+ batch tensor: loss_mask torch.Size([2, 163840])
40195
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40196
+ batch tensor: position_ids torch.Size([2, 163840])
40197
+ batch tensor after cp: tokens torch.Size([2, 20480])
40198
+ batch tensor after cp: labels torch.Size([2, 20480])
40199
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40200
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40201
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40202
+ batch tensor: tokens torch.Size([2, 163840])
40203
+ batch tensor: labels torch.Size([2, 163840])
40204
+ batch tensor: loss_mask torch.Size([2, 163840])
40205
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40206
+ batch tensor: position_ids torch.Size([2, 163840])
40207
+ batch tensor after cp: tokens torch.Size([2, 20480])
40208
+ batch tensor after cp: labels torch.Size([2, 20480])
40209
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40210
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40211
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40212
+ batch tensor: tokens torch.Size([2, 163840])
40213
+ batch tensor: labels torch.Size([2, 163840])
40214
+ batch tensor: loss_mask torch.Size([2, 163840])
40215
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40216
+ batch tensor: position_ids torch.Size([2, 163840])
40217
+ batch tensor after cp: tokens torch.Size([2, 20480])
40218
+ batch tensor after cp: labels torch.Size([2, 20480])
40219
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40220
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40221
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40222
+ batch tensor: tokens torch.Size([2, 163840])
40223
+ batch tensor: labels torch.Size([2, 163840])
40224
+ batch tensor: loss_mask torch.Size([2, 163840])
40225
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40226
+ batch tensor: position_ids torch.Size([2, 163840])
40227
+ batch tensor after cp: tokens torch.Size([2, 20480])
40228
+ batch tensor after cp: labels torch.Size([2, 20480])
40229
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40230
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40231
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40232
+ batch tensor: tokens torch.Size([2, 163840])
40233
+ batch tensor: labels torch.Size([2, 163840])
40234
+ batch tensor: loss_mask torch.Size([2, 163840])
40235
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40236
+ batch tensor: position_ids torch.Size([2, 163840])
40237
+ batch tensor after cp: tokens torch.Size([2, 20480])
40238
+ batch tensor after cp: labels torch.Size([2, 20480])
40239
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40240
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40241
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40242
+ batch tensor: tokens torch.Size([2, 163840])
40243
+ batch tensor: labels torch.Size([2, 163840])
40244
+ batch tensor: loss_mask torch.Size([2, 163840])
40245
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40246
+ batch tensor: position_ids torch.Size([2, 163840])
40247
+ batch tensor after cp: tokens torch.Size([2, 20480])
40248
+ batch tensor after cp: labels torch.Size([2, 20480])
40249
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40250
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40251
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40252
+ batch tensor: tokens torch.Size([2, 163840])
40253
+ batch tensor: labels torch.Size([2, 163840])
40254
+ batch tensor: loss_mask torch.Size([2, 163840])
40255
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40256
+ batch tensor: position_ids torch.Size([2, 163840])
40257
+ batch tensor after cp: tokens torch.Size([2, 20480])
40258
+ batch tensor after cp: labels torch.Size([2, 20480])
40259
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40260
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40261
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40262
+ batch tensor: tokens torch.Size([2, 163840])
40263
+ batch tensor: labels torch.Size([2, 163840])
40264
+ batch tensor: loss_mask torch.Size([2, 163840])
40265
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40266
+ batch tensor: position_ids torch.Size([2, 163840])
40267
+ batch tensor after cp: tokens torch.Size([2, 20480])
40268
+ batch tensor after cp: labels torch.Size([2, 20480])
40269
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40270
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40271
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40272
+ batch tensor: tokens torch.Size([2, 163840])
40273
+ batch tensor: labels torch.Size([2, 163840])
40274
+ batch tensor: loss_mask torch.Size([2, 163840])
40275
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40276
+ batch tensor: position_ids torch.Size([2, 163840])
40277
+ batch tensor after cp: tokens torch.Size([2, 20480])
40278
+ batch tensor after cp: labels torch.Size([2, 20480])
40279
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40280
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40281
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40282
+ batch tensor: tokens torch.Size([2, 163840])
40283
+ batch tensor: labels torch.Size([2, 163840])
40284
+ batch tensor: loss_mask torch.Size([2, 163840])
40285
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40286
+ batch tensor: position_ids torch.Size([2, 163840])
40287
+ batch tensor after cp: tokens torch.Size([2, 20480])
40288
+ batch tensor after cp: labels torch.Size([2, 20480])
40289
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40290
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40291
+ batch tensor after cp: position_ids torch.Size([2, 20480])
40292
+ batch tensor: tokens torch.Size([2, 163840])
40293
+ batch tensor: labels torch.Size([2, 163840])
40294
+ batch tensor: loss_mask torch.Size([2, 163840])
40295
+ batch tensor: attention_mask torch.Size([2, 1, 163840, 163840])
40296
+ batch tensor: position_ids torch.Size([2, 163840])
40297
+ batch tensor after cp: tokens torch.Size([2, 20480])
40298
+ batch tensor after cp: labels torch.Size([2, 20480])
40299
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
40300
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 163840])
40301
+ batch tensor after cp: position_ids torch.Size([2, 20480])
attnserver.run_attnserver.slurm.sh.343215.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343220.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343220.out.log CHANGED
@@ -25136,3 +25136,103 @@ WARNING: constraints for invoking optimized fused softmax kernel are not met. We
 time to initialize megatron (seconds): 8.313
 [after megatron is initialized] datetime: 2025-06-21 21:58:11
 building GPT model ...
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 676924416
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
+ Params for bucket 1 (676924416 elements, 676924416 padded size):
+ module.decoder.final_layernorm.bias
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.mlp.linear_fc2.weight
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.bias
+ module.decoder.layers.0.mlp.linear_fc2.bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.bias
+ module.decoder.final_layernorm.weight
+ module.decoder.layers.1.mlp.linear_fc1.weight
+ module.decoder.layers.0.mlp.linear_fc1.weight
+ module.decoder.layers.1.mlp.linear_fc2.bias
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.1.mlp.linear_fc1.bias
+ module.decoder.layers.0.mlp.linear_fc1.bias
+ module.embedding.word_embeddings.weight
+ module.decoder.layers.1.self_attention.linear_qkv.weight
+ module.decoder.layers.1.self_attention.linear_proj.weight
+ module.decoder.layers.0.self_attention.linear_qkv.weight
+ module.decoder.layers.0.self_attention.linear_proj.weight
+ module.decoder.layers.1.mlp.linear_fc2.weight
+ module.decoder.layers.1.self_attention.linear_proj.bias
+ module.decoder.layers.0.self_attention.linear_proj.bias
+ module.embedding.position_embeddings.weight
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14990c75e420>, config_logger_dir='')
+ INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 676924416
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 676924416
+ (TP, PP, encoder TP, encoder PP) mismatch after resume ((4, 1, 0, 0) vs (2, 1, 0, 0) from checkpoint): RNG state will be ignored
+ (TP, PP, encoder TP, encoder PP) mismatch after resume ((4, 1, 0, 0) vs (2, 1, 0, 0) from checkpoint): Rerun state will be ignored
+ loading distributed checkpoint from gpt-checkpoint at iteration 10
attnserver.run_attnserver.slurm.sh.343225.out.log CHANGED
@@ -21350,3 +21350,96 @@ batch tensor after cp: labels torch.Size([1, 49152])
 batch tensor after cp: loss_mask torch.Size([1, 49152])
 batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
 batch tensor after cp: position_ids torch.Size([1, 49152])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
+ Start exporting trace 5
+ Done exporting trace 5
+ [2025-06-21 21:58:24] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 64617.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
+ batch tensor: tokens torch.Size([1, 98304])
+ batch tensor: labels torch.Size([1, 98304])
+ batch tensor: loss_mask torch.Size([1, 98304])
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
+ batch tensor: position_ids torch.Size([1, 98304])
+ batch tensor after cp: tokens torch.Size([1, 49152])
+ batch tensor after cp: labels torch.Size([1, 49152])
+ batch tensor after cp: loss_mask torch.Size([1, 49152])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 49152, 98304])
+ batch tensor after cp: position_ids torch.Size([1, 49152])
attnserver.run_attnserver.slurm.sh.343226.out.log CHANGED
@@ -17266,3 +17266,136 @@ batch tensor after cp: labels torch.Size([2, 65536])
 batch tensor after cp: loss_mask torch.Size([2, 65536])
 batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
 batch tensor after cp: position_ids torch.Size([2, 65536])
17269
+ batch tensor: tokens torch.Size([2, 131072])
17270
+ batch tensor: labels torch.Size([2, 131072])
17271
+ batch tensor: loss_mask torch.Size([2, 131072])
17272
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
17273
+ batch tensor: position_ids torch.Size([2, 131072])
17274
+ batch tensor after cp: tokens torch.Size([2, 65536])
17275
+ batch tensor after cp: labels torch.Size([2, 65536])
17276
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
17277
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
17278
+ batch tensor after cp: position_ids torch.Size([2, 65536])
17279
+ batch tensor: tokens torch.Size([2, 131072])
17280
+ batch tensor: labels torch.Size([2, 131072])
17281
+ batch tensor: loss_mask torch.Size([2, 131072])
17282
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
17283
+ batch tensor: position_ids torch.Size([2, 131072])
17284
+ batch tensor after cp: tokens torch.Size([2, 65536])
17285
+ batch tensor after cp: labels torch.Size([2, 65536])
17286
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
17287
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
17288
+ batch tensor after cp: position_ids torch.Size([2, 65536])
17289
+ batch tensor: tokens torch.Size([2, 131072])
17290
+ batch tensor: labels torch.Size([2, 131072])
17291
+ batch tensor: loss_mask torch.Size([2, 131072])
17292
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
17293
+ batch tensor: position_ids torch.Size([2, 131072])
17294
+ batch tensor after cp: tokens torch.Size([2, 65536])
17295
+ batch tensor after cp: labels torch.Size([2, 65536])
17296
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
17297
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
17298
+ batch tensor after cp: position_ids torch.Size([2, 65536])
17299
+ batch tensor: tokens torch.Size([2, 131072])
17300
+ batch tensor: labels torch.Size([2, 131072])
17301
+ batch tensor: loss_mask torch.Size([2, 131072])
17302
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
17303
+ batch tensor: position_ids torch.Size([2, 131072])
17304
+ batch tensor after cp: tokens torch.Size([2, 65536])
17305
+ batch tensor after cp: labels torch.Size([2, 65536])
17306
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
17307
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
17308
+ batch tensor after cp: position_ids torch.Size([2, 65536])
17309
+ batch tensor: tokens torch.Size([2, 131072])
17310
+ batch tensor: labels torch.Size([2, 131072])
17311
+ batch tensor: loss_mask torch.Size([2, 131072])
17312
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
17313
+ batch tensor: position_ids torch.Size([2, 131072])
17314
+ batch tensor after cp: tokens torch.Size([2, 65536])
17315
+ batch tensor after cp: labels torch.Size([2, 65536])
17316
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
17317
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
17318
+ batch tensor after cp: position_ids torch.Size([2, 65536])
17319
+ batch tensor: tokens torch.Size([2, 131072])
17320
+ batch tensor: labels torch.Size([2, 131072])
17321
+ batch tensor: loss_mask torch.Size([2, 131072])
17322
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
17323
+ batch tensor: position_ids torch.Size([2, 131072])
17324
+ batch tensor after cp: tokens torch.Size([2, 65536])
17325
+ batch tensor after cp: labels torch.Size([2, 65536])
17326
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
17327
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
17328
+ batch tensor after cp: position_ids torch.Size([2, 65536])
17329
+ Start exporting trace 2
17330
+ Done exporting trace 2
17331
+ [2025-06-21 21:58:49] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 53213.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
17332
+ batch tensor: tokens torch.Size([2, 131072])
17333
+ batch tensor: labels torch.Size([2, 131072])
17334
+ batch tensor: loss_mask torch.Size([2, 131072])
17335
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
17336
+ batch tensor: position_ids torch.Size([2, 131072])
17337
+ batch tensor after cp: tokens torch.Size([2, 65536])
17338
+ batch tensor after cp: labels torch.Size([2, 65536])
17339
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
17340
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
17341
+ batch tensor after cp: position_ids torch.Size([2, 65536])
17342
+ batch tensor: tokens torch.Size([2, 131072])
17343
+ batch tensor: labels torch.Size([2, 131072])
17344
+ batch tensor: loss_mask torch.Size([2, 131072])
17345
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
17346
+ batch tensor: position_ids torch.Size([2, 131072])
17347
+ batch tensor after cp: tokens torch.Size([2, 65536])
17348
+ batch tensor after cp: labels torch.Size([2, 65536])
17349
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
17350
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
17351
+ batch tensor after cp: position_ids torch.Size([2, 65536])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 65536])
+ batch tensor after cp: labels torch.Size([2, 65536])
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 65536])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 65536])
+ batch tensor after cp: labels torch.Size([2, 65536])
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 65536])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 65536])
+ batch tensor after cp: labels torch.Size([2, 65536])
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 65536])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 65536])
+ batch tensor after cp: labels torch.Size([2, 65536])
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 65536])
+ batch tensor: tokens torch.Size([2, 131072])
+ batch tensor: labels torch.Size([2, 131072])
+ batch tensor: loss_mask torch.Size([2, 131072])
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([2, 131072])
+ batch tensor after cp: tokens torch.Size([2, 65536])
+ batch tensor after cp: labels torch.Size([2, 65536])
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
+ batch tensor after cp: position_ids torch.Size([2, 65536])
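The "after cp" shapes above follow a simple rule: with context parallelism, each rank keeps `seq_len / cp_size` of the sequence dimension for tokens, labels, loss mask, and position ids, while the attention mask keeps the full key length so local queries can still attend to every key. A minimal sketch of that shape rule (a hypothetical helper, not Megatron-LM's actual implementation):

```python
# Hypothetical helper reproducing the "batch tensor after cp" shapes logged
# above: per-rank sequence length is seq_len // cp_size, but the attention
# mask stays [B, 1, local_seq, seq_len] (local queries, global keys).
def cp_local_shapes(batch, seq_len, cp_size):
    local = seq_len // cp_size  # per-rank slice of the sequence dimension
    return {
        "tokens": (batch, local),
        "labels": (batch, local),
        "loss_mask": (batch, local),
        "attention_mask": (batch, 1, local, seq_len),
        "position_ids": (batch, local),
    }

shapes = cp_local_shapes(batch=2, seq_len=131072, cp_size=2)
```

For the run above, `cp_local_shapes(2, 131072, 2)` gives `tokens -> (2, 65536)` and `attention_mask -> (2, 1, 65536, 131072)`, matching the logged sizes.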
attnserver.run_attnserver.slurm.sh.343237.err.log CHANGED
@@ -2046,3 +2046,75 @@ W0621 21:54:51.707000 1120324 site-packages/torch/distributed/run.py:766] ******
  warnings.warn(
  /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
+ [rank1]:[W621 21:58:38.212320465 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank3]:[W621 21:58:39.540096928 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank7]:[W621 21:58:39.600834705 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank11]:[W621 21:58:39.271604056 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank13]:[W621 21:58:39.281585895 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank15]:[W621 21:58:39.341994445 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank10]:[W621 21:58:39.666457382 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank0]:[W621 21:58:39.083150728 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank9]:[W621 21:58:39.774865528 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank5]:[W621 21:58:39.154577396 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank12]:[W621 21:58:39.908045005 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank8]:[W621 21:58:40.928288102 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank6]:[W621 21:58:40.328026038 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank14]:[W621 21:58:40.331172688 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank4]:[W621 21:58:40.992897374 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank2]:[W621 21:58:41.506227065 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ W0621 21:59:05.149000 1120324 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-476_1120324_0' has failed to send a keep-alive heartbeat to the rendezvous '343237' due to an error of type RendezvousTimeoutError.
+ + set +x
+ + set +x
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ + export PROF_CTX_LENGTH=65536
+ + PROF_CTX_LENGTH=65536
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp8.bs1.json'
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp2.cp8.bs1.json' ']'
+ + echo 'Running ctx_length=65536, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=1'
+ + srun bash ./attnserver.sh
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343237 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 65536 --max-position-embeddings 65536 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343237 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 65536 --max-position-embeddings 65536 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
+
+ main()
+ W0621 21:59:08.538000 849579 site-packages/torch/distributed/run.py:766]
+ W0621 21:59:08.538000 849579 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:59:08.538000 849579 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:59:08.538000 849579 site-packages/torch/distributed/run.py:766] *****************************************
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
+
+ main()
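The FutureWarning above asks for two changes: launch with `torchrun` instead of `python -m torch.distributed.launch`, and read the local rank from the environment rather than a `--local-rank` argument. A minimal sketch of the script-side change (the helper name is my own, not part of any library):

```python
# Sketch of the migration the FutureWarning suggests: torchrun exports
# LOCAL_RANK (along with RANK and WORLD_SIZE) into each worker's
# environment, so the script reads it instead of parsing --local-rank.
import os

def local_rank_from_env(env=os.environ) -> int:
    # Default to 0 so single-process debugging runs still work.
    return int(env.get("LOCAL_RANK", "0"))

# Launcher side (shell), replacing `python3 -m torch.distributed.launch`:
#   torchrun --nproc_per_node 8 --nnodes 2 --node_rank 1 \
#            --rdzv_id 343237 --rdzv_backend c10d \
#            --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py ...
```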
+ W0621 21:59:08.581000 1124226 site-packages/torch/distributed/run.py:766]
+ W0621 21:59:08.581000 1124226 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:59:08.581000 1124226 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:59:08.581000 1124226 site-packages/torch/distributed/run.py:766] *****************************************
+ [rank2]:[W621 21:59:34.872941463 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank6]:[W621 21:59:34.873518740 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank1]:[W621 21:59:34.873538258 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank4]:[W621 21:59:34.873652088 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank5]:[W621 21:59:34.874032995 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank3]:[W621 21:59:34.874207362 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank7]:[W621 21:59:34.875025848 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank11]:[W621 21:59:34.544452962 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank10]:[W621 21:59:34.544486163 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank12]:[W621 21:59:34.544489592 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank15]:[W621 21:59:34.544505034 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank14]:[W621 21:59:34.544539355 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank9]:[W621 21:59:34.544642645 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank13]:[W621 21:59:34.544805119 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank8]:[W621 21:59:34.671912210 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank0]:[W621 21:59:34.026347727 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
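The ProcessGroupNCCL warnings above (unknown rank-to-GPU mapping, and the earlier destroy_process_group() leak warning) both point at the same fix: pass an explicit per-rank device to `init_process_group()` and tear the group down before exit. A hedged sketch of that pattern; `local_device_str` is a hypothetical helper, and the torch calls are shown in comments since they require a running distributed job:

```python
# Hypothetical helper: map a node-local rank to its CUDA device string so
# NCCL knows the rank -> GPU mapping up front (silences the warning above).
import os

def local_device_str(local_rank: int) -> str:
    return f"cuda:{local_rank}"

# In the training script (assumes torch with CUDA and a torchrun launch):
#   import torch, torch.distributed as dist
#   dev = torch.device(local_device_str(int(os.environ["LOCAL_RANK"])))
#   dist.init_process_group(backend="nccl", device_id=dev)
#   ...training loop...
#   dist.destroy_process_group()  # avoids the shutdown leak warning
```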
attnserver.run_attnserver.slurm.sh.343237.out.log CHANGED
@@ -28995,3 +28995,818 @@ batch tensor after cp: labels torch.Size([1, 6144])
  batch tensor after cp: loss_mask torch.Size([1, 6144])
  batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
  batch tensor after cp: position_ids torch.Size([1, 6144])
+ batch tensor: tokens torch.Size([1, 49152])
+ batch tensor: labels torch.Size([1, 49152])
+ batch tensor: loss_mask torch.Size([1, 49152])
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
+ batch tensor: position_ids torch.Size([1, 49152])
+ batch tensor after cp: tokens torch.Size([1, 6144])
+ batch tensor after cp: labels torch.Size([1, 6144])
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
+ batch tensor after cp: position_ids torch.Size([1, 6144])
+ batch tensor: tokens torch.Size([1, 49152])
+ batch tensor: labels torch.Size([1, 49152])
+ batch tensor: loss_mask torch.Size([1, 49152])
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
+ batch tensor: position_ids torch.Size([1, 49152])
+ batch tensor after cp: tokens torch.Size([1, 6144])
+ batch tensor after cp: labels torch.Size([1, 6144])
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
+ batch tensor after cp: position_ids torch.Size([1, 6144])
+ batch tensor: tokens torch.Size([1, 49152])
+ batch tensor: labels torch.Size([1, 49152])
+ batch tensor: loss_mask torch.Size([1, 49152])
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
+ batch tensor: position_ids torch.Size([1, 49152])
+ batch tensor after cp: tokens torch.Size([1, 6144])
+ batch tensor after cp: labels torch.Size([1, 6144])
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
+ batch tensor after cp: position_ids torch.Size([1, 6144])
+ batch tensor: tokens torch.Size([1, 49152])
+ batch tensor: labels torch.Size([1, 49152])
+ batch tensor: loss_mask torch.Size([1, 49152])
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
+ batch tensor: position_ids torch.Size([1, 49152])
+ batch tensor after cp: tokens torch.Size([1, 6144])
+ batch tensor after cp: labels torch.Size([1, 6144])
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
+ batch tensor after cp: position_ids torch.Size([1, 6144])
+ batch tensor: tokens torch.Size([1, 49152])
+ batch tensor: labels torch.Size([1, 49152])
+ batch tensor: loss_mask torch.Size([1, 49152])
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
+ batch tensor: position_ids torch.Size([1, 49152])
+ batch tensor after cp: tokens torch.Size([1, 6144])
+ batch tensor after cp: labels torch.Size([1, 6144])
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
+ batch tensor after cp: position_ids torch.Size([1, 6144])
+ batch tensor: tokens torch.Size([1, 49152])
+ batch tensor: labels torch.Size([1, 49152])
+ batch tensor: loss_mask torch.Size([1, 49152])
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
+ batch tensor: position_ids torch.Size([1, 49152])
+ batch tensor after cp: tokens torch.Size([1, 6144])
+ batch tensor after cp: labels torch.Size([1, 6144])
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
+ batch tensor after cp: position_ids torch.Size([1, 6144])
+ batch tensor: tokens torch.Size([1, 49152])
+ batch tensor: labels torch.Size([1, 49152])
+ batch tensor: loss_mask torch.Size([1, 49152])
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
+ batch tensor: position_ids torch.Size([1, 49152])
+ batch tensor after cp: tokens torch.Size([1, 6144])
+ batch tensor after cp: labels torch.Size([1, 6144])
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
+ batch tensor after cp: position_ids torch.Size([1, 6144])
+ batch tensor: tokens torch.Size([1, 49152])
+ batch tensor: labels torch.Size([1, 49152])
+ batch tensor: loss_mask torch.Size([1, 49152])
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
+ batch tensor: position_ids torch.Size([1, 49152])
+ batch tensor after cp: tokens torch.Size([1, 6144])
+ batch tensor after cp: labels torch.Size([1, 6144])
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
+ batch tensor after cp: position_ids torch.Size([1, 6144])
+ Start exporting trace 10
+ Done exporting trace 10
+ WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+ (min, max) time across ranks (ms):
+     evaluate .......................................: (17500.76, 17506.43)
+ ----------------------------------------------------------------------------------------------------------------
+  validation loss at iteration 10 on validation set | lm loss value: 1.070103E+01 | lm loss PPL: 4.440147E+04 |
+ ----------------------------------------------------------------------------------------------------------------
+ WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+ WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
+ Evaluating on 1 samples
+ Evaluating iter 1/1
29090
+ batch tensor: tokens torch.Size([1, 49152])
29091
+ batch tensor: labels torch.Size([1, 49152])
29092
+ batch tensor: loss_mask torch.Size([1, 49152])
29093
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29094
+ batch tensor: position_ids torch.Size([1, 49152])
29095
+ batch tensor after cp: tokens torch.Size([1, 6144])
29096
+ batch tensor after cp: labels torch.Size([1, 6144])
29097
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29098
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29099
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29100
+ batch tensor: tokens torch.Size([1, 49152])
29101
+ batch tensor: labels torch.Size([1, 49152])
29102
+ batch tensor: loss_mask torch.Size([1, 49152])
29103
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29104
+ batch tensor: position_ids torch.Size([1, 49152])
29105
+ batch tensor after cp: tokens torch.Size([1, 6144])
29106
+ batch tensor after cp: labels torch.Size([1, 6144])
29107
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29108
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29109
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29110
+ batch tensor: tokens torch.Size([1, 49152])
29111
+ batch tensor: labels torch.Size([1, 49152])
29112
+ batch tensor: loss_mask torch.Size([1, 49152])
29113
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29114
+ batch tensor: position_ids torch.Size([1, 49152])
29115
+ batch tensor after cp: tokens torch.Size([1, 6144])
29116
+ batch tensor after cp: labels torch.Size([1, 6144])
29117
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29118
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29119
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29120
+ batch tensor: tokens torch.Size([1, 49152])
29121
+ batch tensor: labels torch.Size([1, 49152])
29122
+ batch tensor: loss_mask torch.Size([1, 49152])
29123
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29124
+ batch tensor: position_ids torch.Size([1, 49152])
29125
+ batch tensor after cp: tokens torch.Size([1, 6144])
29126
+ batch tensor after cp: labels torch.Size([1, 6144])
29127
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29128
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29129
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29130
+ batch tensor: tokens torch.Size([1, 49152])
29131
+ batch tensor: labels torch.Size([1, 49152])
29132
+ batch tensor: loss_mask torch.Size([1, 49152])
29133
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29134
+ batch tensor: position_ids torch.Size([1, 49152])
29135
+ batch tensor after cp: tokens torch.Size([1, 6144])
29136
+ batch tensor after cp: labels torch.Size([1, 6144])
29137
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29138
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29139
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29140
+ batch tensor: tokens torch.Size([1, 49152])
29141
+ batch tensor: labels torch.Size([1, 49152])
29142
+ batch tensor: loss_mask torch.Size([1, 49152])
29143
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29144
+ batch tensor: position_ids torch.Size([1, 49152])
29145
+ batch tensor after cp: tokens torch.Size([1, 6144])
29146
+ batch tensor after cp: labels torch.Size([1, 6144])
29147
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29148
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29149
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29150
+ batch tensor: tokens torch.Size([1, 49152])
29151
+ batch tensor: labels torch.Size([1, 49152])
29152
+ batch tensor: loss_mask torch.Size([1, 49152])
29153
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29154
+ batch tensor: position_ids torch.Size([1, 49152])
29155
+ batch tensor after cp: tokens torch.Size([1, 6144])
29156
+ batch tensor after cp: labels torch.Size([1, 6144])
29157
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29158
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29159
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29160
+ batch tensor: tokens torch.Size([1, 49152])
29161
+ batch tensor: labels torch.Size([1, 49152])
29162
+ batch tensor: loss_mask torch.Size([1, 49152])
29163
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29164
+ batch tensor: position_ids torch.Size([1, 49152])
29165
+ batch tensor after cp: tokens torch.Size([1, 6144])
29166
+ batch tensor after cp: labels torch.Size([1, 6144])
29167
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29168
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29169
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29170
+ batch tensor: tokens torch.Size([1, 49152])
29171
+ batch tensor: labels torch.Size([1, 49152])
29172
+ batch tensor: loss_mask torch.Size([1, 49152])
29173
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29174
+ batch tensor: position_ids torch.Size([1, 49152])
29175
+ batch tensor after cp: tokens torch.Size([1, 6144])
29176
+ batch tensor after cp: labels torch.Size([1, 6144])
29177
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29178
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29179
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29180
+ batch tensor: tokens torch.Size([1, 49152])
29181
+ batch tensor: labels torch.Size([1, 49152])
29182
+ batch tensor: loss_mask torch.Size([1, 49152])
29183
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29184
+ batch tensor: position_ids torch.Size([1, 49152])
29185
+ batch tensor after cp: tokens torch.Size([1, 6144])
29186
+ batch tensor after cp: labels torch.Size([1, 6144])
29187
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29188
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29189
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29190
+ batch tensor: tokens torch.Size([1, 49152])
29191
+ batch tensor: labels torch.Size([1, 49152])
29192
+ batch tensor: loss_mask torch.Size([1, 49152])
29193
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29194
+ batch tensor: position_ids torch.Size([1, 49152])
29195
+ batch tensor after cp: tokens torch.Size([1, 6144])
29196
+ batch tensor after cp: labels torch.Size([1, 6144])
29197
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29198
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29199
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29200
+ batch tensor: tokens torch.Size([1, 49152])
29201
+ batch tensor: labels torch.Size([1, 49152])
29202
+ batch tensor: loss_mask torch.Size([1, 49152])
29203
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29204
+ batch tensor: position_ids torch.Size([1, 49152])
29205
+ batch tensor after cp: tokens torch.Size([1, 6144])
29206
+ batch tensor after cp: labels torch.Size([1, 6144])
29207
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29208
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29209
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29210
+ batch tensor: tokens torch.Size([1, 49152])
29211
+ batch tensor: labels torch.Size([1, 49152])
29212
+ batch tensor: loss_mask torch.Size([1, 49152])
29213
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29214
+ batch tensor: position_ids torch.Size([1, 49152])
29215
+ batch tensor after cp: tokens torch.Size([1, 6144])
29216
+ batch tensor after cp: labels torch.Size([1, 6144])
29217
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29218
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29219
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29220
+ batch tensor: tokens torch.Size([1, 49152])
29221
+ batch tensor: labels torch.Size([1, 49152])
29222
+ batch tensor: loss_mask torch.Size([1, 49152])
29223
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29224
+ batch tensor: position_ids torch.Size([1, 49152])
29225
+ batch tensor after cp: tokens torch.Size([1, 6144])
29226
+ batch tensor after cp: labels torch.Size([1, 6144])
29227
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29228
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29229
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29230
+ batch tensor: tokens torch.Size([1, 49152])
29231
+ batch tensor: labels torch.Size([1, 49152])
29232
+ batch tensor: loss_mask torch.Size([1, 49152])
29233
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29234
+ batch tensor: position_ids torch.Size([1, 49152])
29235
+ batch tensor after cp: tokens torch.Size([1, 6144])
29236
+ batch tensor after cp: labels torch.Size([1, 6144])
29237
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29238
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29239
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29240
+ batch tensor: tokens torch.Size([1, 49152])
29241
+ batch tensor: labels torch.Size([1, 49152])
29242
+ batch tensor: loss_mask torch.Size([1, 49152])
29243
+ batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
29244
+ batch tensor: position_ids torch.Size([1, 49152])
29245
+ batch tensor after cp: tokens torch.Size([1, 6144])
29246
+ batch tensor after cp: labels torch.Size([1, 6144])
29247
+ batch tensor after cp: loss_mask torch.Size([1, 6144])
29248
+ batch tensor after cp: attention_mask torch.Size([1, 1, 6144, 49152])
29249
+ batch tensor after cp: position_ids torch.Size([1, 6144])
29250
+ Start exporting trace 11
29251
+ Done exporting trace 11
29252
+ WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
29253
+ WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMode.DISABLED
29254
+ (min, max) time across ranks (ms):
29255
+ evaluate .......................................: (10344.77, 10347.15)
29256
+ ----------------------------------------------------------------------------------------------------------
29257
+ validation loss at iteration 10 on test set | lm loss value: 1.070103E+01 | lm loss PPL: 4.440147E+04 |
29258
+ ----------------------------------------------------------------------------------------------------------
Running ctx_length=65536, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=1
Cleaning up checkpoint directory: gpt-checkpoint
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 65536
TP_SIZE: 2
CP_SIZE: 8
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
--------------------------------
CTX_LENGTH: 65536
TP_SIZE: 2
CP_SIZE: 8
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: None, tensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
Number of virtual stages per pipeline stage: None
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
using torch.float16 for parameters ...
------------------------ arguments ------------------------
  account_for_embedding_in_pipeline_split ......... False
  account_for_loss_in_pipeline_split .............. False
  accumulate_allreduce_grads_in_fp32 .............. False
  adam_beta1 ...................................... 0.9
  adam_beta2 ...................................... 0.999
  adam_eps ........................................ 1e-08
  add_bias_linear ................................. True
  add_position_embedding .......................... True
  add_qkv_bias .................................... True
  adlr_autoresume ................................. False
  adlr_autoresume_interval ........................ 1000
  align_grad_reduce ............................... True
  align_param_gather .............................. False
  app_tag_run_name ................................ None
  app_tag_run_version ............................. 0.0.0
  apply_layernorm_1p .............................. False
  apply_query_key_layer_scaling ................... False
  apply_residual_connection_post_layernorm ........ False
  apply_rope_fusion ............................... False
  async_save ...................................... None
  async_tensor_model_parallel_allreduce ........... True
  attention_backend ............................... AttnBackend.auto
  attention_dropout ............................... 0.1
  attention_softmax_in_fp32 ....................... False
  auto_detect_ckpt_format ......................... False
  barrier_with_L1_time ............................ True
  bert_binary_head ................................ True
  bert_embedder_type .............................. megatron
  bert_load ....................................... None
  bf16 ............................................ False
  bias_dropout_fusion ............................. True
  bias_gelu_fusion ................................ True
  bias_swiglu_fusion .............................. True
  biencoder_projection_dim ........................ 0
  biencoder_shared_query_context_model ............ False
  block_data_path ................................. None
  calc_ft_timeouts ................................ False
  calculate_per_token_loss ........................ False
  check_for_large_grads ........................... False
  check_for_nan_in_loss_and_grad .................. False
  check_for_spiky_loss ............................ False
  check_weight_hash_across_dp_replicas_interval ... None
  ckpt_assume_constant_structure .................. False
  ckpt_convert_format ............................. None
  ckpt_convert_save ............................... None
  ckpt_convert_update_legacy_dist_opt_format ...... False
  ckpt_format ..................................... torch_dist
  ckpt_fully_parallel_load ........................ False
  ckpt_fully_parallel_save ........................ True
  ckpt_fully_parallel_save_deprecated ............. False
  ckpt_step ....................................... None
  classes_fraction ................................ 1.0
  clip_grad ....................................... 1.0
  clone_scatter_output_in_embedding ............... True
  config_logger_dir ...............................
  consumed_train_samples .......................... 0
  consumed_valid_samples .......................... 0
  context_parallel_size ........................... 8
  cp_comm_type .................................... ['p2p']
  create_attention_mask_in_dataloader ............. True
  cross_entropy_fusion_impl ....................... native
  cross_entropy_loss_fusion ....................... False
  cuda_graph_scope ................................ full
  cuda_graph_warmup_steps ......................... 3
  data_args_path .................................. None
  data_cache_path ................................. None
  data_parallel_random_init ....................... False
  data_parallel_sharding_strategy ................. no_shard
  data_parallel_size .............................. 1
  data_path ....................................... None
  data_per_class_fraction ......................... 1.0
  data_sharding ................................... True
  dataloader_type ................................. single
  ddp_average_in_collective ....................... False
  ddp_bucket_size ................................. None
  ddp_num_buckets ................................. None
  ddp_pad_buckets_for_high_nccl_busbw ............. False
  decoder_first_pipeline_num_layers ............... None
  decoder_last_pipeline_num_layers ................ None
  decoder_num_layers .............................. None
  decoder_seq_length .............................. None
  decoupled_lr .................................... None
  decoupled_min_lr ................................ None
  decrease_batch_size_if_needed ................... False
  defer_embedding_wgrad_compute ................... False
  deprecated_use_mcore_models ..................... False
  deterministic_mode .............................. False
  dino_bottleneck_size ............................ 256
  dino_freeze_last_layer .......................... 1
  dino_head_hidden_size ........................... 2048
  dino_local_crops_number ......................... 10
  dino_local_img_size ............................. 96
  dino_norm_last_layer ............................ False
  dino_teacher_temp ............................... 0.07
  dino_warmup_teacher_temp ........................ 0.04
  dino_warmup_teacher_temp_epochs ................. 30
  disable_bf16_reduced_precision_matmul ........... False
  disable_mamba_mem_eff_path ...................... False
  disable_straggler_on_startup .................... False
  dist_ckpt_format_deprecated ..................... None
  dist_ckpt_strictness ............................ assume_ok_unexpected
  distribute_saved_activations .................... False
  distributed_backend ............................. nccl
  distributed_timeout_minutes ..................... 10
  embedding_path .................................. None
  empty_unused_memory_level ....................... 0
  enable_cuda_graph ............................... False
  enable_ft_package ............................... False
  enable_gloo_process_groups ...................... True
  enable_msc ...................................... True
  enable_one_logger ............................... True
  encoder_num_layers .............................. 2
  encoder_pipeline_model_parallel_size ............ 0
  encoder_seq_length .............................. 65536
  encoder_tensor_model_parallel_size .............. 0
  end_weight_decay ................................ 0.1
  eod_mask_loss ................................... False
  error_injection_rate ............................ 0
  error_injection_type ............................ transient_error
  eval_interval ................................... 16
  eval_iters ...................................... 1
  evidence_data_path .............................. None
  exit_duration_in_mins ........................... None
  exit_interval ................................... None
  exit_on_missing_checkpoint ...................... False
  exit_signal_handler ............................. False
  exp_avg_dtype ................................... torch.float32
  exp_avg_sq_dtype ................................ torch.float32
  expert_model_parallel_size ...................... 1
  expert_tensor_parallel_size ..................... 2
  external_cuda_graph ............................. False
  ffn_hidden_size ................................. 16384
  finetune ........................................ False
  first_last_layers_bf16 .......................... False
  flash_decode .................................... False
  fp16 ............................................ True
  fp16_lm_cross_entropy ........................... False
  fp32_residual_connection ........................ False
  fp8 ............................................. None
  fp8_amax_compute_algo ........................... most_recent
  fp8_amax_history_len ............................ 1
  fp8_interval .................................... 1
  fp8_margin ...................................... 0
  fp8_param_gather ................................ False
  fp8_recipe ...................................... delayed
  fp8_wgrad ....................................... True
  fsdp_double_buffer .............................. False
  global_batch_size ............................... 1
  grad_reduce_in_bf16 ............................. False
  gradient_accumulation_fusion .................... True
  gradient_reduce_div_fusion ...................... True
  group_query_attention ........................... True
  head_lr_mult .................................... 1.0
  heterogeneous_layers_config_encoded_json ........ None
  heterogeneous_layers_config_path ................ None
  hidden_dropout .................................. 0.1
  hidden_size ..................................... 4096
  hierarchical_context_parallel_sizes ............. None
  high_priority_stream_groups ..................... []
  hybrid_attention_ratio .......................... 0.0
  hybrid_mlp_ratio ................................ 0.0
  hybrid_override_pattern ......................... None
  hysteresis ...................................... 2
  ict_head_size ................................... None
  ict_load ........................................ None
  img_h ........................................... 224
  img_w ........................................... 224
  indexer_batch_size .............................. 128
  indexer_log_interval ............................ 1000
  inference_batch_times_seqlen_threshold .......... -1
  inference_dynamic_batching ...................... False
  inference_dynamic_batching_buffer_guaranteed_fraction 0.2
  inference_dynamic_batching_buffer_overflow_factor None
  inference_dynamic_batching_buffer_size_gb ....... 40.0
  inference_dynamic_batching_chunk_size ........... 256
  inference_dynamic_batching_max_requests_override None
  inference_dynamic_batching_max_tokens_override .. None
  inference_max_batch_size ........................ 8
  inference_max_seq_length ........................ 2560
  inference_rng_tracker ........................... False
  init_method_std ................................. 0.02
  init_method_xavier_uniform ...................... False
  init_model_with_meta_device ..................... False
  initial_loss_scale .............................. 4294967296
  inprocess_active_world_size ..................... 16
  inprocess_barrier_timeout ....................... 120
  inprocess_completion_timeout .................... 120
  inprocess_empty_cuda_cache ...................... False
  inprocess_granularity ........................... node
  inprocess_hard_timeout .......................... 90
  inprocess_heartbeat_interval .................... 30
  inprocess_heartbeat_timeout ..................... 60
  inprocess_last_call_wait ........................ 1
  inprocess_max_iterations ........................ None
  inprocess_monitor_process_interval .............. 1.0
  inprocess_monitor_thread_interval ............... 1.0
  inprocess_progress_watchdog_interval ............ 1.0
  inprocess_restart ............................... False
  inprocess_soft_timeout .......................... 60
  inprocess_termination_grace_time ................ 1
  is_hybrid_model ................................. False
  iter_per_epoch .................................. 1250
  iterations_to_skip .............................. []
  keep_fp8_transpose_cache_when_using_custom_fsdp . False
  kv_channels ..................................... 64
  kv_lora_rank .................................... 32
  lazy_mpu_init ................................... None
  load ............................................ gpt-checkpoint
  load_model_opt_format ........................... False
  local_rank ...................................... 0
  log_interval .................................... 1
  log_loss_scale_to_tensorboard ................... True
  log_memory_to_tensorboard ....................... False
  log_num_zeros_in_grad ........................... False
  log_params_norm ................................. False
  log_progress .................................... False
  log_straggler ................................... False
  log_throughput .................................. False
  log_timers_to_tensorboard ....................... False
  log_validation_ppl_to_tensorboard ............... False
  log_world_size_to_tensorboard ................... False
  logging_level ................................... 0
  loss_scale ...................................... None
  loss_scale_window ............................... 1000
  lr .............................................. 0.0005
  lr_decay_iters .................................. 150000
  lr_decay_samples ................................ None
  lr_decay_style .................................. cosine
  lr_warmup_fraction .............................. None
  lr_warmup_init .................................. 0.0
  lr_warmup_iters ................................. 2
  lr_warmup_samples ............................... 0
  lr_wsd_decay_iters .............................. None
  lr_wsd_decay_samples ............................ None
  lr_wsd_decay_style .............................. exponential
  main_grads_dtype ................................ torch.float32
  main_params_dtype ............................... torch.float32
  make_vocab_size_divisible_by .................... 128
  mamba_head_dim .................................. 64
  mamba_num_groups ................................ 8
  mamba_num_heads ................................. None
  mamba_state_dim ................................. 128
  manual_gc ....................................... False
  manual_gc_eval .................................. True
  manual_gc_interval .............................. 0
  mask_factor ..................................... 1.0
  mask_prob ....................................... 0.15
  mask_type ....................................... random
  masked_softmax_fusion ........................... True
  max_position_embeddings ......................... 65536
  max_tokens_to_oom ............................... 12000
  memory_snapshot_path ............................ snapshot.pickle
  merge_file ...................................... merges.txt
  micro_batch_size ................................ 1
  microbatch_group_size_per_vp_stage .............. None
  mid_level_dataset_surplus ....................... 0.005
  min_loss_scale .................................. 1.0
  min_lr .......................................... 0.0
  mlp_chunks_for_prefill .......................... 1
  mmap_bin_files .................................. True
  mock_data ....................................... True
  moe_apply_probs_on_input ........................ False
  moe_aux_loss_coeff .............................. 0.0
  moe_enable_deepep ............................... False
  moe_expert_capacity_factor ...................... None
  moe_extended_tp ................................. False
  moe_ffn_hidden_size ............................. None
  moe_grouped_gemm ................................ False
  moe_input_jitter_eps ............................ None
  moe_layer_freq .................................. 1
  moe_layer_recompute ............................. False
  moe_pad_expert_input_to_capacity ................ False
  moe_per_layer_logging ........................... False
  moe_permute_fusion .............................. False
  moe_router_bias_update_rate ..................... 0.001
  moe_router_dtype ................................ None
  moe_router_enable_expert_bias ................... False
  moe_router_force_load_balancing ................. False
  moe_router_group_topk ........................... None
  moe_router_load_balancing_type .................. aux_loss
  moe_router_num_groups ........................... None
  moe_router_padding_for_fp8 ...................... False
  moe_router_pre_softmax .......................... False
  moe_router_score_function ....................... softmax
  moe_router_topk ................................. 2
  moe_router_topk_scaling_factor .................. None
  moe_shared_expert_intermediate_size ............. None
  moe_shared_expert_overlap ....................... False
  moe_token_dispatcher_type ....................... allgather
  moe_token_drop_policy ........................... probs
  moe_use_legacy_grouped_gemm ..................... False
  moe_use_upcycling ............................... False
  moe_z_loss_coeff ................................ None
  mrope_section ................................... None
  mscale .......................................... 1.0
  mscale_all_dim .................................. 1.0
  mtp_loss_scaling_factor ......................... 0.1
  mtp_num_layers .................................. None
  multi_latent_attention .......................... False
  nccl_all_reduce_for_prefill ..................... False
  nccl_communicator_config_path ................... None
  nccl_ub ......................................... False
  no_load_optim ................................... None
  no_load_rng ..................................... None
  no_persist_layer_norm ........................... False
  no_rope_freq .................................... None
  no_save_optim ................................... None
  no_save_rng ..................................... None
  non_persistent_ckpt_type ........................ None
  non_persistent_global_ckpt_dir .................. None
  non_persistent_local_ckpt_algo .................. fully_parallel
  non_persistent_local_ckpt_dir ................... None
  non_persistent_save_interval .................... None
  norm_epsilon .................................... 1e-05
  normalization ................................... LayerNorm
  num_attention_heads ............................. 64
  num_channels .................................... 3
  num_classes ..................................... 1000
  num_dataset_builder_threads ..................... 1
  num_distributed_optimizer_instances ............. 1
  num_experts ..................................... None
  num_layers ...................................... 2
  num_layers_at_end_in_bf16 ....................... 1
  num_layers_at_start_in_bf16 ..................... 1
  num_layers_per_virtual_pipeline_stage ........... None
  num_query_groups ................................ 16
  num_virtual_stages_per_pipeline_rank ............ None
  num_workers ..................................... 2
  object_storage_cache_path ....................... None
  one_logger_async ................................ False
  one_logger_project .............................. megatron-lm
  one_logger_run_name ............................. None
  onnx_safe ....................................... None
  openai_gelu ..................................... False
  optimizer ....................................... adam
  optimizer_cpu_offload ........................... False
  optimizer_offload_fraction ...................... 1.0
  output_bert_embeddings .......................... False
  overlap_cpu_optimizer_d2h_h2d ................... False
  overlap_grad_reduce ............................. False
  overlap_p2p_comm ................................ False
  overlap_p2p_comm_warmup_flush ................... False
  overlap_param_gather ............................ False
  overlap_param_gather_with_optimizer_step ........ False
  override_opt_param_scheduler .................... False
  params_dtype .................................... torch.float16
  patch_dim ....................................... 16
  per_split_data_args_path ........................ None
  perform_initialization .......................... True
  pin_cpu_grads ................................... True
  pin_cpu_params .................................. True
  pipeline_model_parallel_comm_backend ............ None
  pipeline_model_parallel_size .................... 1
  pipeline_model_parallel_split_rank .............. None
  position_embedding_type ......................... learned_absolute
  pretrained_checkpoint ........................... None
  profile ......................................... False
  profile_ranks ................................... [0]
  profile_step_end ................................ 12
  profile_step_start .............................. 10
  q_lora_rank ..................................... None
  qk_head_dim ..................................... 128
  qk_l2_norm ...................................... False
  qk_layernorm .................................... False
  qk_pos_emb_head_dim ............................. 64
  query_in_block_prob ............................. 0.1
  rampup_batch_size ............................... None
  rank ............................................ 0
  recompute_granularity ........................... None
  recompute_method ................................ None
  recompute_modules ............................... None
  recompute_num_layers ............................ None
  record_memory_history ........................... False
  relative_attention_max_distance ................. 128
  relative_attention_num_buckets .................. 32
  replication ..................................... False
  replication_factor .............................. 2
  replication_jump ................................ None
  rerun_mode ...................................... disabled
  reset_attention_mask ............................ False
  reset_position_ids .............................. False
  result_rejected_tracker_filename ................ None
  retriever_report_topk_accuracies ................ []
  retriever_score_scaling ......................... False
  retriever_seq_length ............................ 256
  retro_add_retriever ............................. False
  retro_attention_gate ............................ 1
  retro_cyclic_train_iters ........................ None
  retro_encoder_attention_dropout ................. 0.1
  retro_encoder_hidden_dropout .................... 0.1
  retro_encoder_layers ............................ 2
  retro_num_neighbors ............................. 2
  retro_num_retrieved_chunks ...................... 2
  retro_project_dir ............................... None
  retro_verify_neighbor_count ..................... True
  rope_scaling_factor ............................. 8.0
  rotary_base ..................................... 10000
  rotary_interleaved .............................. False
  rotary_percent .................................. 1.0
  rotary_scaling_factor ........................... 1.0
  rotary_seq_len_interpolation_factor ............. None
  run_workload_inspector_server ................... False
  sample_rate ..................................... 1.0
  save ............................................ gpt-checkpoint
  save_interval ................................... 16
  scatter_gather_tensors_in_pipeline .............. True
  seed ............................................ 1234
  seq_length ...................................... 65536
  sequence_parallel ............................... False
  sgd_momentum .................................... 0.9
  short_seq_prob .................................. 0.1
  skip_train ...................................... False
  skipped_train_samples ........................... 0
  spec ............................................ None
  split ........................................... None
  squared_relu .................................... False
  start_weight_decay .............................. 0.1
  straggler_ctrlr_port ............................ 65535
  straggler_minmax_count .......................... 1
  suggested_communication_unit_size ............... None
  swiglu .......................................... False
  swin_backbone_type .............................. tiny
  symmetric_ar_type ............................... None
  te_rng_tracker .................................. False
  tensor_model_parallel_size ...................... 2
  tensorboard_dir ................................. tensorboard-logs/
  tensorboard_log_interval ........................ 1
  tensorboard_queue_size .......................... 1000
  test_data_path .................................. None
  test_mode ....................................... False
  tiktoken_num_special_tokens ..................... 1000
  tiktoken_pattern ................................ None
  tiktoken_special_tokens ......................... None
  timing_log_level ................................ 0
  timing_log_option ............................... minmax
  titles_data_path ................................ None
  tokenizer_model ................................. None
  tokenizer_type .................................. GPT2BPETokenizer
  torch_fsdp2_reshard_after_forward ............... True
  tp_comm_bootstrap_backend ....................... nccl
  tp_comm_bulk_dgrad .............................. True
  tp_comm_bulk_wgrad .............................. True
  tp_comm_overlap ................................. False
  tp_comm_overlap_ag .............................. True
  tp_comm_overlap_cfg ............................. None
  tp_comm_overlap_rs .............................. True
  tp_comm_overlap_rs_dgrad ........................ False
  tp_comm_split_ag ................................ True
  tp_comm_split_rs ................................ True
  train_data_path ................................. None
  train_iters ..................................... 10
  train_samples ................................... None
  train_sync_interval ............................. None
  transformer_impl ................................ transformer_engine
  transformer_pipeline_model_parallel_size ........ 1
  untie_embeddings_and_output_weights ............. False
  use_checkpoint_args ............................. False
  use_checkpoint_opt_param_scheduler .............. False
  use_cpu_initialization .......................... None
  use_custom_fsdp ................................. False
  use_dist_ckpt ................................... True
  use_dist_ckpt_deprecated ........................ False
29751
+ use_distributed_optimizer ....................... False
29752
+ use_flash_attn .................................. False
29753
+ use_legacy_models ............................... False
29754
+ use_mp_args_from_checkpoint_args ................ False
29755
+ use_one_sent_docs ............................... False
29756
+ use_persistent_ckpt_worker ...................... False
29757
+ use_precision_aware_optimizer ................... False
29758
+ use_pytorch_profiler ............................ False
29759
+ use_ring_exchange_p2p ........................... False
29760
+ use_rope_scaling ................................ False
29761
+ use_rotary_position_embeddings .................. False
29762
+ use_sharp ....................................... False
29763
+ use_tokenizer_model_from_checkpoint_args ........ True
29764
+ use_torch_fsdp2 ................................. False
29765
+ use_torch_optimizer_for_cpu_offload ............. False
29766
+ use_tp_pp_dp_mapping ............................ False
29767
+ v_head_dim ...................................... 128
29768
+ valid_data_path ................................. None
29769
+ variable_seq_lengths ............................ False
29770
+ virtual_pipeline_model_parallel_size ............ None
29771
+ vision_backbone_type ............................ vit
29772
+ vision_pretraining .............................. False
29773
+ vision_pretraining_type ......................... classify
29774
+ vocab_extra_ids ................................. 0
29775
+ vocab_file ...................................... vocab.json
29776
+ vocab_size ...................................... None
29777
+ wandb_exp_name ..................................
29778
+ wandb_project ...................................
29779
+ wandb_save_dir ..................................
29780
+ weight_decay .................................... 0.1
29781
+ weight_decay_incr_style ......................... constant
29782
+ wgrad_deferral_limit ............................ 0
29783
+ world_size ...................................... 16
29784
+ yaml_cfg ........................................ None
29785
+ -------------------- end of arguments ---------------------
29786
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
29787
+ > building GPT2BPETokenizer tokenizer ...
29788
+ INFO:megatron.training.initialize:Setting logging level to 0
29789
+ > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
29790
+ INFO:megatron.training.initialize:Setting logging level to 0
29791
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
29792
+ > initializing torch distributed ...
29793
+ INFO:megatron.training.initialize:Setting logging level to 0
29794
+ INFO:megatron.training.initialize:Setting logging level to 0
29795
+ INFO:megatron.training.initialize:Setting logging level to 0
29796
+ INFO:megatron.training.initialize:Setting logging level to 0
29797
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
29798
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
29799
+ INFO:megatron.training.initialize:Setting logging level to 0
29800
+ INFO:megatron.training.initialize:Setting logging level to 0
29801
+ INFO:megatron.training.initialize:Setting logging level to 0
29802
+ INFO:megatron.training.initialize:Setting logging level to 0
29803
+ > initialized tensor model parallel with size 2
29804
+ > initialized pipeline model parallel with size 1
29805
+ > setting random seeds to 1234 ...
29806
+ > compiling dataset index builder ...
29807
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
29808
+ make: Nothing to be done for 'default'.
29809
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
29810
+ >>> done with dataset index builder. Compilation time: 0.043 seconds
29811
+ WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
29812
+ > compiling and loading fused kernels ...
attnserver.run_attnserver.slurm.sh.343238.err.log CHANGED
@@ -6502,3 +6502,32 @@ W0621 21:56:13.850000 3515598 site-packages/torch/distributed/run.py:766] ******
  warnings.warn(
  /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
+ [rank0]: Traceback (most recent call last):
+ [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank0]:     pretrain(
+ [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
+ [rank0]:     save_checkpoint(
+ [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
+ [rank0]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
+ [rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
+ [rank0]:     common_strategy.save_common(state_dict, checkpoint_dir)
+ [rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
+ [rank0]:     torch.save(common_state_dict, path)
+ [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
+ [rank0]:     with _open_zipfile_writer(f) as opened_zipfile:
+ [rank0]:          ^^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
+ [rank0]:     return container(name_or_buffer)
+ [rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
+ [rank0]:     torch._C.PyTorchFileWriter(
+ [rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
+ [rank0]:[W621 21:59:24.782079410 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ W0621 21:59:31.544000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515671 closing signal SIGTERM
+ W0621 21:59:31.547000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515672 closing signal SIGTERM
+ W0621 21:59:31.555000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515673 closing signal SIGTERM
+ W0621 21:59:31.558000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515674 closing signal SIGTERM
+ W0621 21:59:31.561000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515675 closing signal SIGTERM
+ W0621 21:59:31.584000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515676 closing signal SIGTERM
+ W0621 21:59:31.588000 3515598 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3515677 closing signal SIGTERM
attnserver.run_attnserver.slurm.sh.343238.out.log CHANGED
@@ -22630,3 +22630,696 @@ batch tensor after cp: position_ids torch.Size([2, 10240])
  Start exporting trace 5
  Done exporting trace 5
  [2025-06-21 21:58:10] iteration        6/      10 | consumed samples: 6 | elapsed time per iteration (ms): 8301.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ Start exporting trace 6
+ Done exporting trace 6
+ [2025-06-21 21:58:18] iteration        7/      10 | consumed samples: 7 | elapsed time per iteration (ms): 8399.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
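Editorial note on the shape lines above: each rank's 81920-token sequence shrinks to a 10240-token shard after context parallelism, consistent with a context-parallel size of 8, while the attention mask keeps the full 81920 key dimension because each query shard still attends over the whole sequence. A minimal sketch of such a split (hypothetical helper `cp_shard_bounds`; Megatron's actual CP sharding interleaves chunks to balance causal attention, it is not a plain contiguous split):

```python
def cp_shard_bounds(seq_len, cp_size, cp_rank):
    """Return the [start, end) token range kept by `cp_rank` under a simple
    contiguous context-parallel split of the sequence dimension."""
    assert seq_len % cp_size == 0, "sequence must divide evenly across CP ranks"
    chunk = seq_len // cp_size
    return cp_rank * chunk, (cp_rank + 1) * chunk
```

With seq_len=81920 and cp_size=8 every rank keeps 10240 tokens, matching the `[2, 10240]` shapes logged after cp.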
22796
+ batch tensor: tokens torch.Size([2, 81920])
22797
+ batch tensor: labels torch.Size([2, 81920])
22798
+ batch tensor: loss_mask torch.Size([2, 81920])
22799
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22800
+ batch tensor: position_ids torch.Size([2, 81920])
22801
+ batch tensor after cp: tokens torch.Size([2, 10240])
22802
+ batch tensor after cp: labels torch.Size([2, 10240])
22803
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22804
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22805
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22806
+ batch tensor: tokens torch.Size([2, 81920])
22807
+ batch tensor: labels torch.Size([2, 81920])
22808
+ batch tensor: loss_mask torch.Size([2, 81920])
22809
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22810
+ batch tensor: position_ids torch.Size([2, 81920])
22811
+ batch tensor after cp: tokens torch.Size([2, 10240])
22812
+ batch tensor after cp: labels torch.Size([2, 10240])
22813
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22814
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22815
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22816
+ batch tensor: tokens torch.Size([2, 81920])
22817
+ batch tensor: labels torch.Size([2, 81920])
22818
+ batch tensor: loss_mask torch.Size([2, 81920])
22819
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22820
+ batch tensor: position_ids torch.Size([2, 81920])
22821
+ batch tensor after cp: tokens torch.Size([2, 10240])
22822
+ batch tensor after cp: labels torch.Size([2, 10240])
22823
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22824
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22825
+ batch tensor: tokens torch.Size([2, 81920])
22826
+ batch tensor: labels torch.Size([2, 81920])
22827
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22828
+ batch tensor: loss_mask torch.Size([2, 81920])
22829
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22830
+ batch tensor: position_ids torch.Size([2, 81920])
22831
+ batch tensor after cp: tokens torch.Size([2, 10240])
22832
+ batch tensor after cp: labels torch.Size([2, 10240])
22833
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22834
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22835
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22836
+ batch tensor: tokens torch.Size([2, 81920])
22837
+ batch tensor: labels torch.Size([2, 81920])
22838
+ batch tensor: loss_mask torch.Size([2, 81920])
22839
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22840
+ batch tensor: position_ids torch.Size([2, 81920])
22841
+ batch tensor after cp: tokens torch.Size([2, 10240])
22842
+ batch tensor after cp: labels torch.Size([2, 10240])
22843
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22844
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22845
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22846
+ batch tensor: tokens torch.Size([2, 81920])
22847
+ batch tensor: labels torch.Size([2, 81920])
22848
+ batch tensor: loss_mask torch.Size([2, 81920])
22849
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22850
+ batch tensor: position_ids torch.Size([2, 81920])
22851
+ batch tensor after cp: tokens torch.Size([2, 10240])
22852
+ batch tensor after cp: labels torch.Size([2, 10240])
22853
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22854
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22855
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22856
+ batch tensor: tokens torch.Size([2, 81920])
22857
+ batch tensor: labels torch.Size([2, 81920])
22858
+ batch tensor: loss_mask torch.Size([2, 81920])
22859
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22860
+ batch tensor: position_ids torch.Size([2, 81920])
22861
+ batch tensor after cp: tokens torch.Size([2, 10240])
22862
+ batch tensor after cp: labels torch.Size([2, 10240])
22863
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22864
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22865
+ batch tensor: tokens torch.Size([2, 81920])
22866
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22867
+ batch tensor: labels torch.Size([2, 81920])
22868
+ batch tensor: loss_mask torch.Size([2, 81920])
22869
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22870
+ batch tensor: position_ids torch.Size([2, 81920])
22871
+ batch tensor after cp: tokens torch.Size([2, 10240])
22872
+ batch tensor after cp: labels torch.Size([2, 10240])
22873
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22874
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22875
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22876
+ batch tensor: tokens torch.Size([2, 81920])
22877
+ batch tensor: labels torch.Size([2, 81920])
22878
+ batch tensor: loss_mask torch.Size([2, 81920])
22879
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22880
+ batch tensor: position_ids torch.Size([2, 81920])
22881
+ batch tensor after cp: tokens torch.Size([2, 10240])
22882
+ batch tensor after cp: labels torch.Size([2, 10240])
22883
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22884
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22885
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22886
+ batch tensor: tokens torch.Size([2, 81920])
22887
+ batch tensor: labels torch.Size([2, 81920])
22888
+ batch tensor: loss_mask torch.Size([2, 81920])
22889
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22890
+ batch tensor: position_ids torch.Size([2, 81920])
22891
+ batch tensor after cp: tokens torch.Size([2, 10240])
22892
+ batch tensor after cp: labels torch.Size([2, 10240])
22893
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22894
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22895
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22896
+ batch tensor: tokens torch.Size([2, 81920])
22897
+ batch tensor: labels torch.Size([2, 81920])
22898
+ batch tensor: loss_mask torch.Size([2, 81920])
22899
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22900
+ batch tensor: position_ids torch.Size([2, 81920])
22901
+ batch tensor after cp: tokens torch.Size([2, 10240])
22902
+ batch tensor after cp: labels torch.Size([2, 10240])
22903
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
22904
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
22905
+ batch tensor after cp: position_ids torch.Size([2, 10240])
22906
+ batch tensor: tokens torch.Size([2, 81920])
22907
+ batch tensor: labels torch.Size([2, 81920])
22908
+ batch tensor: loss_mask torch.Size([2, 81920])
22909
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
22910
+ batch tensor: position_ids torch.Size([2, 81920])
22911
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ Start exporting trace 7
+ Done exporting trace 7
+ [2025-06-21 21:58:26] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 8378.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ Start exporting trace 8
+ Done exporting trace 8
+ [2025-06-21 21:58:35] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 8393.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ batch tensor: tokens torch.Size([2, 81920])
+ batch tensor: labels torch.Size([2, 81920])
+ batch tensor: loss_mask torch.Size([2, 81920])
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
+ batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 10240])
+ batch tensor after cp: labels torch.Size([2, 10240])
+ batch tensor after cp: loss_mask torch.Size([2, 10240])
+ batch tensor after cp: attention_mask torch.Size([2, 1, 10240, 81920])
+ batch tensor after cp: position_ids torch.Size([2, 10240])
+ Start exporting trace 9
+ Done exporting trace 9
+ [2025-06-21 21:58:43] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 8070.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+ [after training is done] datetime: 2025-06-21 21:58:43
+ saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
+ DEBUG:megatron.training.checkpointing:rank: 13, takes 0.03457331657409668 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 9, takes 0.03459668159484863 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 15, takes 0.03460335731506348 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 11, takes 0.03466534614562988 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 8, takes 0.0349271297454834 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 12, takes 0.03521132469177246 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 5, takes 0.037848711013793945 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 1, takes 0.03790855407714844 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 0, takes 0.03832292556762695 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 2, takes 0.038640737533569336 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 6, takes 0.03871750831604004 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 14, takes 0.03852057456970215 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 7, takes 0.03897690773010254 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 10, takes 0.042994022369384766 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 4, takes 0.045691728591918945 to prepare state dict for ckpt
+ DEBUG:megatron.training.checkpointing:rank: 3, takes 0.046314239501953125 to prepare state dict for ckpt
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(413138944), 0), (np.int64(206569472), 1), (np.int64(184549376), 2), (np.int64(184549376), 3), (np.int64(167839744), 4), (np.int64(167839744), 5), (np.int64(176160768), 6), (np.int64(176160768), 7)]
attnserver.run_attnserver.slurm.sh.343239.err.log CHANGED
@@ -639,3 +639,457 @@ W0621 21:48:34.474000 792627 site-packages/torch/distributed/run.py:766] *******
  warnings.warn(
  /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
642
+ [rank4]:[E621 21:59:05.835457414 ProcessGroupNCCL.cpp:632] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600012 milliseconds before timing out.
643
+ [rank4]:[E621 21:59:05.838947487 ProcessGroupNCCL.cpp:2268] [PG ID 0 PG GUID 0(default_pg) Rank 4] failure detected by watchdog at work sequence id: 8 PG status: last enqueued work: 8, last completed work: 7
644
+ [rank4]:[E621 21:59:05.838973444 ProcessGroupNCCL.cpp:670] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
645
+ [rank4]:[E621 21:59:05.839017292 ProcessGroupNCCL.cpp:2103] [PG ID 0 PG GUID 0(default_pg) Rank 4] First PG on this rank to signal dumping.
646
+ [rank6]:[E621 21:59:05.841279436 ProcessGroupNCCL.cpp:632] [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600040 milliseconds before timing out.
647
+ [rank10]:[E621 21:59:05.263170673 ProcessGroupNCCL.cpp:1682] [PG ID 0 PG GUID 0(default_pg) Rank 10] Observed flight recorder dump signal from another rank via TCPStore.
648
+ [rank14]:[E621 21:59:05.263188285 ProcessGroupNCCL.cpp:1682] [PG ID 0 PG GUID 0(default_pg) Rank 14] Observed flight recorder dump signal from another rank via TCPStore.
649
+ [rank6]:[E621 21:59:05.841893069 ProcessGroupNCCL.cpp:2268] [PG ID 0 PG GUID 0(default_pg) Rank 6] failure detected by watchdog at work sequence id: 8 PG status: last enqueued work: 8, last completed work: 7
650
+ [rank6]:[E621 21:59:05.841907689 ProcessGroupNCCL.cpp:670] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
651
+ [rank10]:[E621 21:59:05.263518632 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 10] Received a dump signal due to a collective timeout from rank 6 and we will try our best to dump the debug info. Last enqueued NCCL work: 7, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
652
+ [rank6]:[E621 21:59:05.841950398 ProcessGroupNCCL.cpp:2103] [PG ID 0 PG GUID 0(default_pg) Rank 6] First PG on this rank to signal dumping.
653
+ [rank14]:[E621 21:59:05.263527176 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 14] Received a dump signal due to a collective timeout from rank 6 and we will try our best to dump the debug info. Last enqueued NCCL work: 7, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank14]:[E621 21:59:05.265583056 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 14] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank10]:[E621 21:59:05.265584315 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 10] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank3]:[E621 21:59:05.847201991 ProcessGroupNCCL.cpp:632] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600020 milliseconds before timing out.
+ [rank3]:[E621 21:59:05.847789234 ProcessGroupNCCL.cpp:2268] [PG ID 0 PG GUID 0(default_pg) Rank 3] failure detected by watchdog at work sequence id: 8 PG status: last enqueued work: 8, last completed work: 7
+ [rank3]:[E621 21:59:05.847813647 ProcessGroupNCCL.cpp:670] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
+ [rank3]:[E621 21:59:05.847852370 ProcessGroupNCCL.cpp:2103] [PG ID 0 PG GUID 0(default_pg) Rank 3] First PG on this rank to signal dumping.
+ [rank1]:[E621 21:59:05.861509950 ProcessGroupNCCL.cpp:632] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600041 milliseconds before timing out.
+ [rank1]:[E621 21:59:05.862185767 ProcessGroupNCCL.cpp:2268] [PG ID 0 PG GUID 0(default_pg) Rank 1] failure detected by watchdog at work sequence id: 8 PG status: last enqueued work: 8, last completed work: 7
+ [rank1]:[E621 21:59:05.862201821 ProcessGroupNCCL.cpp:670] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
+ [rank1]:[E621 21:59:05.862247317 ProcessGroupNCCL.cpp:2103] [PG ID 0 PG GUID 0(default_pg) Rank 1] First PG on this rank to signal dumping.
+ [rank7]:[E621 21:59:05.874078529 ProcessGroupNCCL.cpp:632] [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600051 milliseconds before timing out.
+ [rank7]:[E621 21:59:05.874698822 ProcessGroupNCCL.cpp:2268] [PG ID 0 PG GUID 0(default_pg) Rank 7] failure detected by watchdog at work sequence id: 8 PG status: last enqueued work: 8, last completed work: 7
+ [rank7]:[E621 21:59:05.874714009 ProcessGroupNCCL.cpp:670] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
+ [rank7]:[E621 21:59:05.874751064 ProcessGroupNCCL.cpp:2103] [PG ID 0 PG GUID 0(default_pg) Rank 7] First PG on this rank to signal dumping.
+ [rank0]:[E621 21:59:05.886110614 ProcessGroupNCCL.cpp:632] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600063 milliseconds before timing out.
+ [rank0]:[E621 21:59:05.886715752 ProcessGroupNCCL.cpp:2268] [PG ID 0 PG GUID 0(default_pg) Rank 0] failure detected by watchdog at work sequence id: 8 PG status: last enqueued work: 8, last completed work: 7
+ [rank0]:[E621 21:59:05.886729098 ProcessGroupNCCL.cpp:670] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
+ [rank0]:[E621 21:59:05.886768276 ProcessGroupNCCL.cpp:2103] [PG ID 0 PG GUID 0(default_pg) Rank 0] First PG on this rank to signal dumping.
+ [rank2]:[E621 21:59:05.904074971 ProcessGroupNCCL.cpp:632] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600077 milliseconds before timing out.
+ [rank2]:[E621 21:59:05.904682497 ProcessGroupNCCL.cpp:2268] [PG ID 0 PG GUID 0(default_pg) Rank 2] failure detected by watchdog at work sequence id: 8 PG status: last enqueued work: 8, last completed work: 7
+ [rank2]:[E621 21:59:05.904696114 ProcessGroupNCCL.cpp:670] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
+ [rank2]:[E621 21:59:05.904736894 ProcessGroupNCCL.cpp:2103] [PG ID 0 PG GUID 0(default_pg) Rank 2] First PG on this rank to signal dumping.
+ [rank5]:[E621 21:59:05.919745642 ProcessGroupNCCL.cpp:632] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600081 milliseconds before timing out.
+ [rank5]:[E621 21:59:05.920397850 ProcessGroupNCCL.cpp:2268] [PG ID 0 PG GUID 0(default_pg) Rank 5] failure detected by watchdog at work sequence id: 8 PG status: last enqueued work: 8, last completed work: 7
+ [rank5]:[E621 21:59:05.920414087 ProcessGroupNCCL.cpp:670] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
+ [rank5]:[E621 21:59:05.920456736 ProcessGroupNCCL.cpp:2103] [PG ID 0 PG GUID 0(default_pg) Rank 5] First PG on this rank to signal dumping.
+ [rank0]:[E621 21:59:06.177554073 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 0] Received a dump signal due to a collective timeout from this local rank and we will try our best to dump the debug info. Last enqueued NCCL work: 8, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank0]:[E621 21:59:06.177941327 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 0] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank2]: Traceback (most recent call last):
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank2]: pretrain(
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 805, in pretrain
+ [rank2]: model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1283, in setup_model_and_optimizer
+ [rank2]: args.iteration, args.num_floating_point_operations_so_far = load_checkpoint(
+ [rank2]: ^^^^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 1374, in load_checkpoint
+ [rank2]: state_dict, checkpoint_name, release, ckpt_type = _load_base_checkpoint(
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 959, in _load_base_checkpoint
+ [rank2]: ckpt_format = _get_checkpoint_format(checkpoint_name)
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 882, in _get_checkpoint_format
+ [rank2]: is_torch_ckpt = any([f.startswith("mp_rank_0") for f in os.listdir(checkpoint_name)])
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank2]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010'
+ [rank1]: Traceback (most recent call last):
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank1]: pretrain(
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 805, in pretrain
+ [rank1]: model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1283, in setup_model_and_optimizer
+ [rank1]: args.iteration, args.num_floating_point_operations_so_far = load_checkpoint(
+ [rank1]: ^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 1374, in load_checkpoint
+ [rank1]: state_dict, checkpoint_name, release, ckpt_type = _load_base_checkpoint(
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 959, in _load_base_checkpoint
+ [rank1]: ckpt_format = _get_checkpoint_format(checkpoint_name)
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 882, in _get_checkpoint_format
+ [rank1]: is_torch_ckpt = any([f.startswith("mp_rank_0") for f in os.listdir(checkpoint_name)])
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010'
+ [rank6]: Traceback (most recent call last):
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank6]: pretrain(
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 805, in pretrain
+ [rank6]: model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1283, in setup_model_and_optimizer
+ [rank6]: args.iteration, args.num_floating_point_operations_so_far = load_checkpoint(
+ [rank6]: ^^^^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 1374, in load_checkpoint
+ [rank6]: state_dict, checkpoint_name, release, ckpt_type = _load_base_checkpoint(
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 959, in _load_base_checkpoint
+ [rank6]: ckpt_format = _get_checkpoint_format(checkpoint_name)
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 882, in _get_checkpoint_format
+ [rank6]: is_torch_ckpt = any([f.startswith("mp_rank_0") for f in os.listdir(checkpoint_name)])
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank6]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010'
+ [rank0]: Traceback (most recent call last):
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank0]: pretrain(
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 805, in pretrain
+ [rank0]: model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1283, in setup_model_and_optimizer
+ [rank0]: args.iteration, args.num_floating_point_operations_so_far = load_checkpoint(
+ [rank0]: ^^^^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 1374, in load_checkpoint
+ [rank0]: state_dict, checkpoint_name, release, ckpt_type = _load_base_checkpoint(
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 959, in _load_base_checkpoint
+ [rank0]: ckpt_format = _get_checkpoint_format(checkpoint_name)
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 882, in _get_checkpoint_format
+ [rank0]: is_torch_ckpt = any([f.startswith("mp_rank_0") for f in os.listdir(checkpoint_name)])
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010'
+ [rank5]: Traceback (most recent call last):
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank5]: pretrain(
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 805, in pretrain
+ [rank5]: model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1283, in setup_model_and_optimizer
+ [rank5]: args.iteration, args.num_floating_point_operations_so_far = load_checkpoint(
+ [rank5]: ^^^^^^^^^^^^^^^^
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 1374, in load_checkpoint
+ [rank5]: state_dict, checkpoint_name, release, ckpt_type = _load_base_checkpoint(
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 959, in _load_base_checkpoint
+ [rank5]: ckpt_format = _get_checkpoint_format(checkpoint_name)
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 882, in _get_checkpoint_format
+ [rank5]: is_torch_ckpt = any([f.startswith("mp_rank_0") for f in os.listdir(checkpoint_name)])
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank5]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010'
+ [rank4]: Traceback (most recent call last):
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank4]: pretrain(
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 805, in pretrain
+ [rank4]: model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1283, in setup_model_and_optimizer
+ [rank4]: args.iteration, args.num_floating_point_operations_so_far = load_checkpoint(
+ [rank4]: ^^^^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 1374, in load_checkpoint
+ [rank4]: state_dict, checkpoint_name, release, ckpt_type = _load_base_checkpoint(
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 959, in _load_base_checkpoint
+ [rank4]: ckpt_format = _get_checkpoint_format(checkpoint_name)
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 882, in _get_checkpoint_format
+ [rank4]: is_torch_ckpt = any([f.startswith("mp_rank_0") for f in os.listdir(checkpoint_name)])
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank4]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010'
+ [rank7]: Traceback (most recent call last):
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank7]: pretrain(
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 805, in pretrain
+ [rank7]: model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1283, in setup_model_and_optimizer
+ [rank7]: args.iteration, args.num_floating_point_operations_so_far = load_checkpoint(
+ [rank7]: ^^^^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 1374, in load_checkpoint
+ [rank7]: state_dict, checkpoint_name, release, ckpt_type = _load_base_checkpoint(
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 959, in _load_base_checkpoint
+ [rank7]: ckpt_format = _get_checkpoint_format(checkpoint_name)
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 882, in _get_checkpoint_format
+ [rank7]: is_torch_ckpt = any([f.startswith("mp_rank_0") for f in os.listdir(checkpoint_name)])
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank7]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010'
+ [rank3]: Traceback (most recent call last):
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank3]: pretrain(
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 805, in pretrain
+ [rank3]: model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1283, in setup_model_and_optimizer
+ [rank3]: args.iteration, args.num_floating_point_operations_so_far = load_checkpoint(
+ [rank3]: ^^^^^^^^^^^^^^^^
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 1374, in load_checkpoint
+ [rank3]: state_dict, checkpoint_name, release, ckpt_type = _load_base_checkpoint(
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 959, in _load_base_checkpoint
+ [rank3]: ckpt_format = _get_checkpoint_format(checkpoint_name)
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 882, in _get_checkpoint_format
+ [rank3]: is_torch_ckpt = any([f.startswith("mp_rank_0") for f in os.listdir(checkpoint_name)])
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank3]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010'
+ [rank13]:[E621 21:59:06.773515282 ProcessGroupNCCL.cpp:1682] [PG ID 0 PG GUID 0(default_pg) Rank 13] Observed flight recorder dump signal from another rank via TCPStore.
+ [rank12]:[E621 21:59:06.773515209 ProcessGroupNCCL.cpp:1682] [PG ID 0 PG GUID 0(default_pg) Rank 12] Observed flight recorder dump signal from another rank via TCPStore.
+ [rank9]:[E621 21:59:06.773517421 ProcessGroupNCCL.cpp:1682] [PG ID 0 PG GUID 0(default_pg) Rank 9] Observed flight recorder dump signal from another rank via TCPStore.
+ [rank15]:[E621 21:59:06.773543543 ProcessGroupNCCL.cpp:1682] [PG ID 0 PG GUID 0(default_pg) Rank 15] Observed flight recorder dump signal from another rank via TCPStore.
+ [rank11]:[E621 21:59:06.773722841 ProcessGroupNCCL.cpp:1682] [PG ID 0 PG GUID 0(default_pg) Rank 11] Observed flight recorder dump signal from another rank via TCPStore.
+ [rank12]:[E621 21:59:06.773844997 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 12] Received a dump signal due to a collective timeout from rank 5 and we will try our best to dump the debug info. Last enqueued NCCL work: 7, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank13]:[E621 21:59:06.773861391 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 13] Received a dump signal due to a collective timeout from rank 5 and we will try our best to dump the debug info. Last enqueued NCCL work: 7, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank9]:[E621 21:59:06.773896952 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 9] Received a dump signal due to a collective timeout from rank 5 and we will try our best to dump the debug info. Last enqueued NCCL work: 7, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank15]:[E621 21:59:06.773906772 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 15] Received a dump signal due to a collective timeout from rank 5 and we will try our best to dump the debug info. Last enqueued NCCL work: 7, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank11]:[E621 21:59:06.774081176 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 11] Received a dump signal due to a collective timeout from rank 5 and we will try our best to dump the debug info. Last enqueued NCCL work: 7, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank12]:[E621 21:59:06.774136950 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 12] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank15]:[E621 21:59:06.774151067 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 15] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank13]:[E621 21:59:06.774199178 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 13] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank9]:[E621 21:59:06.774262266 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 9] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank11]:[E621 21:59:06.774298919 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 11] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank3]:[E621 21:59:06.537038033 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 3] Received a dump signal due to a collective timeout from this local rank and we will try our best to dump the debug info. Last enqueued NCCL work: 8, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank4]:[E621 21:59:06.537123897 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 4] Received a dump signal due to a collective timeout from this local rank and we will try our best to dump the debug info. Last enqueued NCCL work: 8, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank3]:[E621 21:59:06.537436762 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 3] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank4]:[E621 21:59:06.537476982 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 4] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank7]:[E621 21:59:06.555968394 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 7] Received a dump signal due to a collective timeout from this local rank and we will try our best to dump the debug info. Last enqueued NCCL work: 8, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank5]:[E621 21:59:06.555975449 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 5] Received a dump signal due to a collective timeout from this local rank and we will try our best to dump the debug info. Last enqueued NCCL work: 8, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank5]:[E621 21:59:06.556200093 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 5] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank6]:[E621 21:59:06.556283370 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 6] Received a dump signal due to a collective timeout from this local rank and we will try our best to dump the debug info. Last enqueued NCCL work: 8, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank1]:[E621 21:59:06.556319392 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 1] Received a dump signal due to a collective timeout from this local rank and we will try our best to dump the debug info. Last enqueued NCCL work: 8, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank7]:[E621 21:59:06.556402526 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 7] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank6]:[E621 21:59:06.556450223 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 6] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank2]:[E621 21:59:06.556490940 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 2] Received a dump signal due to a collective timeout from this local rank and we will try our best to dump the debug info. Last enqueued NCCL work: 8, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank1]:[E621 21:59:06.556540047 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 1] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank2]:[E621 21:59:06.556654526 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 2] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank8]:[E621 21:59:06.034060191 ProcessGroupNCCL.cpp:1682] [PG ID 0 PG GUID 0(default_pg) Rank 8] Observed flight recorder dump signal from another rank via TCPStore.
+ [rank8]:[E621 21:59:06.034569240 ProcessGroupNCCL.cpp:1743] [PG ID 0 PG GUID 0(default_pg) Rank 8] Received a dump signal due to a collective timeout from rank 5 and we will try our best to dump the debug info. Last enqueued NCCL work: 7, last completed NCCL work: 7.This is most likely caused by incorrect usages of collectives, e.g., wrong sizes used across ranks, the order of collectives is not same for all ranks or the scheduled collective, for some reason, didn't run. Additionally, this can be caused by GIL deadlock or other reasons such as network errors or bugs in the communications library (e.g. NCCL), etc.
+ [rank8]:[E621 21:59:06.034814979 ProcessGroupNCCL.cpp:1533] [PG ID 0 PG GUID 0(default_pg) Rank 8] ProcessGroupNCCL preparing to dump debug info. Include stack trace: 1
+ [rank3]:[E621 21:59:06.661078617 ProcessGroupNCCL.cpp:684] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
+ [rank3]:[E621 21:59:06.661102822 ProcessGroupNCCL.cpp:698] [Rank 3] To avoid data inconsistency, we are taking the entire process down.
+ [rank3]:[E621 21:59:06.662487744 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600020 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1493e77785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x14938d852a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x14938d8547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x14938d855ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x1493e72f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x1493e883bac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x1493e88cd850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ terminate called after throwing an instance of 'c10::DistBackendError'
+ what(): [PG ID 0 PG GUID 0(default_pg) Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600020 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1493e77785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x14938d852a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x14938d8547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x14938d855ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x1493e72f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x1493e883bac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x1493e88cd850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ Exception raised from ncclCommWatchdog at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1902 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1493e77785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: <unknown function> + 0x11b4a6e (0x14938d824a6e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: <unknown function> + 0xe07bed (0x14938d477bed in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: <unknown function> + 0xd3b6d (0x1493e72f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #4: <unknown function> + 0x94ac3 (0x1493e883bac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #5: <unknown function> + 0x126850 (0x1493e88cd850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ [rank0]:[E621 21:59:06.672471645 ProcessGroupNCCL.cpp:684] [Rank 0] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
+ [rank0]:[E621 21:59:06.672494662 ProcessGroupNCCL.cpp:698] [Rank 0] To avoid data inconsistency, we are taking the entire process down.
+ [rank0]:[E621 21:59:06.673713561 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600063 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1521eaf785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x152191052a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x1521910547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x152191055ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x1521eaaf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x1521ebfa4ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x1521ec036850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ terminate called after throwing an instance of 'c10::DistBackendError'
+ what(): [PG ID 0 PG GUID 0(default_pg) Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600063 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1521eaf785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x152191052a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x1521910547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x152191055ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x1521eaaf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x1521ebfa4ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x1521ec036850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ Exception raised from ncclCommWatchdog at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1902 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1521eaf785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: <unknown function> + 0x11b4a6e (0x152191024a6e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: <unknown function> + 0xe07bed (0x152190c77bed in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: <unknown function> + 0xd3b6d (0x1521eaaf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #4: <unknown function> + 0x94ac3 (0x1521ebfa4ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #5: <unknown function> + 0x126850 (0x1521ec036850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ [rank1]:[E621 21:59:06.741526297 ProcessGroupNCCL.cpp:684] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
+ [rank1]:[E621 21:59:06.741549384 ProcessGroupNCCL.cpp:698] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
+ [rank5]:[E621 21:59:06.741810790 ProcessGroupNCCL.cpp:684] [Rank 5] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
+ [rank5]:[E621 21:59:06.741826896 ProcessGroupNCCL.cpp:698] [Rank 5] To avoid data inconsistency, we are taking the entire process down.
+ [rank1]:[E621 21:59:06.742820649 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600041 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x154b5db785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x154b04052a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x154b040547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x154b04055ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x154af4019b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x154b5ef70ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x154b5f002850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ terminate called after throwing an instance of 'c10::DistBackendError'
+ [rank5]:[E621 21:59:06.742982716 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 5] Process group watchdog thread terminated with exception: [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600081 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1498fd1785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x1498a3252a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x1498a32547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x1498a3255ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x1498fccf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x1498fe250ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x1498fe2e2850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ terminate called after throwing an instance of 'c10::DistBackendError'
+ [rank4]:[E621 21:59:06.743223711 ProcessGroupNCCL.cpp:684] [Rank 4] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
+ [rank4]:[E621 21:59:06.743247496 ProcessGroupNCCL.cpp:698] [Rank 4] To avoid data inconsistency, we are taking the entire process down.
+ what(): [PG ID 0 PG GUID 0(default_pg) Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600041 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x154b5db785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x154b04052a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x154b040547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x154b04055ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x154af4019b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x154b5ef70ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x154b5f002850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ Exception raised from ncclCommWatchdog at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1902 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x154b5db785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: <unknown function> + 0x11b4a6e (0x154b04024a6e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: <unknown function> + 0xe07bed (0x154b03c77bed in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: <unknown function> + 0xd3b6d (0x154af4019b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #4: <unknown function> + 0x94ac3 (0x154b5ef70ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #5: <unknown function> + 0x126850 (0x154b5f002850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ what(): [PG ID 0 PG GUID 0(default_pg) Rank 5] Process group watchdog thread terminated with exception: [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600081 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1498fd1785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x1498a3252a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x1498a32547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x1498a3255ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x1498fccf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x1498fe250ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x1498fe2e2850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ Exception raised from ncclCommWatchdog at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1902 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1498fd1785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: <unknown function> + 0x11b4a6e (0x1498a3224a6e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: <unknown function> + 0xe07bed (0x1498a2e77bed in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: <unknown function> + 0xd3b6d (0x1498fccf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #4: <unknown function> + 0x94ac3 (0x1498fe250ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #5: <unknown function> + 0x126850 (0x1498fe2e2850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ [rank4]:[E621 21:59:06.744628998 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 4] Process group watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600012 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x147b791785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x147b1f252a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x147b1f2547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x147b1f255ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x147b78cf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x147b7a1b1ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x147b7a243850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ terminate called after throwing an instance of 'c10::DistBackendError'
+ what(): [PG ID 0 PG GUID 0(default_pg) Rank 4] Process group watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600012 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x147b791785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x147b1f252a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x147b1f2547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x147b1f255ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x147b78cf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x147b7a1b1ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x147b7a243850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ Exception raised from ncclCommWatchdog at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1902 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x147b791785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: <unknown function> + 0x11b4a6e (0x147b1f224a6e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: <unknown function> + 0xe07bed (0x147b1ee77bed in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: <unknown function> + 0xd3b6d (0x147b78cf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #4: <unknown function> + 0x94ac3 (0x147b7a1b1ac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #5: <unknown function> + 0x126850 (0x147b7a243850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ [rank7]:[E621 21:59:06.750515400 ProcessGroupNCCL.cpp:684] [Rank 7] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
+ [rank7]:[E621 21:59:06.750535519 ProcessGroupNCCL.cpp:698] [Rank 7] To avoid data inconsistency, we are taking the entire process down.
+ [rank7]:[E621 21:59:06.751926965 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 7] Process group watchdog thread terminated with exception: [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600051 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1478891785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x14782f652a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x14782f6547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x14782f655ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x14781f619b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x14788a4eaac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x14788a57c850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ terminate called after throwing an instance of 'c10::DistBackendError'
+ what(): [PG ID 0 PG GUID 0(default_pg) Rank 7] Process group watchdog thread terminated with exception: [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600051 milliseconds before timing out.
+ Exception raised from checkTimeout at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:635 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1478891785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x23d (0x14782f652a1d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0xc80 (0x14782f6547a0 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x14782f655ead in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #4: <unknown function> + 0xd3b6d (0x14781f619b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #5: <unknown function> + 0x94ac3 (0x14788a4eaac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #6: <unknown function> + 0x126850 (0x14788a57c850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ Exception raised from ncclCommWatchdog at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1902 (most recent call first):
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1478891785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
+ frame #1: <unknown function> + 0x11b4a6e (0x14782f624a6e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #2: <unknown function> + 0xe07bed (0x14782f277bed in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
+ frame #3: <unknown function> + 0xd3b6d (0x14781f619b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
+ frame #4: <unknown function> + 0x94ac3 (0x14788a4eaac3 in /lib/x86_64-linux-gnu/libc.so.6)
+ frame #5: <unknown function> + 0x126850 (0x14788a57c850 in /lib/x86_64-linux-gnu/libc.so.6)
+
+ W0621 21:59:06.954000 2138380 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2138450 closing signal SIGTERM
+ W0621 21:59:06.955000 2138380 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2138451 closing signal SIGTERM
+ W0621 21:59:06.955000 2138380 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2138452 closing signal SIGTERM
+ W0621 21:59:06.956000 2138380 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2138454 closing signal SIGTERM
+ W0621 21:59:06.956000 2138380 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2138455 closing signal SIGTERM
+ W0621 21:59:06.956000 2138380 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2138456 closing signal SIGTERM
+ W0621 21:59:06.957000 2138380 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2138457 closing signal SIGTERM
+ E0621 21:59:07.699000 2138380 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: -6) local_rank: 3 (pid: 2138453) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+ Traceback (most recent call last):
+ File "<frozen runpy>", line 198, in _run_module_as_main
+ File "<frozen runpy>", line 88, in _run_code
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
+ main()
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
+ return arg(*args, **kwargs)
+ ^^^^^^^^^^^^^^^^^^^^
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
+ launch(args)
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
+ run(args)
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
1073
+ elastic_launch(
1074
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
1075
+ return launch_agent(self._config, self._entrypoint, list(args))
1076
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1077
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
1078
+ raise ChildFailedError(
1079
+ torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
1080
+ ========================================================
1081
+ ./pretrain_gpt_profile.py FAILED
1082
+ --------------------------------------------------------
1083
+ Failures:
1084
+ <NO_OTHER_FAILURES>
1085
+ --------------------------------------------------------
1086
+ Root Cause (first observed failure):
1087
+ [0]:
1088
+ time : 2025-06-21_21:59:06
1089
+ host : fs-mbz-gpu-141
1090
+ rank : 3 (local_rank: 3)
1091
+ exitcode : -6 (pid: 2138453)
1092
+ error_file: <N/A>
1093
+ traceback : Signal 6 (SIGABRT) received by PID 2138453
1094
+ ========================================================
1095
+ + set +x
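For readers of this trace: the `exitcode: -6` reported by the elastic launcher follows the POSIX convention that a negative exit code is the negated number of the signal that killed the worker, so -6 means SIGABRT, matching the `Signal 6 (SIGABRT)` line above. A minimal stdlib sketch of that decoding (the helper name is illustrative, not part of torchrun):

```python
import signal

def decode_exitcode(code: int) -> str:
    """Map an elastic-launch worker exit code to a readable cause.

    Negative codes mean the child process was killed by that signal;
    non-negative codes are ordinary exit statuses.
    """
    if code < 0:
        return f"killed by signal {signal.Signals(-code).name}"
    return f"exited with status {code}"

print(decode_exitcode(-6))  # the failure reported in this log
```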
attnserver.run_attnserver.slurm.sh.343240.err.log ADDED
@@ -0,0 +1,236 @@
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ export _CE_M=
+ +++ _CE_M=
+ +++ export _CE_CONDA=
+ +++ _CE_CONDA=
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ '[' -z x ']'
+ ++ conda activate
+ ++ local cmd=activate
+ ++ case "$cmd" in
+ ++ __conda_activate activate
+ ++ '[' -n '' ']'
+ ++ local ask_conda
+ +++ PS1=
+ +++ __conda_exe shell.posix activate
+ +++ '[' -n '' ']'
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
+ ++ ask_conda='unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(base) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_SHLVL='\''1'\''
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ ++ eval 'unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(base) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_SHLVL='\''1'\''
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ +++ unset _CE_M
+ +++ unset _CE_CONDA
+ +++ PS1='(base) '
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ +++ export CONDA_SHLVL=1
+ +++ CONDA_SHLVL=1
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
+ +++ CONDA_PROMPT_MODIFIER='(base) '
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ ++ __conda_hashr
+ ++ '[' -n '' ']'
+ ++ '[' -n '' ']'
+ ++ hash -r
+ + conda activate junda-attnserver
+ + local cmd=activate
+ + case "$cmd" in
+ + __conda_activate activate junda-attnserver
+ + '[' -n '' ']'
+ + local ask_conda
+ ++ PS1='(base) '
+ ++ __conda_exe shell.posix activate junda-attnserver
+ ++ '[' -n '' ']'
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ + ask_conda='unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(junda-attnserver) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
+ export CONDA_SHLVL='\''2'\''
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ + eval 'unset _CE_M
+ unset _CE_CONDA
+ PS1='\''(junda-attnserver) '\''
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
+ export CONDA_SHLVL='\''2'\''
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ ++ unset _CE_M
+ ++ unset _CE_CONDA
+ ++ PS1='(junda-attnserver) '
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
+ ++ export CONDA_SHLVL=2
+ ++ CONDA_SHLVL=2
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ + __conda_hashr
+ + '[' -n '' ']'
+ + '[' -n '' ']'
+ + hash -r
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ + export PROF_TP_SIZE=2
+ + PROF_TP_SIZE=2
+ + export PROF_CP_SIZE=8
+ + PROF_CP_SIZE=8
+ + export PROF_BS=8
+ + PROF_BS=8
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ + export PROF_CTX_LENGTH=1024
+ + PROF_CTX_LENGTH=1024
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp2.cp8.bs8.json'
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp2.cp8.bs8.json' ']'
+ + echo 'Running ctx_length=1024, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8'
+ + srun bash ./attnserver.sh
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343240 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-239:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343240 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-239:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
131
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
132
+ and will be removed in future. Use torchrun.
133
+ Note that --use-env is set by default in torchrun.
134
+ If your script expects `--local-rank` argument to be set, please
135
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
136
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
137
+ further instructions
138
+
139
+ main()
140
+ W0621 21:58:47.051000 1958026 site-packages/torch/distributed/run.py:766]
141
+ W0621 21:58:47.051000 1958026 site-packages/torch/distributed/run.py:766] *****************************************
142
+ W0621 21:58:47.051000 1958026 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
143
+ W0621 21:58:47.051000 1958026 site-packages/torch/distributed/run.py:766] *****************************************
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
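The FutureWarning above advises migrating from the `--local-rank` argument to the `LOCAL_RANK` environment variable that torchrun (and `torch.distributed.launch` with `--use-env`) exports for each worker. A minimal stdlib sketch of that pattern (the helper name is illustrative, not a torch API):

```python
import os

def get_local_rank(default: int = 0) -> int:
    """Read the worker's local rank the way the deprecation notice
    recommends: from the LOCAL_RANK environment variable set by the
    launcher, falling back to a default for single-process runs."""
    return int(os.environ.get("LOCAL_RANK", default))
```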
+
+ main()
+ W0621 21:58:47.064000 1033650 site-packages/torch/distributed/run.py:766]
+ W0621 21:58:47.064000 1033650 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:58:47.064000 1033650 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:58:47.064000 1033650 site-packages/torch/distributed/run.py:766] *****************************************
+ [rank3]:[W621 21:59:10.986920489 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank1]:[W621 21:59:10.987444238 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank13]:[W621 21:59:10.117890005 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank7]:[W621 21:59:10.988655112 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank9]:[W621 21:59:10.118029109 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank11]:[W621 21:59:10.118039286 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank15]:[W621 21:59:10.118702525 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank5]:[W621 21:59:10.989377387 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank6]:[W621 21:59:10.993195910 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank2]:[W621 21:59:10.997045975 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank4]:[W621 21:59:10.997220851 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank12]:[W621 21:59:10.133479956 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank10]:[W621 21:59:10.133572956 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank14]:[W621 21:59:10.133682828 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank8]:[W621 21:59:10.215546371 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ [rank0]:[W621 21:59:10.121500277 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
attnserver.run_attnserver.slurm.sh.343240.out.log ADDED
@@ -0,0 +1,858 @@
+ Running ctx_length=1024, TP_SIZE=2, CP_SIZE=8, BATCH_SIZE=8
+ Cleaning up checkpoint directory: gpt-checkpoint
+ --------------------------------
+ CTX_LENGTH: 1024
+ TP_SIZE: 2
+ CP_SIZE: 8
+ CHECKPOINT_PATH: gpt-checkpoint
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+ --------------------------------
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+ Cleaning up checkpoint directory: gpt-checkpoint
+ --------------------------------
+ CTX_LENGTH: 1024
+ TP_SIZE: 2
+ CP_SIZE: 8
+ CHECKPOINT_PATH: gpt-checkpoint
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+ --------------------------------
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ using world size: 16, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: None, tensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
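As a sanity check on the layout reported above: Megatron requires the parallel sizes to multiply out to the world size, and here 1 (data) x 8 (context) x 2 (tensor) x 1 (pipeline) = 16 ranks, matching the 2 nodes x 8 processes launched earlier. A minimal sketch with the values copied from this log:

```python
# Parallel layout reported in the log; their product must equal
# the world size (2 nodes x 8 GPUs = 16 ranks).
data_parallel = 1
context_parallel = 8
tensor_parallel = 2
pipeline_parallel = 1
world_size = 16

assert data_parallel * context_parallel * tensor_parallel * pipeline_parallel == world_size
```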
31
+ Number of virtual stages per pipeline stage: None
32
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
33
+ using torch.float16 for parameters ...
34
+ ------------------------ arguments ------------------------
  account_for_embedding_in_pipeline_split ......... False
  account_for_loss_in_pipeline_split .............. False
  accumulate_allreduce_grads_in_fp32 .............. False
  adam_beta1 ...................................... 0.9
  adam_beta2 ...................................... 0.999
  adam_eps ........................................ 1e-08
  add_bias_linear ................................. True
  add_position_embedding .......................... True
  add_qkv_bias .................................... True
  adlr_autoresume ................................. False
  adlr_autoresume_interval ........................ 1000
  align_grad_reduce ............................... True
  align_param_gather .............................. False
  app_tag_run_name ................................ None
  app_tag_run_version ............................. 0.0.0
  apply_layernorm_1p .............................. False
  apply_query_key_layer_scaling ................... False
  apply_residual_connection_post_layernorm ........ False
  apply_rope_fusion ............................... False
  async_save ...................................... None
  async_tensor_model_parallel_allreduce ........... True
  attention_backend ............................... AttnBackend.auto
  attention_dropout ............................... 0.1
  attention_softmax_in_fp32 ....................... False
  auto_detect_ckpt_format ......................... False
  barrier_with_L1_time ............................ True
  bert_binary_head ................................ True
  bert_embedder_type .............................. megatron
  bert_load ....................................... None
  bf16 ............................................ False
  bias_dropout_fusion ............................. True
  bias_gelu_fusion ................................ True
  bias_swiglu_fusion .............................. True
  biencoder_projection_dim ........................ 0
  biencoder_shared_query_context_model ............ False
  block_data_path ................................. None
  calc_ft_timeouts ................................ False
  calculate_per_token_loss ........................ False
  check_for_large_grads ........................... False
  check_for_nan_in_loss_and_grad .................. False
  check_for_spiky_loss ............................ False
  check_weight_hash_across_dp_replicas_interval ... None
  ckpt_assume_constant_structure .................. False
  ckpt_convert_format ............................. None
  ckpt_convert_save ............................... None
  ckpt_convert_update_legacy_dist_opt_format ...... False
  ckpt_format ..................................... torch_dist
  ckpt_fully_parallel_load ........................ False
  ckpt_fully_parallel_save ........................ True
  ckpt_fully_parallel_save_deprecated ............. False
  ckpt_step ....................................... None
  classes_fraction ................................ 1.0
  clip_grad ....................................... 1.0
  clone_scatter_output_in_embedding ............... True
  config_logger_dir ...............................
  consumed_train_samples .......................... 0
  consumed_valid_samples .......................... 0
  context_parallel_size ........................... 8
  cp_comm_type .................................... ['p2p']
  create_attention_mask_in_dataloader ............. True
  cross_entropy_fusion_impl ....................... native
  cross_entropy_loss_fusion ....................... False
  cuda_graph_scope ................................ full
  cuda_graph_warmup_steps ......................... 3
  data_args_path .................................. None
  data_cache_path ................................. None
  data_parallel_random_init ....................... False
  data_parallel_sharding_strategy ................. no_shard
  data_parallel_size .............................. 1
  data_path ....................................... None
  data_per_class_fraction ......................... 1.0
  data_sharding ................................... True
  dataloader_type ................................. single
  ddp_average_in_collective ....................... False
  ddp_bucket_size ................................. None
  ddp_num_buckets ................................. None
  ddp_pad_buckets_for_high_nccl_busbw ............. False
  decoder_first_pipeline_num_layers ............... None
  decoder_last_pipeline_num_layers ................ None
  decoder_num_layers .............................. None
  decoder_seq_length .............................. None
  decoupled_lr .................................... None
  decoupled_min_lr ................................ None
  decrease_batch_size_if_needed ................... False
  defer_embedding_wgrad_compute ................... False
  deprecated_use_mcore_models ..................... False
  deterministic_mode .............................. False
  dino_bottleneck_size ............................ 256
  dino_freeze_last_layer .......................... 1
  dino_head_hidden_size ........................... 2048
  dino_local_crops_number ......................... 10
  dino_local_img_size ............................. 96
  dino_norm_last_layer ............................ False
  dino_teacher_temp ............................... 0.07
  dino_warmup_teacher_temp ........................ 0.04
  dino_warmup_teacher_temp_epochs ................. 30
  disable_bf16_reduced_precision_matmul ........... False
  disable_mamba_mem_eff_path ...................... False
  disable_straggler_on_startup .................... False
  dist_ckpt_format_deprecated ..................... None
  dist_ckpt_strictness ............................ assume_ok_unexpected
  distribute_saved_activations .................... False
  distributed_backend ............................. nccl
  distributed_timeout_minutes ..................... 10
  embedding_path .................................. None
  empty_unused_memory_level ....................... 0
  enable_cuda_graph ............................... False
  enable_ft_package ............................... False
  enable_gloo_process_groups ...................... True
  enable_msc ...................................... True
  enable_one_logger ............................... True
  encoder_num_layers .............................. 2
  encoder_pipeline_model_parallel_size ............ 0
  encoder_seq_length .............................. 1024
  encoder_tensor_model_parallel_size .............. 0
  end_weight_decay ................................ 0.1
  eod_mask_loss ................................... False
  error_injection_rate ............................ 0
  error_injection_type ............................ transient_error
  eval_interval ................................... 16
  eval_iters ...................................... 1
  evidence_data_path .............................. None
  exit_duration_in_mins ........................... None
  exit_interval ................................... None
  exit_on_missing_checkpoint ...................... False
  exit_signal_handler ............................. False
  exp_avg_dtype ................................... torch.float32
  exp_avg_sq_dtype ................................ torch.float32
  expert_model_parallel_size ...................... 1
  expert_tensor_parallel_size ..................... 2
  external_cuda_graph ............................. False
  ffn_hidden_size ................................. 16384
  finetune ........................................ False
  first_last_layers_bf16 .......................... False
  flash_decode .................................... False
  fp16 ............................................ True
  fp16_lm_cross_entropy ........................... False
  fp32_residual_connection ........................ False
  fp8 ............................................. None
  fp8_amax_compute_algo ........................... most_recent
  fp8_amax_history_len ............................ 1
  fp8_interval .................................... 1
  fp8_margin ...................................... 0
  fp8_param_gather ................................ False
  fp8_recipe ...................................... delayed
  fp8_wgrad ....................................... True
  fsdp_double_buffer .............................. False
  global_batch_size ............................... 1
  grad_reduce_in_bf16 ............................. False
  gradient_accumulation_fusion .................... True
  gradient_reduce_div_fusion ...................... True
  group_query_attention ........................... True
  head_lr_mult .................................... 1.0
  heterogeneous_layers_config_encoded_json ........ None
  heterogeneous_layers_config_path ................ None
  hidden_dropout .................................. 0.1
  hidden_size ..................................... 4096
  hierarchical_context_parallel_sizes ............. None
  high_priority_stream_groups ..................... []
  hybrid_attention_ratio .......................... 0.0
  hybrid_mlp_ratio ................................ 0.0
  hybrid_override_pattern ......................... None
  hysteresis ...................................... 2
  ict_head_size ................................... None
  ict_load ........................................ None
  img_h ........................................... 224
  img_w ........................................... 224
  indexer_batch_size .............................. 128
  indexer_log_interval ............................ 1000
  inference_batch_times_seqlen_threshold .......... -1
  inference_dynamic_batching ...................... False
  inference_dynamic_batching_buffer_guaranteed_fraction 0.2
  inference_dynamic_batching_buffer_overflow_factor None
  inference_dynamic_batching_buffer_size_gb ....... 40.0
  inference_dynamic_batching_chunk_size ........... 256
  inference_dynamic_batching_max_requests_override None
  inference_dynamic_batching_max_tokens_override .. None
  inference_max_batch_size ........................ 8
  inference_max_seq_length ........................ 2560
  inference_rng_tracker ........................... False
  init_method_std ................................. 0.02
  init_method_xavier_uniform ...................... False
  init_model_with_meta_device ..................... False
  initial_loss_scale .............................. 4294967296
  inprocess_active_world_size ..................... 16
  inprocess_barrier_timeout ....................... 120
  inprocess_completion_timeout .................... 120
  inprocess_empty_cuda_cache ...................... False
  inprocess_granularity ........................... node
  inprocess_hard_timeout .......................... 90
  inprocess_heartbeat_interval .................... 30
  inprocess_heartbeat_timeout ..................... 60
  inprocess_last_call_wait ........................ 1
  inprocess_max_iterations ........................ None
  inprocess_monitor_process_interval .............. 1.0
  inprocess_monitor_thread_interval ............... 1.0
  inprocess_progress_watchdog_interval ............ 1.0
  inprocess_restart ............................... False
  inprocess_soft_timeout .......................... 60
  inprocess_termination_grace_time ................ 1
  is_hybrid_model ................................. False
  iter_per_epoch .................................. 1250
  iterations_to_skip .............................. []
  keep_fp8_transpose_cache_when_using_custom_fsdp . False
  kv_channels ..................................... 64
  kv_lora_rank .................................... 32
  lazy_mpu_init ................................... None
  load ............................................ gpt-checkpoint
  load_model_opt_format ........................... False
  local_rank ...................................... 0
  log_interval .................................... 1
  log_loss_scale_to_tensorboard ................... True
  log_memory_to_tensorboard ....................... False
  log_num_zeros_in_grad ........................... False
  log_params_norm ................................. False
  log_progress .................................... False
  log_straggler ................................... False
  log_throughput .................................. False
  log_timers_to_tensorboard ....................... False
  log_validation_ppl_to_tensorboard ............... False
  log_world_size_to_tensorboard ................... False
  logging_level ................................... 0
  loss_scale ...................................... None
  loss_scale_window ............................... 1000
  lr .............................................. 0.0005
  lr_decay_iters .................................. 150000
  lr_decay_samples ................................ None
  lr_decay_style .................................. cosine
  lr_warmup_fraction .............................. None
  lr_warmup_init .................................. 0.0
  lr_warmup_iters ................................. 2
  lr_warmup_samples ............................... 0
  lr_wsd_decay_iters .............................. None
  lr_wsd_decay_samples ............................ None
  lr_wsd_decay_style .............................. exponential
  main_grads_dtype ................................ torch.float32
  main_params_dtype ............................... torch.float32
  make_vocab_size_divisible_by .................... 128
  mamba_head_dim .................................. 64
  mamba_num_groups ................................ 8
  mamba_num_heads ................................. None
  mamba_state_dim ................................. 128
  manual_gc ....................................... False
  manual_gc_eval .................................. True
  manual_gc_interval .............................. 0
  mask_factor ..................................... 1.0
  mask_prob ....................................... 0.15
  mask_type ....................................... random
  masked_softmax_fusion ........................... True
  max_position_embeddings ......................... 1024
  max_tokens_to_oom ............................... 12000
  memory_snapshot_path ............................ snapshot.pickle
  merge_file ...................................... merges.txt
  micro_batch_size ................................ 1
  microbatch_group_size_per_vp_stage .............. None
  mid_level_dataset_surplus ....................... 0.005
  min_loss_scale .................................. 1.0
  min_lr .......................................... 0.0
  mlp_chunks_for_prefill .......................... 1
  mmap_bin_files .................................. True
  mock_data ....................................... True
  moe_apply_probs_on_input ........................ False
  moe_aux_loss_coeff .............................. 0.0
  moe_enable_deepep ............................... False
  moe_expert_capacity_factor ...................... None
  moe_extended_tp ................................. False
  moe_ffn_hidden_size ............................. None
  moe_grouped_gemm ................................ False
  moe_input_jitter_eps ............................ None
  moe_layer_freq .................................. 1
  moe_layer_recompute ............................. False
  moe_pad_expert_input_to_capacity ................ False
  moe_per_layer_logging ........................... False
  moe_permute_fusion .............................. False
  moe_router_bias_update_rate ..................... 0.001
  moe_router_dtype ................................ None
  moe_router_enable_expert_bias ................... False
  moe_router_force_load_balancing ................. False
  moe_router_group_topk ........................... None
  moe_router_load_balancing_type .................. aux_loss
  moe_router_num_groups ........................... None
  moe_router_padding_for_fp8 ...................... False
  moe_router_pre_softmax .......................... False
  moe_router_score_function ....................... softmax
  moe_router_topk ................................. 2
  moe_router_topk_scaling_factor .................. None
  moe_shared_expert_intermediate_size ............. None
  moe_shared_expert_overlap ....................... False
  moe_token_dispatcher_type ....................... allgather
  moe_token_drop_policy ........................... probs
  moe_use_legacy_grouped_gemm ..................... False
  moe_use_upcycling ............................... False
  moe_z_loss_coeff ................................ None
  mrope_section ................................... None
  mscale .......................................... 1.0
  mscale_all_dim .................................. 1.0
  mtp_loss_scaling_factor ......................... 0.1
  mtp_num_layers .................................. None
  multi_latent_attention .......................... False
  nccl_all_reduce_for_prefill ..................... False
  nccl_communicator_config_path ................... None
  nccl_ub ......................................... False
  no_load_optim ................................... None
  no_load_rng ..................................... None
  no_persist_layer_norm ........................... False
  no_rope_freq .................................... None
  no_save_optim ................................... None
  no_save_rng ..................................... None
  non_persistent_ckpt_type ........................ None
  non_persistent_global_ckpt_dir .................. None
  non_persistent_local_ckpt_algo .................. fully_parallel
  non_persistent_local_ckpt_dir ................... None
  non_persistent_save_interval .................... None
  norm_epsilon .................................... 1e-05
  normalization ................................... LayerNorm
  num_attention_heads ............................. 64
  num_channels .................................... 3
  num_classes ..................................... 1000
  num_dataset_builder_threads ..................... 1
  num_distributed_optimizer_instances ............. 1
  num_experts ..................................... None
  num_layers ...................................... 2
  num_layers_at_end_in_bf16 ....................... 1
  num_layers_at_start_in_bf16 ..................... 1
  num_layers_per_virtual_pipeline_stage ........... None
  num_query_groups ................................ 16
  num_virtual_stages_per_pipeline_rank ............ None
  num_workers ..................................... 2
  object_storage_cache_path ....................... None
  one_logger_async ................................ False
  one_logger_project .............................. megatron-lm
  one_logger_run_name ............................. None
  onnx_safe ....................................... None
  openai_gelu ..................................... False
  optimizer ....................................... adam
  optimizer_cpu_offload ........................... False
  optimizer_offload_fraction ...................... 1.0
  output_bert_embeddings .......................... False
  overlap_cpu_optimizer_d2h_h2d ................... False
  overlap_grad_reduce ............................. False
  overlap_p2p_comm ................................ False
  overlap_p2p_comm_warmup_flush ................... False
  overlap_param_gather ............................ False
  overlap_param_gather_with_optimizer_step ........ False
  override_opt_param_scheduler .................... False
  params_dtype .................................... torch.float16
  patch_dim ....................................... 16
  per_split_data_args_path ........................ None
  perform_initialization .......................... True
  pin_cpu_grads ................................... True
  pin_cpu_params .................................. True
  pipeline_model_parallel_comm_backend ............ None
  pipeline_model_parallel_size .................... 1
  pipeline_model_parallel_split_rank .............. None
  position_embedding_type ......................... learned_absolute
  pretrained_checkpoint ........................... None
  profile ......................................... False
  profile_ranks ................................... [0]
  profile_step_end ................................ 12
  profile_step_start .............................. 10
  q_lora_rank ..................................... None
  qk_head_dim ..................................... 128
  qk_l2_norm ...................................... False
  qk_layernorm .................................... False
  qk_pos_emb_head_dim ............................. 64
  query_in_block_prob ............................. 0.1
  rampup_batch_size ............................... None
  rank ............................................ 0
  recompute_granularity ........................... None
  recompute_method ................................ None
  recompute_modules ............................... None
  recompute_num_layers ............................ None
  record_memory_history ........................... False
  relative_attention_max_distance ................. 128
  relative_attention_num_buckets .................. 32
  replication ..................................... False
  replication_factor .............................. 2
  replication_jump ................................ None
  rerun_mode ...................................... disabled
  reset_attention_mask ............................ False
  reset_position_ids .............................. False
  result_rejected_tracker_filename ................ None
  retriever_report_topk_accuracies ................ []
  retriever_score_scaling ......................... False
  retriever_seq_length ............................ 256
  retro_add_retriever ............................. False
  retro_attention_gate ............................ 1
  retro_cyclic_train_iters ........................ None
  retro_encoder_attention_dropout ................. 0.1
  retro_encoder_hidden_dropout .................... 0.1
  retro_encoder_layers ............................ 2
  retro_num_neighbors ............................. 2
  retro_num_retrieved_chunks ...................... 2
  retro_project_dir ............................... None
  retro_verify_neighbor_count ..................... True
  rope_scaling_factor ............................. 8.0
  rotary_base ..................................... 10000
  rotary_interleaved .............................. False
  rotary_percent .................................. 1.0
  rotary_scaling_factor ........................... 1.0
  rotary_seq_len_interpolation_factor ............. None
  run_workload_inspector_server ................... False
  sample_rate ..................................... 1.0
  save ............................................ gpt-checkpoint
  save_interval ................................... 16
  scatter_gather_tensors_in_pipeline .............. True
  seed ............................................ 1234
  seq_length ...................................... 1024
  sequence_parallel ............................... False
  sgd_momentum .................................... 0.9
  short_seq_prob .................................. 0.1
  skip_train ...................................... False
  skipped_train_samples ........................... 0
  spec ............................................ None
  split ........................................... None
  squared_relu .................................... False
  start_weight_decay .............................. 0.1
  straggler_ctrlr_port ............................ 65535
  straggler_minmax_count .......................... 1
  suggested_communication_unit_size ............... None
  swiglu .......................................... False
  swin_backbone_type .............................. tiny
  symmetric_ar_type ............................... None
  te_rng_tracker .................................. False
  tensor_model_parallel_size ...................... 2
  tensorboard_dir ................................. tensorboard-logs/
  tensorboard_log_interval ........................ 1
  tensorboard_queue_size .......................... 1000
  test_data_path .................................. None
  test_mode ....................................... False
  tiktoken_num_special_tokens ..................... 1000
  tiktoken_pattern ................................ None
  tiktoken_special_tokens ......................... None
  timing_log_level ................................ 0
  timing_log_option ............................... minmax
  titles_data_path ................................ None
  tokenizer_model ................................. None
  tokenizer_type .................................. GPT2BPETokenizer
  torch_fsdp2_reshard_after_forward ............... True
  tp_comm_bootstrap_backend ....................... nccl
  tp_comm_bulk_dgrad .............................. True
  tp_comm_bulk_wgrad .............................. True
  tp_comm_overlap ................................. False
  tp_comm_overlap_ag .............................. True
  tp_comm_overlap_cfg ............................. None
  tp_comm_overlap_rs .............................. True
  tp_comm_overlap_rs_dgrad ........................ False
  tp_comm_split_ag ................................ True
  tp_comm_split_rs ................................ True
  train_data_path ................................. None
  train_iters ..................................... 10
  train_samples ................................... None
  train_sync_interval ............................. None
  transformer_impl ................................ transformer_engine
  transformer_pipeline_model_parallel_size ........ 1
  untie_embeddings_and_output_weights ............. False
  use_checkpoint_args ............................. False
  use_checkpoint_opt_param_scheduler .............. False
  use_cpu_initialization .......................... None
  use_custom_fsdp ................................. False
  use_dist_ckpt ................................... True
  use_dist_ckpt_deprecated ........................ False
  use_distributed_optimizer ....................... False
  use_flash_attn .................................. False
  use_legacy_models ............................... False
  use_mp_args_from_checkpoint_args ................ False
  use_one_sent_docs ............................... False
  use_persistent_ckpt_worker ...................... False
  use_precision_aware_optimizer ................... False
  use_pytorch_profiler ............................ False
  use_ring_exchange_p2p ........................... False
  use_rope_scaling ................................ False
  use_rotary_position_embeddings .................. False
  use_sharp ....................................... False
  use_tokenizer_model_from_checkpoint_args ........ True
  use_torch_fsdp2 ................................. False
  use_torch_optimizer_for_cpu_offload ............. False
  use_tp_pp_dp_mapping ............................ False
  v_head_dim ...................................... 128
  valid_data_path ................................. None
  variable_seq_lengths ............................ False
  virtual_pipeline_model_parallel_size ............ None
  vision_backbone_type ............................ vit
  vision_pretraining .............................. False
  vision_pretraining_type ......................... classify
  vocab_extra_ids ................................. 0
  vocab_file ...................................... vocab.json
  vocab_size ...................................... None
  wandb_exp_name ..................................
  wandb_project ...................................
  wandb_save_dir ..................................
  weight_decay .................................... 0.1
  weight_decay_incr_style ......................... constant
  wgrad_deferral_limit ............................ 0
  world_size ...................................... 16
  yaml_cfg ........................................ None
-------------------- end of arguments ---------------------
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
> building GPT2BPETokenizer tokenizer ...
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
INFO:megatron.training.initialize:Setting logging level to 0
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
> initializing torch distributed ...
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> initialized tensor model parallel with size 2
> initialized pipeline model parallel with size 1
> setting random seeds to 1234 ...
> compiling dataset index builder ...
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
make: Nothing to be done for 'default'.
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.042 seconds
> compiling and loading fused kernels ...
>>> done with compiling and loading fused kernels. Compilation time: 2.459 seconds
time to initialize megatron (seconds): 8.926
[after megatron is initialized] datetime: 2025-06-21 21:59:17
557
building GPT model ...
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 283719680
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 283719680
>>> embedding
>>> embedding
>>> decoder
>>> decoder
>>> output_layer
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 283719680
>>> embedding
>>> embedding
>>> decoder
>>> output_layer
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 283719680
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 283719680
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 283719680
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
Params for bucket 1 (283719680 elements, 283719680 padded size):
	module.decoder.layers.1.mlp.linear_fc1.weight
	module.decoder.layers.0.mlp.linear_fc1.weight
	module.embedding.word_embeddings.weight
	module.decoder.layers.1.mlp.linear_fc2.bias
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_qkv.weight
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_proj.bias
	module.decoder.layers.1.mlp.linear_fc1.bias
	module.decoder.layers.0.mlp.linear_fc2.weight
	module.decoder.layers.0.mlp.linear_fc1.bias
	module.embedding.position_embeddings.weight
	module.decoder.final_layernorm.bias
	module.decoder.layers.1.self_attention.linear_qkv.weight
	module.decoder.layers.1.self_attention.linear_proj.weight
	module.decoder.layers.0.self_attention.linear_qkv.bias
	module.decoder.layers.1.mlp.linear_fc2.weight
	module.decoder.layers.1.self_attention.linear_proj.bias
	module.decoder.final_layernorm.weight
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.1.self_attention.linear_qkv.bias
	module.decoder.layers.0.mlp.linear_fc2.bias
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_proj.weight
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x1468718f2420>, config_logger_dir='')
>>> embedding
>>> embedding
>>> decoder
>>> decoder
>>> output_layer
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 283719680
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 283719680
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
    will not load any checkpoints and will start from random
(min, max) time across ranks (ms):
    load-checkpoint ................................: (2.92, 3.04)
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:59:17
> building train, validation, and test datasets ...
> datasets target sizes (minimum size):
    train:      10
    validation: 1
    test:       1
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
> building train, validation, and test datasets for GPT ...
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=1024, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x146871d076b0>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.006372 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66592
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.003615 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66562
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.003301 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 66686
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
> finished creating GPT datasets ...
[after dataloaders are built] datetime: 2025-06-21 21:59:17
done with setup ...
training ...
(min, max) time across ranks (ms):
    model-and-optimizer-setup ......................: (432.15, 445.08)
    train/valid/test-data-iterators-setup ..........: (22.61, 179.91)
Setting rerun_state_machine.current_iteration to 0...
[before the start of training step] datetime: 2025-06-21 21:59:17
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor: tokens torch.Size([8, 8192])
batch tensor: labels torch.Size([8, 8192])
batch tensor: loss_mask torch.Size([8, 8192])
batch tensor: attention_mask torch.Size([8, 1, 8192, 8192])
batch tensor: position_ids torch.Size([8, 8192])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
batch tensor after cp: tokens torch.Size([8, 1024])
batch tensor after cp: labels torch.Size([8, 1024])
batch tensor after cp: loss_mask torch.Size([8, 1024])
batch tensor after cp: attention_mask torch.Size([8, 1, 1024, 8192])
batch tensor after cp: position_ids torch.Size([8, 1024])
Start exporting trace 0
Done exporting trace 0
attnserver.run_attnserver.slurm.sh.343243.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343243.out.log CHANGED
@@ -14877,3 +14877,802 @@ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/mega
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
> compiling and loading fused kernels ...
>>> done with compiling and loading fused kernels. Compilation time: 2.191 seconds
time to initialize megatron (seconds): 7.532
[after megatron is initialized] datetime: 2025-06-21 21:58:16
building GPT model ...
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
Params for bucket 1 (447297536 elements, 447297536 padded size):
	module.decoder.layers.1.mlp.linear_fc1.bias
	module.decoder.layers.0.mlp.linear_fc1.bias
	module.embedding.position_embeddings.weight
	module.decoder.final_layernorm.bias
	module.decoder.layers.1.self_attention.linear_qkv.weight
	module.decoder.layers.1.self_attention.linear_proj.weight
	module.decoder.layers.0.self_attention.linear_qkv.weight
	module.decoder.layers.1.mlp.linear_fc2.weight
	module.decoder.layers.1.self_attention.linear_proj.bias
	module.decoder.final_layernorm.weight
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.0.mlp.linear_fc2.weight
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.1.self_attention.linear_qkv.bias
	module.decoder.layers.0.mlp.linear_fc2.bias
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_qkv.bias
	module.decoder.layers.1.mlp.linear_fc1.weight
	module.decoder.layers.0.mlp.linear_fc1.weight
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
	module.embedding.word_embeddings.weight
	module.decoder.layers.1.mlp.linear_fc2.bias
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_proj.weight
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_proj.bias
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14f358f7a480>, config_logger_dir='')
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
loading distributed checkpoint from gpt-checkpoint at iteration 10
Running ctx_length=49152, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=1
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 49152
TP_SIZE: 2
CP_SIZE: 4
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
INFO:megatron.training.initialize:Setting logging level to 0
using world size: 8, data-parallel size: 1, context-parallel size: 4, hierarchical context-parallel sizes: None, tensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
Number of virtual stages per pipeline stage: None
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
using torch.float16 for parameters ...
------------------------ arguments ------------------------
account_for_embedding_in_pipeline_split ......... False
account_for_loss_in_pipeline_split .............. False
accumulate_allreduce_grads_in_fp32 .............. False
adam_beta1 ...................................... 0.9
adam_beta2 ...................................... 0.999
adam_eps ........................................ 1e-08
add_bias_linear ................................. True
add_position_embedding .......................... True
add_qkv_bias .................................... True
adlr_autoresume ................................. False
adlr_autoresume_interval ........................ 1000
align_grad_reduce ............................... True
align_param_gather .............................. False
app_tag_run_name ................................ None
app_tag_run_version ............................. 0.0.0
apply_layernorm_1p .............................. False
apply_query_key_layer_scaling ................... False
apply_residual_connection_post_layernorm ........ False
apply_rope_fusion ............................... False
async_save ...................................... None
async_tensor_model_parallel_allreduce ........... True
attention_backend ............................... AttnBackend.auto
attention_dropout ............................... 0.1
attention_softmax_in_fp32 ....................... False
auto_detect_ckpt_format ......................... False
barrier_with_L1_time ............................ True
bert_binary_head ................................ True
bert_embedder_type .............................. megatron
bert_load ....................................... None
bf16 ............................................ False
bias_dropout_fusion ............................. True
bias_gelu_fusion ................................ True
bias_swiglu_fusion .............................. True
biencoder_projection_dim ........................ 0
biencoder_shared_query_context_model ............ False
block_data_path ................................. None
calc_ft_timeouts ................................ False
calculate_per_token_loss ........................ False
check_for_large_grads ........................... False
check_for_nan_in_loss_and_grad .................. False
check_for_spiky_loss ............................ False
check_weight_hash_across_dp_replicas_interval ... None
ckpt_assume_constant_structure .................. False
ckpt_convert_format ............................. None
ckpt_convert_save ............................... None
ckpt_convert_update_legacy_dist_opt_format ...... False
ckpt_format ..................................... torch_dist
ckpt_fully_parallel_load ........................ False
ckpt_fully_parallel_save ........................ True
ckpt_fully_parallel_save_deprecated ............. False
ckpt_step ....................................... None
classes_fraction ................................ 1.0
clip_grad ....................................... 1.0
clone_scatter_output_in_embedding ............... True
config_logger_dir ...............................
consumed_train_samples .......................... 0
consumed_valid_samples .......................... 0
context_parallel_size ........................... 4
cp_comm_type .................................... ['p2p']
create_attention_mask_in_dataloader ............. True
cross_entropy_fusion_impl ....................... native
cross_entropy_loss_fusion ....................... False
cuda_graph_scope ................................ full
cuda_graph_warmup_steps ......................... 3
data_args_path .................................. None
data_cache_path ................................. None
data_parallel_random_init ....................... False
data_parallel_sharding_strategy ................. no_shard
data_parallel_size .............................. 1
data_path ....................................... None
data_per_class_fraction ......................... 1.0
data_sharding ................................... True
dataloader_type ................................. single
ddp_average_in_collective ....................... False
ddp_bucket_size ................................. None
ddp_num_buckets ................................. None
ddp_pad_buckets_for_high_nccl_busbw ............. False
decoder_first_pipeline_num_layers ............... None
decoder_last_pipeline_num_layers ................ None
decoder_num_layers .............................. None
decoder_seq_length .............................. None
decoupled_lr .................................... None
decoupled_min_lr ................................ None
decrease_batch_size_if_needed ................... False
defer_embedding_wgrad_compute ................... False
deprecated_use_mcore_models ..................... False
deterministic_mode .............................. False
dino_bottleneck_size ............................ 256
dino_freeze_last_layer .......................... 1
15054
+ dino_head_hidden_size ........................... 2048
15055
+ dino_local_crops_number ......................... 10
15056
+ dino_local_img_size ............................. 96
15057
+ dino_norm_last_layer ............................ False
15058
+ dino_teacher_temp ............................... 0.07
15059
+ dino_warmup_teacher_temp ........................ 0.04
15060
+ dino_warmup_teacher_temp_epochs ................. 30
15061
+ disable_bf16_reduced_precision_matmul ........... False
15062
+ disable_mamba_mem_eff_path ...................... False
15063
+ disable_straggler_on_startup .................... False
15064
+ dist_ckpt_format_deprecated ..................... None
15065
+ dist_ckpt_strictness ............................ assume_ok_unexpected
15066
+ distribute_saved_activations .................... False
15067
+ distributed_backend ............................. nccl
15068
+ distributed_timeout_minutes ..................... 10
15069
+ embedding_path .................................. None
15070
+ empty_unused_memory_level ....................... 0
15071
+ enable_cuda_graph ............................... False
15072
+ enable_ft_package ............................... False
15073
+ enable_gloo_process_groups ...................... True
15074
+ enable_msc ...................................... True
15075
+ enable_one_logger ............................... True
15076
+ encoder_num_layers .............................. 2
15077
+ encoder_pipeline_model_parallel_size ............ 0
15078
+ encoder_seq_length .............................. 49152
15079
+ encoder_tensor_model_parallel_size .............. 0
15080
+ end_weight_decay ................................ 0.1
15081
+ eod_mask_loss ................................... False
15082
+ error_injection_rate ............................ 0
15083
+ error_injection_type ............................ transient_error
15084
+ eval_interval ................................... 16
15085
+ eval_iters ...................................... 1
15086
+ evidence_data_path .............................. None
15087
+ exit_duration_in_mins ........................... None
15088
+ exit_interval ................................... None
15089
+ exit_on_missing_checkpoint ...................... False
15090
+ exit_signal_handler ............................. False
15091
+ exp_avg_dtype ................................... torch.float32
15092
+ exp_avg_sq_dtype ................................ torch.float32
15093
+ expert_model_parallel_size ...................... 1
15094
+ expert_tensor_parallel_size ..................... 2
15095
+ external_cuda_graph ............................. False
15096
+ ffn_hidden_size ................................. 16384
15097
+ finetune ........................................ False
15098
+ first_last_layers_bf16 .......................... False
15099
+ flash_decode .................................... False
15100
+ fp16 ............................................ True
15101
+ fp16_lm_cross_entropy ........................... False
15102
+ fp32_residual_connection ........................ False
15103
+ fp8 ............................................. None
15104
+ fp8_amax_compute_algo ........................... most_recent
15105
+ fp8_amax_history_len ............................ 1
15106
+ fp8_interval .................................... 1
15107
+ fp8_margin ...................................... 0
15108
+ fp8_param_gather ................................ False
15109
+ fp8_recipe ...................................... delayed
15110
+ fp8_wgrad ....................................... True
15111
+ fsdp_double_buffer .............................. False
15112
+ global_batch_size ............................... 1
15113
+ grad_reduce_in_bf16 ............................. False
15114
+ gradient_accumulation_fusion .................... True
15115
+ gradient_reduce_div_fusion ...................... True
15116
+ group_query_attention ........................... True
15117
+ head_lr_mult .................................... 1.0
15118
+ heterogeneous_layers_config_encoded_json ........ None
15119
+ heterogeneous_layers_config_path ................ None
15120
+ hidden_dropout .................................. 0.1
15121
+ hidden_size ..................................... 4096
15122
+ hierarchical_context_parallel_sizes ............. None
15123
+ high_priority_stream_groups ..................... []
15124
+ hybrid_attention_ratio .......................... 0.0
15125
+ hybrid_mlp_ratio ................................ 0.0
15126
+ hybrid_override_pattern ......................... None
15127
+ hysteresis ...................................... 2
15128
+ ict_head_size ................................... None
15129
+ ict_load ........................................ None
15130
+ img_h ........................................... 224
15131
+ img_w ........................................... 224
15132
+ indexer_batch_size .............................. 128
15133
+ indexer_log_interval ............................ 1000
15134
+ inference_batch_times_seqlen_threshold .......... -1
15135
+ inference_dynamic_batching ...................... False
15136
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
15137
+ inference_dynamic_batching_buffer_overflow_factor None
15138
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
15139
+ inference_dynamic_batching_chunk_size ........... 256
15140
+ inference_dynamic_batching_max_requests_override None
15141
+ inference_dynamic_batching_max_tokens_override .. None
15142
+ inference_max_batch_size ........................ 8
15143
+ inference_max_seq_length ........................ 2560
15144
+ inference_rng_tracker ........................... False
15145
+ init_method_std ................................. 0.02
15146
+ init_method_xavier_uniform ...................... False
15147
+ init_model_with_meta_device ..................... False
15148
+ initial_loss_scale .............................. 4294967296
15149
+ inprocess_active_world_size ..................... 8
15150
+ inprocess_barrier_timeout ....................... 120
15151
+ inprocess_completion_timeout .................... 120
15152
+ inprocess_empty_cuda_cache ...................... False
15153
+ inprocess_granularity ........................... node
15154
+ inprocess_hard_timeout .......................... 90
15155
+ inprocess_heartbeat_interval .................... 30
15156
+ inprocess_heartbeat_timeout ..................... 60
15157
+ inprocess_last_call_wait ........................ 1
15158
+ inprocess_max_iterations ........................ None
15159
+ inprocess_monitor_process_interval .............. 1.0
15160
+ inprocess_monitor_thread_interval ............... 1.0
15161
+ inprocess_progress_watchdog_interval ............ 1.0
15162
+ inprocess_restart ............................... False
15163
+ inprocess_soft_timeout .......................... 60
15164
+ inprocess_termination_grace_time ................ 1
15165
+ is_hybrid_model ................................. False
15166
+ iter_per_epoch .................................. 1250
15167
+ iterations_to_skip .............................. []
15168
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
15169
+ kv_channels ..................................... 64
15170
+ kv_lora_rank .................................... 32
15171
+ lazy_mpu_init ................................... None
15172
+ load ............................................ gpt-checkpoint
15173
+ load_model_opt_format ........................... False
15174
+ local_rank ...................................... 0
15175
+ log_interval .................................... 1
15176
+ log_loss_scale_to_tensorboard ................... True
15177
+ log_memory_to_tensorboard ....................... False
15178
+ log_num_zeros_in_grad ........................... False
15179
+ log_params_norm ................................. False
15180
+ log_progress .................................... False
15181
+ log_straggler ................................... False
15182
+ log_throughput .................................. False
15183
+ log_timers_to_tensorboard ....................... False
15184
+ log_validation_ppl_to_tensorboard ............... False
15185
+ log_world_size_to_tensorboard ................... False
15186
+ logging_level ................................... 0
15187
+ loss_scale ...................................... None
15188
+ loss_scale_window ............................... 1000
15189
+ lr .............................................. 0.0005
15190
+ lr_decay_iters .................................. 150000
15191
+ lr_decay_samples ................................ None
15192
+ lr_decay_style .................................. cosine
15193
+ lr_warmup_fraction .............................. None
15194
+ lr_warmup_init .................................. 0.0
15195
+ lr_warmup_iters ................................. 2
15196
+ lr_warmup_samples ............................... 0
15197
+ lr_wsd_decay_iters .............................. None
15198
+ lr_wsd_decay_samples ............................ None
15199
+ lr_wsd_decay_style .............................. exponential
15200
+ main_grads_dtype ................................ torch.float32
15201
+ main_params_dtype ............................... torch.float32
15202
+ make_vocab_size_divisible_by .................... 128
15203
+ mamba_head_dim .................................. 64
15204
+ mamba_num_groups ................................ 8
15205
+ mamba_num_heads ................................. None
15206
+ mamba_state_dim ................................. 128
15207
+ manual_gc ....................................... False
15208
+ manual_gc_eval .................................. True
15209
+ manual_gc_interval .............................. 0
15210
+ mask_factor ..................................... 1.0
15211
+ mask_prob ....................................... 0.15
15212
+ mask_type ....................................... random
15213
+ masked_softmax_fusion ........................... True
15214
+ max_position_embeddings ......................... 49152
15215
+ max_tokens_to_oom ............................... 12000
15216
+ memory_snapshot_path ............................ snapshot.pickle
15217
+ merge_file ...................................... merges.txt
15218
+ micro_batch_size ................................ 1
15219
+ microbatch_group_size_per_vp_stage .............. None
15220
+ mid_level_dataset_surplus ....................... 0.005
15221
+ min_loss_scale .................................. 1.0
15222
+ min_lr .......................................... 0.0
15223
+ mlp_chunks_for_prefill .......................... 1
15224
+ mmap_bin_files .................................. True
15225
+ mock_data ....................................... True
15226
+ moe_apply_probs_on_input ........................ False
15227
+ moe_aux_loss_coeff .............................. 0.0
15228
+ moe_enable_deepep ............................... False
15229
+ moe_expert_capacity_factor ...................... None
15230
+ moe_extended_tp ................................. False
15231
+ moe_ffn_hidden_size ............................. None
15232
+ moe_grouped_gemm ................................ False
15233
+ moe_input_jitter_eps ............................ None
15234
+ moe_layer_freq .................................. 1
15235
+ moe_layer_recompute ............................. False
15236
+ moe_pad_expert_input_to_capacity ................ False
15237
+ moe_per_layer_logging ........................... False
15238
+ moe_permute_fusion .............................. False
15239
+ moe_router_bias_update_rate ..................... 0.001
15240
+ moe_router_dtype ................................ None
15241
+ moe_router_enable_expert_bias ................... False
15242
+ moe_router_force_load_balancing ................. False
15243
+ moe_router_group_topk ........................... None
15244
+ moe_router_load_balancing_type .................. aux_loss
15245
+ moe_router_num_groups ........................... None
15246
+ moe_router_padding_for_fp8 ...................... False
15247
+ moe_router_pre_softmax .......................... False
15248
+ moe_router_score_function ....................... softmax
15249
+ moe_router_topk ................................. 2
15250
+ moe_router_topk_scaling_factor .................. None
15251
+ moe_shared_expert_intermediate_size ............. None
15252
+ moe_shared_expert_overlap ....................... False
15253
+ moe_token_dispatcher_type ....................... allgather
15254
+ moe_token_drop_policy ........................... probs
15255
+ moe_use_legacy_grouped_gemm ..................... False
15256
+ moe_use_upcycling ............................... False
15257
+ moe_z_loss_coeff ................................ None
15258
+ mrope_section ................................... None
15259
+ mscale .......................................... 1.0
15260
+ mscale_all_dim .................................. 1.0
15261
+ mtp_loss_scaling_factor ......................... 0.1
15262
+ mtp_num_layers .................................. None
15263
+ multi_latent_attention .......................... False
15264
+ nccl_all_reduce_for_prefill ..................... False
15265
+ nccl_communicator_config_path ................... None
15266
+ nccl_ub ......................................... False
15267
+ no_load_optim ................................... None
15268
+ no_load_rng ..................................... None
15269
+ no_persist_layer_norm ........................... False
15270
+ no_rope_freq .................................... None
15271
+ no_save_optim ................................... None
15272
+ no_save_rng ..................................... None
15273
+ non_persistent_ckpt_type ........................ None
15274
+ non_persistent_global_ckpt_dir .................. None
15275
+ non_persistent_local_ckpt_algo .................. fully_parallel
15276
+ non_persistent_local_ckpt_dir ................... None
15277
+ non_persistent_save_interval .................... None
15278
+ norm_epsilon .................................... 1e-05
15279
+ normalization ................................... LayerNorm
15280
+ num_attention_heads ............................. 64
15281
+ num_channels .................................... 3
15282
+ num_classes ..................................... 1000
15283
+ num_dataset_builder_threads ..................... 1
15284
+ num_distributed_optimizer_instances ............. 1
15285
+ num_experts ..................................... None
15286
+ num_layers ...................................... 2
15287
+ num_layers_at_end_in_bf16 ....................... 1
15288
+ num_layers_at_start_in_bf16 ..................... 1
15289
+ num_layers_per_virtual_pipeline_stage ........... None
15290
+ num_query_groups ................................ 16
15291
+ num_virtual_stages_per_pipeline_rank ............ None
15292
+ num_workers ..................................... 2
15293
+ object_storage_cache_path ....................... None
15294
+ one_logger_async ................................ False
15295
+ one_logger_project .............................. megatron-lm
15296
+ one_logger_run_name ............................. None
15297
+ onnx_safe ....................................... None
15298
+ openai_gelu ..................................... False
15299
+ optimizer ....................................... adam
15300
+ optimizer_cpu_offload ........................... False
15301
+ optimizer_offload_fraction ...................... 1.0
15302
+ output_bert_embeddings .......................... False
15303
+ overlap_cpu_optimizer_d2h_h2d ................... False
15304
+ overlap_grad_reduce ............................. False
15305
+ overlap_p2p_comm ................................ False
15306
+ overlap_p2p_comm_warmup_flush ................... False
15307
+ overlap_param_gather ............................ False
15308
+ overlap_param_gather_with_optimizer_step ........ False
15309
+ override_opt_param_scheduler .................... False
15310
+ params_dtype .................................... torch.float16
15311
+ patch_dim ....................................... 16
15312
+ per_split_data_args_path ........................ None
15313
+ perform_initialization .......................... True
15314
+ pin_cpu_grads ................................... True
15315
+ pin_cpu_params .................................. True
15316
+ pipeline_model_parallel_comm_backend ............ None
15317
+ pipeline_model_parallel_size .................... 1
15318
+ pipeline_model_parallel_split_rank .............. None
15319
+ position_embedding_type ......................... learned_absolute
15320
+ pretrained_checkpoint ........................... None
15321
+ profile ......................................... False
15322
+ profile_ranks ................................... [0]
15323
+ profile_step_end ................................ 12
15324
+ profile_step_start .............................. 10
15325
+ q_lora_rank ..................................... None
15326
+ qk_head_dim ..................................... 128
15327
+ qk_l2_norm ...................................... False
15328
+ qk_layernorm .................................... False
15329
+ qk_pos_emb_head_dim ............................. 64
15330
+ query_in_block_prob ............................. 0.1
15331
+ rampup_batch_size ............................... None
15332
+ rank ............................................ 0
15333
+ recompute_granularity ........................... None
15334
+ recompute_method ................................ None
15335
+ recompute_modules ............................... None
15336
+ recompute_num_layers ............................ None
15337
+ record_memory_history ........................... False
15338
+ relative_attention_max_distance ................. 128
15339
+ relative_attention_num_buckets .................. 32
15340
+ replication ..................................... False
15341
+ replication_factor .............................. 2
15342
+ replication_jump ................................ None
15343
+ rerun_mode ...................................... disabled
15344
+ reset_attention_mask ............................ False
15345
+ reset_position_ids .............................. False
15346
+ result_rejected_tracker_filename ................ None
15347
+ retriever_report_topk_accuracies ................ []
15348
+ retriever_score_scaling ......................... False
15349
+ retriever_seq_length ............................ 256
15350
+ retro_add_retriever ............................. False
15351
+ retro_attention_gate ............................ 1
15352
+ retro_cyclic_train_iters ........................ None
15353
+ retro_encoder_attention_dropout ................. 0.1
15354
+ retro_encoder_hidden_dropout .................... 0.1
15355
+ retro_encoder_layers ............................ 2
15356
+ retro_num_neighbors ............................. 2
15357
+ retro_num_retrieved_chunks ...................... 2
15358
+ retro_project_dir ............................... None
15359
+ retro_verify_neighbor_count ..................... True
15360
+ rope_scaling_factor ............................. 8.0
15361
+ rotary_base ..................................... 10000
15362
+ rotary_interleaved .............................. False
15363
+ rotary_percent .................................. 1.0
15364
+ rotary_scaling_factor ........................... 1.0
15365
+ rotary_seq_len_interpolation_factor ............. None
15366
+ run_workload_inspector_server ................... False
15367
+ sample_rate ..................................... 1.0
15368
+ save ............................................ gpt-checkpoint
15369
+ save_interval ................................... 16
15370
+ scatter_gather_tensors_in_pipeline .............. True
15371
+ seed ............................................ 1234
15372
+ seq_length ...................................... 49152
15373
+ sequence_parallel ............................... False
15374
+ sgd_momentum .................................... 0.9
15375
+ short_seq_prob .................................. 0.1
15376
+ skip_train ...................................... False
15377
+ skipped_train_samples ........................... 0
15378
+ spec ............................................ None
15379
+ split ........................................... None
15380
+ squared_relu .................................... False
15381
+ start_weight_decay .............................. 0.1
15382
+ straggler_ctrlr_port ............................ 65535
15383
+ straggler_minmax_count .......................... 1
15384
+ suggested_communication_unit_size ............... None
15385
+ swiglu .......................................... False
15386
+ swin_backbone_type .............................. tiny
15387
+ symmetric_ar_type ............................... None
15388
+ te_rng_tracker .................................. False
15389
+ tensor_model_parallel_size ...................... 2
15390
+ tensorboard_dir ................................. tensorboard-logs/
15391
+ tensorboard_log_interval ........................ 1
15392
+ tensorboard_queue_size .......................... 1000
15393
+ test_data_path .................................. None
15394
+ test_mode ....................................... False
15395
+ tiktoken_num_special_tokens ..................... 1000
15396
+ tiktoken_pattern ................................ None
15397
+ tiktoken_special_tokens ......................... None
15398
+ timing_log_level ................................ 0
15399
+ timing_log_option ............................... minmax
15400
+ titles_data_path ................................ None
15401
+ tokenizer_model ................................. None
15402
+ tokenizer_type .................................. GPT2BPETokenizer
15403
+ torch_fsdp2_reshard_after_forward ............... True
15404
+ tp_comm_bootstrap_backend ....................... nccl
15405
+ tp_comm_bulk_dgrad .............................. True
15406
+ tp_comm_bulk_wgrad .............................. True
15407
+ tp_comm_overlap ................................. False
15408
+ tp_comm_overlap_ag .............................. True
15409
+ tp_comm_overlap_cfg ............................. None
15410
+ tp_comm_overlap_rs .............................. True
15411
+ tp_comm_overlap_rs_dgrad ........................ False
15412
+ tp_comm_split_ag ................................ True
15413
+ tp_comm_split_rs ................................ True
15414
+ train_data_path ................................. None
15415
+ train_iters ..................................... 10
15416
+ train_samples ................................... None
15417
+ train_sync_interval ............................. None
15418
+ transformer_impl ................................ transformer_engine
15419
+ transformer_pipeline_model_parallel_size ........ 1
15420
+ untie_embeddings_and_output_weights ............. False
15421
+ use_checkpoint_args ............................. False
15422
+ use_checkpoint_opt_param_scheduler .............. False
15423
+ use_cpu_initialization .......................... None
15424
+ use_custom_fsdp ................................. False
15425
+ use_dist_ckpt ................................... True
15426
+ use_dist_ckpt_deprecated ........................ False
15427
+ use_distributed_optimizer ....................... False
15428
+ use_flash_attn .................................. False
15429
+ use_legacy_models ............................... False
15430
+ use_mp_args_from_checkpoint_args ................ False
15431
+ use_one_sent_docs ............................... False
15432
+ use_persistent_ckpt_worker ...................... False
15433
+ use_precision_aware_optimizer ................... False
15434
+ use_pytorch_profiler ............................ False
15435
+ use_ring_exchange_p2p ........................... False
15436
+ use_rope_scaling ................................ False
15437
+ use_rotary_position_embeddings .................. False
15438
+ use_sharp ....................................... False
15439
+ use_tokenizer_model_from_checkpoint_args ........ True
15440
+ use_torch_fsdp2 ................................. False
15441
+ use_torch_optimizer_for_cpu_offload ............. False
15442
+ use_tp_pp_dp_mapping ............................ False
15443
+ v_head_dim ...................................... 128
15444
+ valid_data_path ................................. None
15445
+ variable_seq_lengths ............................ False
15446
+ virtual_pipeline_model_parallel_size ............ None
15447
+ vision_backbone_type ............................ vit
15448
+ vision_pretraining .............................. False
15449
+ vision_pretraining_type ......................... classify
15450
+ vocab_extra_ids ................................. 0
15451
+ vocab_file ...................................... vocab.json
15452
+ vocab_size ...................................... None
15453
+ wandb_exp_name ..................................
15454
+ wandb_project ...................................
15455
+ wandb_save_dir ..................................
15456
+ weight_decay .................................... 0.1
15457
+ weight_decay_incr_style ......................... constant
15458
+ wgrad_deferral_limit ............................ 0
15459
+ world_size ...................................... 8
15460
+ yaml_cfg ........................................ None
15461
+ -------------------- end of arguments ---------------------
15462
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
15463
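The constant microbatch count reported above follows directly from the batch arguments in the dump. A minimal sketch of the arithmetic (mirroring, not importing, Megatron's microbatch calculator; the function name here is illustrative):

```python
def num_microbatches(global_batch_size, micro_batch_size, data_parallel_size):
    # With no rampup_batch_size, the microbatch count is constant:
    # global_batch_size must be divisible by micro_batch_size * DP size.
    chunk = micro_batch_size * data_parallel_size
    assert global_batch_size % chunk == 0, "global batch not divisible by mbs * dp"
    return global_batch_size // chunk

# Values from the argument dump: global_batch_size=1, micro_batch_size=1,
# data_parallel_size=1 (world_size 8 = TP 2 x CP 4 x PP 1 x DP 1).
print(num_microbatches(1, 1, 1))  # -> 1
```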
+ > building GPT2BPETokenizer tokenizer ...
15464
+ INFO:megatron.training.initialize:Setting logging level to 0
15465
+ INFO:megatron.training.initialize:Setting logging level to 0
15466
+ INFO:megatron.training.initialize:Setting logging level to 0
15467
+ > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
15468
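The 175 dummy tokens are reproducible from `make_vocab_size_divisible_by=128` and `tensor_model_parallel_size=2` in the argument dump: the vocab is padded up to a multiple of their product so each tensor-parallel shard of the embedding is the same size. A sketch of that rule (mirroring the logic, not calling Megatron's own helper):

```python
def pad_vocab_size(orig_vocab_size, make_divisible_by, tensor_parallel_size):
    # Round up to the next multiple of make_divisible_by * TP size.
    multiple = make_divisible_by * tensor_parallel_size
    return ((orig_vocab_size + multiple - 1) // multiple) * multiple

padded = pad_vocab_size(50257, 128, 2)   # multiple of 256
print(padded, padded - 50257)            # -> 50432 175
```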
+ INFO:megatron.training.initialize:Setting logging level to 0
15469
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
15470
+ > initializing torch distributed ...
15471
+ INFO:megatron.training.initialize:Setting logging level to 0
15472
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
15473
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
15474
+ INFO:megatron.training.initialize:Setting logging level to 0
15475
+ INFO:megatron.training.initialize:Setting logging level to 0
15476
+ > initialized tensor model parallel with size 2
15477
+ > initialized pipeline model parallel with size 1
15478
+ > setting random seeds to 1234 ...
15479
+ > compiling dataset index builder ...
15480
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
15481
+ make: Nothing to be done for 'default'.
15482
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
15483
+ >>> done with dataset index builder. Compilation time: 0.047 seconds
15484
+ WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
15485
+ > compiling and loading fused kernels ...
15486
+ >>> done with compiling and loading fused kernels. Compilation time: 2.128 seconds
15487
+ time to initialize megatron (seconds): 7.450
15488
+ [after megatron is initialized] datetime: 2025-06-21 21:58:58
15489
+ building GPT model ...
15490
+ >>> embedding
15491
+ >>> decoder
15492
+ >>> output_layer
15493
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 480851968
15494
+ >>> embedding
15495
+ >>> decoder
15496
+ >>> output_layer
15497
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 480851968
15498
+ >>> embedding
15499
+ >>> decoder
15500
+ >>> output_layer
15501
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 480851968
15502
+ >>> embedding
15503
+ >>> decoder
15504
+ >>> output_layer
15505
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 480851968
15506
+ >>> embedding
15507
+ >>> decoder
15508
+ >>> output_layer
15509
+ >>> embedding
15510
+ >>> decoder
15511
+ >>> output_layer
15512
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 480851968
15513
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 480851968
15514
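The 480851968 figure on every (tensor, pipeline) rank is consistent with the arguments above. A back-of-the-envelope sketch, under the usual Megatron sharding assumptions (column-parallel weights and biases sharded over TP, row-parallel biases and LayerNorms replicated, learned position embeddings replicated, output layer tied to the word embeddings):

```python
def params_per_tp_rank(hidden=4096, ffn=16384, layers=2, padded_vocab=50432,
                       max_pos=49152, heads=64, kv_channels=64,
                       query_groups=16, tp=2):
    # GQA projection: 64 query heads plus 16 K and 16 V groups of 64 channels.
    qkv_out = heads * kv_channels + 2 * query_groups * kv_channels  # 6144
    per_layer = (
        hidden * qkv_out // tp + qkv_out // tp   # QKV weight + bias (column-parallel)
        + hidden * hidden // tp + hidden         # attn proj (row-parallel) + full bias
        + hidden * ffn // tp + ffn // tp         # fc1 weight + bias (column-parallel)
        + ffn * hidden // tp + hidden            # fc2 (row-parallel) + full bias
        + 4 * hidden                             # two LayerNorms (weight + bias each)
    )
    return (
        padded_vocab * hidden // tp   # word embeddings, sharded over vocab
        + max_pos * hidden            # learned absolute position embeddings
        + layers * per_layer
        + 2 * hidden                  # final LayerNorm; output layer is tied
    )

print(params_per_tp_rank())  # -> 480851968
```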
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
15515
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
15516
+ Params for bucket 1 (480851968 elements, 480851968 padded size):
15517
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
15518
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
15519
+ module.decoder.layers.0.self_attention.linear_proj.bias
15520
+ module.decoder.layers.1.mlp.linear_fc1.bias
15521
+ module.decoder.layers.0.mlp.linear_fc1.bias
15522
+ module.decoder.layers.1.self_attention.linear_qkv.weight
15523
+ module.decoder.layers.1.self_attention.linear_proj.weight
15524
+ module.decoder.layers.0.self_attention.linear_qkv.weight
15525
+ module.decoder.layers.1.mlp.linear_fc2.weight
15526
+ module.decoder.layers.1.self_attention.linear_proj.bias
15527
+ module.decoder.final_layernorm.bias
15528
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
15529
+ module.decoder.layers.0.mlp.linear_fc2.weight
15530
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
15531
+ module.embedding.word_embeddings.weight
15532
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
15533
+ module.decoder.layers.1.self_attention.linear_qkv.bias
15534
+ module.decoder.layers.0.mlp.linear_fc2.bias
15535
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
15536
+ module.decoder.layers.0.self_attention.linear_qkv.bias
15537
+ module.decoder.final_layernorm.weight
15538
+ module.decoder.layers.1.mlp.linear_fc1.weight
15539
+ module.decoder.layers.0.mlp.linear_fc1.weight
15540
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
15541
+ module.embedding.position_embeddings.weight
15542
+ module.decoder.layers.1.mlp.linear_fc2.bias
15543
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
15544
+ module.decoder.layers.0.self_attention.linear_proj.weight
15545
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14fe8ed9a2d0>, config_logger_dir='')
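The OptimizerConfig above enables FP16 dynamic loss scaling (initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2), which also explains the later warning that check_for_nan_in_loss_and_grad is forced off. A minimal sketch of that scaling policy, assuming the standard scheme (this class and its method names are illustrative, not Megatron's actual implementation):

```python
class DynamicLossScaler:
    """Sketch of a dynamic FP16 loss scaler: halve on repeated overflow
    (after `hysteresis` consecutive overflows), double after `window`
    clean steps."""

    def __init__(self, initial_scale=2**32, min_scale=1.0, window=1000, hysteresis=2):
        self.scale = float(initial_scale)
        self.min_scale = min_scale
        self.window = window
        self.hysteresis = hysteresis
        self._hysteresis_left = hysteresis
        self._good_steps = 0

    def update(self, found_overflow: bool) -> None:
        if found_overflow:
            self._good_steps = 0
            self._hysteresis_left -= 1
            if self._hysteresis_left <= 0:
                # halve the scale, but never below min_scale
                self.scale = max(self.scale / 2.0, self.min_scale)
                self._hysteresis_left = self.hysteresis
        else:
            self._good_steps += 1
            if self._good_steps == self.window:
                # a full window of clean steps: try a larger scale again
                self.scale *= 2.0
                self._good_steps = 0
```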
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 480851968
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 480851968
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
will not load any checkpoints and will start from random
(min, max) time across ranks (ms):
    load-checkpoint ................................: (5.02, 5.22)
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:59:01
> building train, validation, and test datasets ...
> datasets target sizes (minimum size):
    train:      10
    validation: 1
    test:       1
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
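The split_matrix logged above is just the "1,1,1" split string normalized into cumulative (start, end) intervals over [0, 1). A sketch of that computation (the helper name is illustrative, not Megatron's actual API):

```python
from typing import List, Tuple

def split_to_matrix(split: str) -> List[Tuple[float, float]]:
    """Normalize a comma-separated weight string like "1,1,1" into
    consecutive (start, end) fractions covering [0, 1)."""
    weights = [float(w) for w in split.split(",")]
    total = sum(weights)
    bounds, acc = [0.0], 0.0
    for w in weights:
        acc += w / total
        bounds.append(acc)
    return list(zip(bounds[:-1], bounds[1:]))
```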
> building train, validation, and test datasets for GPT ...
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=49152, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14fe8f785430>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:	> time elapsed: 0.005154 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1387
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:	> time elapsed: 0.001636 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1386
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:	> time elapsed: 0.001354 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1389
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
> finished creating GPT datasets ...
[after dataloaders are built] datetime: 2025-06-21 21:59:01
done with setup ...
(min, max) time across ranks (ms):
    model-and-optimizer-setup ......................: (2568.09, 2585.27)
    train/valid/test-data-iterators-setup ..........: (45.93, 175.89)
training ...
Setting rerun_state_machine.current_iteration to 0...
[before the start of training step] datetime: 2025-06-21 21:59:01
batch tensor: tokens torch.Size([1, 49152])
batch tensor: labels torch.Size([1, 49152])
batch tensor: loss_mask torch.Size([1, 49152])
batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
batch tensor: position_ids torch.Size([1, 49152])
batch tensor: tokens torch.Size([1, 49152])
batch tensor: labels torch.Size([1, 49152])
batch tensor: loss_mask torch.Size([1, 49152])
batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
batch tensor: position_ids torch.Size([1, 49152])
batch tensor after cp: tokens torch.Size([1, 12288])
batch tensor after cp: labels torch.Size([1, 12288])
batch tensor after cp: loss_mask torch.Size([1, 12288])
batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 49152])
batch tensor after cp: position_ids torch.Size([1, 12288])
batch tensor after cp: tokens torch.Size([1, 12288])
batch tensor after cp: labels torch.Size([1, 12288])
batch tensor after cp: loss_mask torch.Size([1, 12288])
batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 49152])
batch tensor after cp: position_ids torch.Size([1, 12288])
batch tensor: tokens torch.Size([1, 49152])
batch tensor: labels torch.Size([1, 49152])
batch tensor: loss_mask torch.Size([1, 49152])
batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
batch tensor: position_ids torch.Size([1, 49152])
batch tensor after cp: tokens torch.Size([1, 12288])
batch tensor after cp: labels torch.Size([1, 12288])
batch tensor after cp: loss_mask torch.Size([1, 12288])
batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 49152])
batch tensor after cp: position_ids torch.Size([1, 12288])
batch tensor: tokens torch.Size([1, 49152])
batch tensor: labels torch.Size([1, 49152])
batch tensor: loss_mask torch.Size([1, 49152])
batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
batch tensor: position_ids torch.Size([1, 49152])
batch tensor: tokens torch.Size([1, 49152])
batch tensor: labels torch.Size([1, 49152])
batch tensor: loss_mask torch.Size([1, 49152])
batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
batch tensor: position_ids torch.Size([1, 49152])
batch tensor after cp: tokens torch.Size([1, 12288])
batch tensor after cp: labels torch.Size([1, 12288])
batch tensor after cp: loss_mask torch.Size([1, 12288])
batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 49152])
batch tensor after cp: position_ids torch.Size([1, 12288])
batch tensor after cp: tokens torch.Size([1, 12288])
batch tensor after cp: labels torch.Size([1, 12288])
batch tensor after cp: loss_mask torch.Size([1, 12288])
batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 49152])
batch tensor after cp: position_ids torch.Size([1, 12288])
batch tensor: tokens torch.Size([1, 49152])
batch tensor: labels torch.Size([1, 49152])
batch tensor: loss_mask torch.Size([1, 49152])
batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
batch tensor: position_ids torch.Size([1, 49152])
batch tensor: tokens torch.Size([1, 49152])
batch tensor: labels torch.Size([1, 49152])
batch tensor: loss_mask torch.Size([1, 49152])
batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
batch tensor: position_ids torch.Size([1, 49152])
batch tensor after cp: tokens torch.Size([1, 12288])
batch tensor after cp: labels torch.Size([1, 12288])
batch tensor after cp: loss_mask torch.Size([1, 12288])
batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 49152])
batch tensor after cp: position_ids torch.Size([1, 12288])
batch tensor after cp: tokens torch.Size([1, 12288])
batch tensor after cp: labels torch.Size([1, 12288])
batch tensor after cp: loss_mask torch.Size([1, 12288])
batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 49152])
batch tensor after cp: position_ids torch.Size([1, 12288])
batch tensor: tokens torch.Size([1, 49152])
batch tensor: labels torch.Size([1, 49152])
batch tensor: loss_mask torch.Size([1, 49152])
batch tensor: attention_mask torch.Size([1, 1, 49152, 49152])
batch tensor: position_ids torch.Size([1, 49152])
batch tensor after cp: tokens torch.Size([1, 12288])
batch tensor after cp: labels torch.Size([1, 12288])
batch tensor after cp: loss_mask torch.Size([1, 12288])
batch tensor after cp: attention_mask torch.Size([1, 1, 12288, 49152])
batch tensor after cp: position_ids torch.Size([1, 12288])
Start exporting trace 0
Done exporting trace 0
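The shape logs above show what context parallelism (CP=4) does to each batch: the 49152-token sequence dimension of tokens/labels/loss_mask/position_ids shrinks to 12288 per rank, while the attention mask keeps its full key dimension and shards only the query dimension. A minimal sketch of that shape bookkeeping, assuming contiguous slicing (Megatron's real implementation interleaves chunks for load balancing; the function name is illustrative):

```python
def shard_for_cp(shapes: dict, cp_size: int, seq_dim: int = 1) -> dict:
    """Given per-tensor shapes for one batch, return the per-rank shapes
    after context-parallel sharding: the attention mask shards only its
    query dimension (dim 2), every other tensor shards `seq_dim`."""
    out = {}
    for name, shape in shapes.items():
        shape = list(shape)
        if name == "attention_mask":
            shape[2] //= cp_size  # query dim sharded, key dim kept full
        else:
            shape[seq_dim] //= cp_size
        out[name] = tuple(shape)
    return out
```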
attnserver.run_attnserver.slurm.sh.343244.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343244.out.log CHANGED
@@ -11710,3 +11710,802 @@ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/mega
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
> compiling and loading fused kernels ...
>>> done with compiling and loading fused kernels. Compilation time: 2.124 seconds
time to initialize megatron (seconds): 7.465
[after megatron is initialized] datetime: 2025-06-21 21:58:15
building GPT model ...
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 413743104
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 413743104
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
Params for bucket 1 (413743104 elements, 413743104 padded size):
	module.decoder.final_layernorm.bias
	module.decoder.layers.1.mlp.linear_fc2.bias
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_proj.weight
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_proj.bias
	module.embedding.word_embeddings.weight
	module.decoder.final_layernorm.weight
	module.decoder.layers.1.mlp.linear_fc1.bias
	module.decoder.layers.0.mlp.linear_fc1.bias
	module.decoder.layers.1.self_attention.linear_qkv.weight
	module.decoder.layers.1.self_attention.linear_proj.weight
	module.decoder.layers.0.self_attention.linear_qkv.weight
	module.decoder.layers.1.mlp.linear_fc2.weight
	module.decoder.layers.1.self_attention.linear_proj.bias
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.0.mlp.linear_fc2.weight
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
	module.embedding.position_embeddings.weight
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.1.self_attention.linear_qkv.bias
	module.decoder.layers.0.mlp.linear_fc2.bias
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_qkv.bias
	module.decoder.layers.1.mlp.linear_fc1.weight
	module.decoder.layers.0.mlp.linear_fc1.weight
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14f5cfd43b00>, config_logger_dir='')
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
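The scheduler reports cosine learning-rate decay with lr=0.0005 and min_lr=0.0 (from the OptimizerConfig above). A sketch of the standard cosine-annealing formula this implies, not Megatron's exact scheduler class (which also handles warmup):

```python
import math

def cosine_lr(step: int, max_steps: int, max_lr: float = 5e-4, min_lr: float = 0.0) -> float:
    """Cosine decay from max_lr at step 0 to min_lr at max_steps."""
    frac = min(step, max_steps) / max_steps
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * frac))
```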
loading distributed checkpoint from gpt-checkpoint at iteration 10
Running ctx_length=40960, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=2
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 40960
TP_SIZE: 2
CP_SIZE: 4
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
using world size: 8, data-parallel size: 1, context-parallel size: 4, hierarchical context-parallel sizes: None, tensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
Number of virtual stages per pipeline stage: None
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
using torch.float16 for parameters ...
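The parallel sizes reported above factorize the world size: 8 GPUs = tensor-model-parallel 2 × pipeline 1 × context-parallel 4 × data-parallel 1. A sketch of that arithmetic (the helper name is illustrative):

```python
def infer_data_parallel_size(world_size: int, tp: int, pp: int, cp: int) -> int:
    """Data-parallel size is whatever remains of the world size after
    tensor-, pipeline-, and context-parallel groups are carved out."""
    denom = tp * pp * cp
    assert world_size % denom == 0, "world size must be divisible by tp*pp*cp"
    return world_size // denom
```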
11798
+ ------------------------ arguments ------------------------
11799
+ account_for_embedding_in_pipeline_split ......... False
11800
+ account_for_loss_in_pipeline_split .............. False
11801
+ accumulate_allreduce_grads_in_fp32 .............. False
11802
+ adam_beta1 ...................................... 0.9
11803
+ adam_beta2 ...................................... 0.999
11804
+ adam_eps ........................................ 1e-08
11805
+ add_bias_linear ................................. True
11806
+ add_position_embedding .......................... True
11807
+ add_qkv_bias .................................... True
11808
+ adlr_autoresume ................................. False
11809
+ adlr_autoresume_interval ........................ 1000
11810
+ align_grad_reduce ............................... True
11811
+ align_param_gather .............................. False
11812
+ app_tag_run_name ................................ None
11813
+ app_tag_run_version ............................. 0.0.0
11814
+ apply_layernorm_1p .............................. False
11815
+ apply_query_key_layer_scaling ................... False
11816
+ apply_residual_connection_post_layernorm ........ False
11817
+ apply_rope_fusion ............................... False
11818
+ async_save ...................................... None
11819
+ async_tensor_model_parallel_allreduce ........... True
11820
+ attention_backend ............................... AttnBackend.auto
11821
+ attention_dropout ............................... 0.1
11822
+ attention_softmax_in_fp32 ....................... False
11823
+ auto_detect_ckpt_format ......................... False
11824
+ barrier_with_L1_time ............................ True
11825
+ bert_binary_head ................................ True
11826
+ bert_embedder_type .............................. megatron
11827
+ bert_load ....................................... None
11828
+ bf16 ............................................ False
11829
+ bias_dropout_fusion ............................. True
11830
+ bias_gelu_fusion ................................ True
11831
+ bias_swiglu_fusion .............................. True
11832
+ biencoder_projection_dim ........................ 0
11833
+ biencoder_shared_query_context_model ............ False
11834
+ block_data_path ................................. None
11835
+ calc_ft_timeouts ................................ False
11836
+ calculate_per_token_loss ........................ False
11837
+ check_for_large_grads ........................... False
11838
+ check_for_nan_in_loss_and_grad .................. False
11839
+ check_for_spiky_loss ............................ False
11840
+ check_weight_hash_across_dp_replicas_interval ... None
11841
+ ckpt_assume_constant_structure .................. False
11842
+ ckpt_convert_format ............................. None
11843
+ ckpt_convert_save ............................... None
11844
+ ckpt_convert_update_legacy_dist_opt_format ...... False
11845
+ ckpt_format ..................................... torch_dist
11846
+ ckpt_fully_parallel_load ........................ False
11847
+ ckpt_fully_parallel_save ........................ True
11848
+ ckpt_fully_parallel_save_deprecated ............. False
11849
+ ckpt_step ....................................... None
11850
+ classes_fraction ................................ 1.0
11851
+ clip_grad ....................................... 1.0
11852
+ clone_scatter_output_in_embedding ............... True
11853
+ config_logger_dir ...............................
11854
+ consumed_train_samples .......................... 0
11855
+ consumed_valid_samples .......................... 0
11856
+ context_parallel_size ........................... 4
11857
+ cp_comm_type .................................... ['p2p']
11858
+ create_attention_mask_in_dataloader ............. True
11859
+ cross_entropy_fusion_impl ....................... native
11860
+ cross_entropy_loss_fusion ....................... False
11861
+ cuda_graph_scope ................................ full
11862
+ cuda_graph_warmup_steps ......................... 3
11863
+ data_args_path .................................. None
11864
+ data_cache_path ................................. None
11865
+ data_parallel_random_init ....................... False
11866
+ data_parallel_sharding_strategy ................. no_shard
11867
+ data_parallel_size .............................. 1
11868
+ data_path ....................................... None
11869
+ data_per_class_fraction ......................... 1.0
11870
+ data_sharding ................................... True
11871
+ dataloader_type ................................. single
11872
+ ddp_average_in_collective ....................... False
11873
+ ddp_bucket_size ................................. None
11874
+ ddp_num_buckets ................................. None
11875
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
11876
+ decoder_first_pipeline_num_layers ............... None
11877
+ decoder_last_pipeline_num_layers ................ None
11878
+ decoder_num_layers .............................. None
11879
+ decoder_seq_length .............................. None
11880
+ decoupled_lr .................................... None
11881
+ decoupled_min_lr ................................ None
11882
+ decrease_batch_size_if_needed ................... False
11883
+ defer_embedding_wgrad_compute ................... False
11884
+ deprecated_use_mcore_models ..................... False
11885
+ deterministic_mode .............................. False
11886
+ dino_bottleneck_size ............................ 256
11887
+ dino_freeze_last_layer .......................... 1
11888
+ dino_head_hidden_size ........................... 2048
11889
+ dino_local_crops_number ......................... 10
11890
+ dino_local_img_size ............................. 96
11891
+ dino_norm_last_layer ............................ False
11892
+ dino_teacher_temp ............................... 0.07
11893
+ dino_warmup_teacher_temp ........................ 0.04
11894
+ dino_warmup_teacher_temp_epochs ................. 30
11895
+ disable_bf16_reduced_precision_matmul ........... False
11896
+ disable_mamba_mem_eff_path ...................... False
11897
+ disable_straggler_on_startup .................... False
11898
+ dist_ckpt_format_deprecated ..................... None
11899
+ dist_ckpt_strictness ............................ assume_ok_unexpected
11900
+ distribute_saved_activations .................... False
11901
+ distributed_backend ............................. nccl
11902
+ distributed_timeout_minutes ..................... 10
11903
+ embedding_path .................................. None
11904
+ empty_unused_memory_level ....................... 0
11905
+ enable_cuda_graph ............................... False
11906
+ enable_ft_package ............................... False
11907
+ enable_gloo_process_groups ...................... True
11908
+ enable_msc ...................................... True
11909
+ enable_one_logger ............................... True
11910
+ encoder_num_layers .............................. 2
11911
+ encoder_pipeline_model_parallel_size ............ 0
11912
+ encoder_seq_length .............................. 40960
11913
+ encoder_tensor_model_parallel_size .............. 0
11914
+ end_weight_decay ................................ 0.1
11915
+ eod_mask_loss ................................... False
11916
+ error_injection_rate ............................ 0
11917
+ error_injection_type ............................ transient_error
11918
+ eval_interval ................................... 16
11919
+ eval_iters ...................................... 1
11920
+ evidence_data_path .............................. None
11921
+ exit_duration_in_mins ........................... None
11922
+ exit_interval ................................... None
11923
+ exit_on_missing_checkpoint ...................... False
11924
+ exit_signal_handler ............................. False
11925
+ exp_avg_dtype ................................... torch.float32
11926
+ exp_avg_sq_dtype ................................ torch.float32
11927
+ expert_model_parallel_size ...................... 1
11928
+ expert_tensor_parallel_size ..................... 2
11929
+ external_cuda_graph ............................. False
11930
+ ffn_hidden_size ................................. 16384
11931
+ finetune ........................................ False
11932
+ first_last_layers_bf16 .......................... False
11933
+ flash_decode .................................... False
11934
+ fp16 ............................................ True
11935
+ fp16_lm_cross_entropy ........................... False
11936
+ fp32_residual_connection ........................ False
11937
+ fp8 ............................................. None
11938
+ fp8_amax_compute_algo ........................... most_recent
11939
+ fp8_amax_history_len ............................ 1
11940
+ fp8_interval .................................... 1
11941
+ fp8_margin ...................................... 0
11942
+ fp8_param_gather ................................ False
11943
+ fp8_recipe ...................................... delayed
11944
+ fp8_wgrad ....................................... True
11945
fsdp_double_buffer .............................. False
global_batch_size ............................... 1
grad_reduce_in_bf16 ............................. False
gradient_accumulation_fusion .................... True
gradient_reduce_div_fusion ...................... True
group_query_attention ........................... True
head_lr_mult .................................... 1.0
heterogeneous_layers_config_encoded_json ........ None
heterogeneous_layers_config_path ................ None
hidden_dropout .................................. 0.1
hidden_size ..................................... 4096
hierarchical_context_parallel_sizes ............. None
high_priority_stream_groups ..................... []
hybrid_attention_ratio .......................... 0.0
hybrid_mlp_ratio ................................ 0.0
hybrid_override_pattern ......................... None
hysteresis ...................................... 2
ict_head_size ................................... None
ict_load ........................................ None
img_h ........................................... 224
img_w ........................................... 224
indexer_batch_size .............................. 128
indexer_log_interval ............................ 1000
inference_batch_times_seqlen_threshold .......... -1
inference_dynamic_batching ...................... False
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
inference_dynamic_batching_buffer_overflow_factor None
inference_dynamic_batching_buffer_size_gb ....... 40.0
inference_dynamic_batching_chunk_size ........... 256
inference_dynamic_batching_max_requests_override None
inference_dynamic_batching_max_tokens_override .. None
inference_max_batch_size ........................ 8
inference_max_seq_length ........................ 2560
inference_rng_tracker ........................... False
init_method_std ................................. 0.02
init_method_xavier_uniform ...................... False
init_model_with_meta_device ..................... False
initial_loss_scale .............................. 4294967296
inprocess_active_world_size ..................... 8
inprocess_barrier_timeout ....................... 120
inprocess_completion_timeout .................... 120
inprocess_empty_cuda_cache ...................... False
inprocess_granularity ........................... node
inprocess_hard_timeout .......................... 90
inprocess_heartbeat_interval .................... 30
inprocess_heartbeat_timeout ..................... 60
inprocess_last_call_wait ........................ 1
inprocess_max_iterations ........................ None
inprocess_monitor_process_interval .............. 1.0
inprocess_monitor_thread_interval ............... 1.0
inprocess_progress_watchdog_interval ............ 1.0
inprocess_restart ............................... False
inprocess_soft_timeout .......................... 60
inprocess_termination_grace_time ................ 1
is_hybrid_model ................................. False
iter_per_epoch .................................. 1250
iterations_to_skip .............................. []
keep_fp8_transpose_cache_when_using_custom_fsdp . False
kv_channels ..................................... 64
kv_lora_rank .................................... 32
lazy_mpu_init ................................... None
load ............................................ gpt-checkpoint
load_model_opt_format ........................... False
local_rank ...................................... 0
log_interval .................................... 1
log_loss_scale_to_tensorboard ................... True
log_memory_to_tensorboard ....................... False
log_num_zeros_in_grad ........................... False
log_params_norm ................................. False
log_progress .................................... False
log_straggler ................................... False
log_throughput .................................. False
log_timers_to_tensorboard ....................... False
log_validation_ppl_to_tensorboard ............... False
log_world_size_to_tensorboard ................... False
logging_level ................................... 0
loss_scale ...................................... None
loss_scale_window ............................... 1000
lr .............................................. 0.0005
lr_decay_iters .................................. 150000
lr_decay_samples ................................ None
lr_decay_style .................................. cosine
lr_warmup_fraction .............................. None
lr_warmup_init .................................. 0.0
lr_warmup_iters ................................. 2
lr_warmup_samples ............................... 0
lr_wsd_decay_iters .............................. None
lr_wsd_decay_samples ............................ None
lr_wsd_decay_style .............................. exponential
main_grads_dtype ................................ torch.float32
main_params_dtype ............................... torch.float32
make_vocab_size_divisible_by .................... 128
mamba_head_dim .................................. 64
mamba_num_groups ................................ 8
mamba_num_heads ................................. None
mamba_state_dim ................................. 128
manual_gc ....................................... False
manual_gc_eval .................................. True
manual_gc_interval .............................. 0
mask_factor ..................................... 1.0
mask_prob ....................................... 0.15
mask_type ....................................... random
masked_softmax_fusion ........................... True
max_position_embeddings ......................... 40960
max_tokens_to_oom ............................... 12000
memory_snapshot_path ............................ snapshot.pickle
merge_file ...................................... merges.txt
micro_batch_size ................................ 1
microbatch_group_size_per_vp_stage .............. None
mid_level_dataset_surplus ....................... 0.005
min_loss_scale .................................. 1.0
min_lr .......................................... 0.0
mlp_chunks_for_prefill .......................... 1
mmap_bin_files .................................. True
mock_data ....................................... True
moe_apply_probs_on_input ........................ False
moe_aux_loss_coeff .............................. 0.0
moe_enable_deepep ............................... False
moe_expert_capacity_factor ...................... None
moe_extended_tp ................................. False
moe_ffn_hidden_size ............................. None
moe_grouped_gemm ................................ False
moe_input_jitter_eps ............................ None
moe_layer_freq .................................. 1
moe_layer_recompute ............................. False
moe_pad_expert_input_to_capacity ................ False
moe_per_layer_logging ........................... False
moe_permute_fusion .............................. False
moe_router_bias_update_rate ..................... 0.001
moe_router_dtype ................................ None
moe_router_enable_expert_bias ................... False
moe_router_force_load_balancing ................. False
moe_router_group_topk ........................... None
moe_router_load_balancing_type .................. aux_loss
moe_router_num_groups ........................... None
moe_router_padding_for_fp8 ...................... False
moe_router_pre_softmax .......................... False
moe_router_score_function ....................... softmax
moe_router_topk ................................. 2
moe_router_topk_scaling_factor .................. None
moe_shared_expert_intermediate_size ............. None
moe_shared_expert_overlap ....................... False
moe_token_dispatcher_type ....................... allgather
moe_token_drop_policy ........................... probs
moe_use_legacy_grouped_gemm ..................... False
moe_use_upcycling ............................... False
moe_z_loss_coeff ................................ None
mrope_section ................................... None
mscale .......................................... 1.0
mscale_all_dim .................................. 1.0
mtp_loss_scaling_factor ......................... 0.1
mtp_num_layers .................................. None
multi_latent_attention .......................... False
nccl_all_reduce_for_prefill ..................... False
nccl_communicator_config_path ................... None
nccl_ub ......................................... False
no_load_optim ................................... None
no_load_rng ..................................... None
no_persist_layer_norm ........................... False
no_rope_freq .................................... None
no_save_optim ................................... None
no_save_rng ..................................... None
non_persistent_ckpt_type ........................ None
non_persistent_global_ckpt_dir .................. None
non_persistent_local_ckpt_algo .................. fully_parallel
non_persistent_local_ckpt_dir ................... None
non_persistent_save_interval .................... None
norm_epsilon .................................... 1e-05
normalization ................................... LayerNorm
num_attention_heads ............................. 64
num_channels .................................... 3
num_classes ..................................... 1000
num_dataset_builder_threads ..................... 1
num_distributed_optimizer_instances ............. 1
num_experts ..................................... None
num_layers ...................................... 2
num_layers_at_end_in_bf16 ....................... 1
num_layers_at_start_in_bf16 ..................... 1
num_layers_per_virtual_pipeline_stage ........... None
num_query_groups ................................ 16
num_virtual_stages_per_pipeline_rank ............ None
num_workers ..................................... 2
object_storage_cache_path ....................... None
one_logger_async ................................ False
one_logger_project .............................. megatron-lm
one_logger_run_name ............................. None
onnx_safe ....................................... None
openai_gelu ..................................... False
optimizer ....................................... adam
optimizer_cpu_offload ........................... False
optimizer_offload_fraction ...................... 1.0
output_bert_embeddings .......................... False
overlap_cpu_optimizer_d2h_h2d ................... False
overlap_grad_reduce ............................. False
overlap_p2p_comm ................................ False
overlap_p2p_comm_warmup_flush ................... False
overlap_param_gather ............................ False
overlap_param_gather_with_optimizer_step ........ False
override_opt_param_scheduler .................... False
params_dtype .................................... torch.float16
patch_dim ....................................... 16
per_split_data_args_path ........................ None
perform_initialization .......................... True
pin_cpu_grads ................................... True
pin_cpu_params .................................. True
pipeline_model_parallel_comm_backend ............ None
pipeline_model_parallel_size .................... 1
pipeline_model_parallel_split_rank .............. None
position_embedding_type ......................... learned_absolute
pretrained_checkpoint ........................... None
profile ......................................... False
profile_ranks ................................... [0]
profile_step_end ................................ 12
profile_step_start .............................. 10
q_lora_rank ..................................... None
qk_head_dim ..................................... 128
qk_l2_norm ...................................... False
qk_layernorm .................................... False
qk_pos_emb_head_dim ............................. 64
query_in_block_prob ............................. 0.1
rampup_batch_size ............................... None
rank ............................................ 0
recompute_granularity ........................... None
recompute_method ................................ None
recompute_modules ............................... None
recompute_num_layers ............................ None
record_memory_history ........................... False
relative_attention_max_distance ................. 128
relative_attention_num_buckets .................. 32
replication ..................................... False
replication_factor .............................. 2
replication_jump ................................ None
rerun_mode ...................................... disabled
reset_attention_mask ............................ False
reset_position_ids .............................. False
result_rejected_tracker_filename ................ None
retriever_report_topk_accuracies ................ []
retriever_score_scaling ......................... False
retriever_seq_length ............................ 256
retro_add_retriever ............................. False
retro_attention_gate ............................ 1
retro_cyclic_train_iters ........................ None
retro_encoder_attention_dropout ................. 0.1
retro_encoder_hidden_dropout .................... 0.1
retro_encoder_layers ............................ 2
retro_num_neighbors ............................. 2
retro_num_retrieved_chunks ...................... 2
retro_project_dir ............................... None
retro_verify_neighbor_count ..................... True
rope_scaling_factor ............................. 8.0
rotary_base ..................................... 10000
rotary_interleaved .............................. False
rotary_percent .................................. 1.0
rotary_scaling_factor ........................... 1.0
rotary_seq_len_interpolation_factor ............. None
run_workload_inspector_server ................... False
sample_rate ..................................... 1.0
save ............................................ gpt-checkpoint
save_interval ................................... 16
scatter_gather_tensors_in_pipeline .............. True
seed ............................................ 1234
seq_length ...................................... 40960
sequence_parallel ............................... False
sgd_momentum .................................... 0.9
short_seq_prob .................................. 0.1
skip_train ...................................... False
skipped_train_samples ........................... 0
spec ............................................ None
split ........................................... None
squared_relu .................................... False
start_weight_decay .............................. 0.1
straggler_ctrlr_port ............................ 65535
straggler_minmax_count .......................... 1
suggested_communication_unit_size ............... None
swiglu .......................................... False
swin_backbone_type .............................. tiny
symmetric_ar_type ............................... None
te_rng_tracker .................................. False
tensor_model_parallel_size ...................... 2
tensorboard_dir ................................. tensorboard-logs/
tensorboard_log_interval ........................ 1
tensorboard_queue_size .......................... 1000
test_data_path .................................. None
test_mode ....................................... False
tiktoken_num_special_tokens ..................... 1000
tiktoken_pattern ................................ None
tiktoken_special_tokens ......................... None
timing_log_level ................................ 0
timing_log_option ............................... minmax
titles_data_path ................................ None
tokenizer_model ................................. None
tokenizer_type .................................. GPT2BPETokenizer
torch_fsdp2_reshard_after_forward ............... True
tp_comm_bootstrap_backend ....................... nccl
tp_comm_bulk_dgrad .............................. True
tp_comm_bulk_wgrad .............................. True
tp_comm_overlap ................................. False
tp_comm_overlap_ag .............................. True
tp_comm_overlap_cfg ............................. None
tp_comm_overlap_rs .............................. True
tp_comm_overlap_rs_dgrad ........................ False
tp_comm_split_ag ................................ True
tp_comm_split_rs ................................ True
train_data_path ................................. None
train_iters ..................................... 10
train_samples ................................... None
train_sync_interval ............................. None
transformer_impl ................................ transformer_engine
transformer_pipeline_model_parallel_size ........ 1
untie_embeddings_and_output_weights ............. False
use_checkpoint_args ............................. False
use_checkpoint_opt_param_scheduler .............. False
use_cpu_initialization .......................... None
use_custom_fsdp ................................. False
use_dist_ckpt ................................... True
use_dist_ckpt_deprecated ........................ False
use_distributed_optimizer ....................... False
use_flash_attn .................................. False
use_legacy_models ............................... False
use_mp_args_from_checkpoint_args ................ False
use_one_sent_docs ............................... False
use_persistent_ckpt_worker ...................... False
use_precision_aware_optimizer ................... False
use_pytorch_profiler ............................ False
use_ring_exchange_p2p ........................... False
use_rope_scaling ................................ False
use_rotary_position_embeddings .................. False
use_sharp ....................................... False
use_tokenizer_model_from_checkpoint_args ........ True
use_torch_fsdp2 ................................. False
use_torch_optimizer_for_cpu_offload ............. False
use_tp_pp_dp_mapping ............................ False
v_head_dim ...................................... 128
valid_data_path ................................. None
variable_seq_lengths ............................ False
virtual_pipeline_model_parallel_size ............ None
vision_backbone_type ............................ vit
vision_pretraining .............................. False
vision_pretraining_type ......................... classify
vocab_extra_ids ................................. 0
vocab_file ...................................... vocab.json
vocab_size ...................................... None
wandb_exp_name ..................................
wandb_project ...................................
wandb_save_dir ..................................
weight_decay .................................... 0.1
weight_decay_incr_style ......................... constant
wgrad_deferral_limit ............................ 0
world_size ...................................... 8
yaml_cfg ........................................ None
-------------------- end of arguments ---------------------
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
> building GPT2BPETokenizer tokenizer ...
 > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
INFO:megatron.training.initialize:Setting logging level to 0
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
> initializing torch distributed ...
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> initialized tensor model parallel with size 2
> initialized pipeline model parallel with size 1
> setting random seeds to 1234 ...
> compiling dataset index builder ...
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
make: Nothing to be done for 'default'.
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.044 seconds
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
> compiling and loading fused kernels ...
>>> done with compiling and loading fused kernels. Compilation time: 2.108 seconds
time to initialize megatron (seconds): 7.123
[after megatron is initialized] datetime: 2025-06-21 21:58:58
building GPT model ...
>>> embedding
>>> decoder
>>> output_layer
 > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
 > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
 > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
 > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
 > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
 > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 447297536
>>> embedding
>>> decoder
>>> output_layer
 > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
Params for bucket 1 (447297536 elements, 447297536 padded size):
	module.decoder.final_layernorm.bias
	module.decoder.layers.1.mlp.linear_fc2.bias
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
	module.embedding.position_embeddings.weight
	module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
	module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
	module.decoder.final_layernorm.weight
	module.decoder.layers.1.mlp.linear_fc1.bias
	module.decoder.layers.0.mlp.linear_fc1.bias
	module.decoder.layers.1.self_attention.linear_qkv.weight
	module.decoder.layers.1.self_attention.linear_proj.weight
	module.decoder.layers.0.self_attention.linear_qkv.weight
	module.decoder.layers.0.self_attention.linear_proj.weight
	module.decoder.layers.1.mlp.linear_fc2.weight
	module.decoder.layers.1.self_attention.linear_proj.bias
	module.decoder.layers.0.self_attention.linear_proj.bias
	module.embedding.word_embeddings.weight
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.0.mlp.linear_fc2.weight
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
	module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.1.self_attention.linear_qkv.bias
	module.decoder.layers.0.mlp.linear_fc2.bias
	module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
	module.decoder.layers.0.self_attention.linear_qkv.bias
	module.decoder.layers.1.mlp.linear_fc1.weight
	module.decoder.layers.0.mlp.linear_fc1.weight
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14a0fc21b260>, config_logger_dir='')
>>> embedding
>>> decoder
>>> output_layer
 > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 447297536
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
    will not load any checkpoints and will start from random
(min, max) time across ranks (ms):
    load-checkpoint ................................: (2.95, 3.61)
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:59:00
> building train, validation, and test datasets ...
 > datasets target sizes (minimum size):
    train:      10
    validation: 1
    test:       1
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
> building train, validation, and test datasets for GPT ...
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=40960, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14a0fc807440>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:> time elapsed: 0.005345 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1664
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:> time elapsed: 0.001752 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1664
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset:> time elapsed: 0.001523 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1667
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
> finished creating GPT datasets ...
[after dataloaders are built] datetime: 2025-06-21 21:59:00
done with setup ...
(min, max) time across ranks (ms):
    model-and-optimizer-setup ......................: (1993.56, 1994.68)
    train/valid/test-data-iterators-setup ..........: (29.22, 179.62)
training ...
Setting rerun_state_machine.current_iteration to 0...
[before the start of training step] datetime: 2025-06-21 21:59:00
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
+ batch tensor after cp: tokens torch.Size([2, 20480])
12446
+ batch tensor after cp: labels torch.Size([2, 20480])
12447
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
12448
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
12449
+ batch tensor after cp: position_ids torch.Size([2, 20480])
12450
+ batch tensor: tokens torch.Size([2, 81920])
12451
+ batch tensor: labels torch.Size([2, 81920])
12452
+ batch tensor: loss_mask torch.Size([2, 81920])
12453
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
12454
+ batch tensor: position_ids torch.Size([2, 81920])
12455
+ batch tensor: tokens torch.Size([2, 81920])
12456
+ batch tensor: labels torch.Size([2, 81920])
12457
+ batch tensor: loss_mask torch.Size([2, 81920])
12458
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
12459
+ batch tensor: position_ids torch.Size([2, 81920])
12460
+ batch tensor after cp: tokens torch.Size([2, 20480])
12461
+ batch tensor after cp: labels torch.Size([2, 20480])
12462
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
12463
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
12464
+ batch tensor after cp: position_ids torch.Size([2, 20480])
12465
+ batch tensor: tokens torch.Size([2, 81920])
12466
+ batch tensor: labels torch.Size([2, 81920])
12467
+ batch tensor: loss_mask torch.Size([2, 81920])
12468
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
12469
+ batch tensor: position_ids torch.Size([2, 81920])
12470
+ batch tensor: tokens torch.Size([2, 81920])
12471
+ batch tensor: labels torch.Size([2, 81920])
12472
+ batch tensor: loss_mask torch.Size([2, 81920])
12473
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
12474
+ batch tensor: position_ids torch.Size([2, 81920])
12475
+ batch tensor after cp: tokens torch.Size([2, 20480])
12476
+ batch tensor after cp: labels torch.Size([2, 20480])
12477
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
12478
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
12479
+ batch tensor after cp: position_ids torch.Size([2, 20480])
12480
+ batch tensor after cp: tokens torch.Size([2, 20480])
12481
+ batch tensor after cp: labels torch.Size([2, 20480])
12482
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
12483
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
12484
+ batch tensor after cp: position_ids torch.Size([2, 20480])
12485
+ batch tensor after cp: tokens torch.Size([2, 20480])
12486
+ batch tensor after cp: labels torch.Size([2, 20480])
12487
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
12488
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
12489
+ batch tensor after cp: position_ids torch.Size([2, 20480])
12490
+ batch tensor: tokens torch.Size([2, 81920])
12491
+ batch tensor: labels torch.Size([2, 81920])
12492
+ batch tensor: loss_mask torch.Size([2, 81920])
12493
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
12494
+ batch tensor: position_ids torch.Size([2, 81920])
12495
+ batch tensor: tokens torch.Size([2, 81920])
12496
+ batch tensor: labels torch.Size([2, 81920])
12497
+ batch tensor: loss_mask torch.Size([2, 81920])
12498
+ batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
12499
+ batch tensor: position_ids torch.Size([2, 81920])
12500
+ batch tensor after cp: tokens torch.Size([2, 20480])
12501
+ batch tensor after cp: labels torch.Size([2, 20480])
12502
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
12503
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
12504
+ batch tensor after cp: position_ids torch.Size([2, 20480])
12505
+ batch tensor after cp: tokens torch.Size([2, 20480])
12506
+ batch tensor after cp: labels torch.Size([2, 20480])
12507
+ batch tensor after cp: loss_mask torch.Size([2, 20480])
12508
+ batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
12509
+ batch tensor after cp: position_ids torch.Size([2, 20480])
12510
+ Start exporting trace 0
12511
+ Done exporting trace 0
attnserver.run_attnserver.slurm.sh.343245.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343245.out.log CHANGED
@@ -9564,3 +9564,899 @@ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/mega
  >>> done with dataset index builder. Compilation time: 0.043 seconds
  > compiling and loading fused kernels ...
  >>> done with compiling and loading fused kernels. Compilation time: 2.218 seconds
+ time to initialize megatron (seconds): 8.074
+ [after megatron is initialized] datetime: 2025-06-21 21:58:17
+ building GPT model ...
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 346634240
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 346634240
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
+ Params for bucket 1 (346634240 elements, 346634240 padded size):
+ module.decoder.final_layernorm.weight
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.mlp.linear_fc2.weight
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.bias
+ module.decoder.layers.0.mlp.linear_fc2.bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.bias
+ module.decoder.layers.1.mlp.linear_fc1.weight
+ module.decoder.layers.0.mlp.linear_fc1.weight
+ module.decoder.layers.1.mlp.linear_fc2.bias
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.1.mlp.linear_fc1.bias
+ module.decoder.layers.0.mlp.linear_fc1.bias
+ module.decoder.final_layernorm.bias
+ module.decoder.layers.1.self_attention.linear_qkv.weight
+ module.decoder.layers.1.self_attention.linear_proj.weight
+ module.decoder.layers.0.self_attention.linear_qkv.weight
+ module.decoder.layers.0.self_attention.linear_proj.weight
+ module.embedding.position_embeddings.weight
+ module.embedding.word_embeddings.weight
+ module.decoder.layers.1.mlp.linear_fc2.weight
+ module.decoder.layers.1.self_attention.linear_proj.bias
+ module.decoder.layers.0.self_attention.linear_proj.bias
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x145396736360>, config_logger_dir='')
+ INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
+ loading distributed checkpoint from gpt-checkpoint at iteration 10
+ Running ctx_length=24576, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=4
+ Cleaning up checkpoint directory: gpt-checkpoint
+ --------------------------------
+ CTX_LENGTH: 24576
+ TP_SIZE: 2
+ CP_SIZE: 4
+ CHECKPOINT_PATH: gpt-checkpoint
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+ --------------------------------
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+ INFO:megatron.training.initialize:Setting logging level to 0
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
+ INFO:megatron.training.initialize:Setting logging level to 0
+ using world size: 8, data-parallel size: 1, context-parallel size: 4, hierarchical context-parallel sizes: None, tensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
+ Number of virtual stages per pipeline stage: None
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
+ using torch.float16 for parameters ...
+ ------------------------ arguments ------------------------
+ account_for_embedding_in_pipeline_split ......... False
+ account_for_loss_in_pipeline_split .............. False
+ accumulate_allreduce_grads_in_fp32 .............. False
+ adam_beta1 ...................................... 0.9
+ adam_beta2 ...................................... 0.999
+ adam_eps ........................................ 1e-08
+ add_bias_linear ................................. True
+ add_position_embedding .......................... True
+ add_qkv_bias .................................... True
+ adlr_autoresume ................................. False
+ adlr_autoresume_interval ........................ 1000
+ align_grad_reduce ............................... True
+ align_param_gather .............................. False
+ app_tag_run_name ................................ None
+ app_tag_run_version ............................. 0.0.0
+ apply_layernorm_1p .............................. False
+ apply_query_key_layer_scaling ................... False
+ apply_residual_connection_post_layernorm ........ False
+ apply_rope_fusion ............................... False
+ async_save ...................................... None
+ async_tensor_model_parallel_allreduce ........... True
+ attention_backend ............................... AttnBackend.auto
+ attention_dropout ............................... 0.1
+ attention_softmax_in_fp32 ....................... False
+ auto_detect_ckpt_format ......................... False
+ barrier_with_L1_time ............................ True
+ bert_binary_head ................................ True
+ bert_embedder_type .............................. megatron
+ bert_load ....................................... None
+ bf16 ............................................ False
+ bias_dropout_fusion ............................. True
+ bias_gelu_fusion ................................ True
+ bias_swiglu_fusion .............................. True
+ biencoder_projection_dim ........................ 0
+ biencoder_shared_query_context_model ............ False
+ block_data_path ................................. None
+ calc_ft_timeouts ................................ False
+ calculate_per_token_loss ........................ False
+ check_for_large_grads ........................... False
+ check_for_nan_in_loss_and_grad .................. False
+ check_for_spiky_loss ............................ False
+ check_weight_hash_across_dp_replicas_interval ... None
+ ckpt_assume_constant_structure .................. False
+ ckpt_convert_format ............................. None
+ ckpt_convert_save ............................... None
+ ckpt_convert_update_legacy_dist_opt_format ...... False
+ ckpt_format ..................................... torch_dist
+ ckpt_fully_parallel_load ........................ False
+ ckpt_fully_parallel_save ........................ True
+ ckpt_fully_parallel_save_deprecated ............. False
+ ckpt_step ....................................... None
+ classes_fraction ................................ 1.0
+ clip_grad ....................................... 1.0
+ clone_scatter_output_in_embedding ............... True
+ config_logger_dir ...............................
+ consumed_train_samples .......................... 0
+ consumed_valid_samples .......................... 0
+ context_parallel_size ........................... 4
+ cp_comm_type .................................... ['p2p']
+ create_attention_mask_in_dataloader ............. True
+ cross_entropy_fusion_impl ....................... native
+ cross_entropy_loss_fusion ....................... False
+ cuda_graph_scope ................................ full
+ cuda_graph_warmup_steps ......................... 3
+ data_args_path .................................. None
+ data_cache_path ................................. None
+ data_parallel_random_init ....................... False
+ data_parallel_sharding_strategy ................. no_shard
+ data_parallel_size .............................. 1
+ data_path ....................................... None
+ data_per_class_fraction ......................... 1.0
+ data_sharding ................................... True
+ dataloader_type ................................. single
+ ddp_average_in_collective ....................... False
+ ddp_bucket_size ................................. None
+ ddp_num_buckets ................................. None
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
+ decoder_first_pipeline_num_layers ............... None
+ decoder_last_pipeline_num_layers ................ None
+ decoder_num_layers .............................. None
+ decoder_seq_length .............................. None
+ decoupled_lr .................................... None
+ decoupled_min_lr ................................ None
+ decrease_batch_size_if_needed ................... False
+ defer_embedding_wgrad_compute ................... False
+ deprecated_use_mcore_models ..................... False
+ deterministic_mode .............................. False
+ dino_bottleneck_size ............................ 256
+ dino_freeze_last_layer .......................... 1
+ dino_head_hidden_size ........................... 2048
+ dino_local_crops_number ......................... 10
+ dino_local_img_size ............................. 96
+ dino_norm_last_layer ............................ False
+ dino_teacher_temp ............................... 0.07
+ dino_warmup_teacher_temp ........................ 0.04
+ dino_warmup_teacher_temp_epochs ................. 30
+ disable_bf16_reduced_precision_matmul ........... False
+ disable_mamba_mem_eff_path ...................... False
+ disable_straggler_on_startup .................... False
+ dist_ckpt_format_deprecated ..................... None
+ dist_ckpt_strictness ............................ assume_ok_unexpected
+ distribute_saved_activations .................... False
+ distributed_backend ............................. nccl
+ distributed_timeout_minutes ..................... 10
+ embedding_path .................................. None
+ empty_unused_memory_level ....................... 0
+ enable_cuda_graph ............................... False
+ enable_ft_package ............................... False
+ enable_gloo_process_groups ...................... True
+ enable_msc ...................................... True
+ enable_one_logger ............................... True
+ encoder_num_layers .............................. 2
+ encoder_pipeline_model_parallel_size ............ 0
+ encoder_seq_length .............................. 24576
+ encoder_tensor_model_parallel_size .............. 0
+ end_weight_decay ................................ 0.1
+ eod_mask_loss ................................... False
+ error_injection_rate ............................ 0
+ error_injection_type ............................ transient_error
+ eval_interval ................................... 16
+ eval_iters ...................................... 1
+ evidence_data_path .............................. None
+ exit_duration_in_mins ........................... None
+ exit_interval ................................... None
+ exit_on_missing_checkpoint ...................... False
+ exit_signal_handler ............................. False
+ exp_avg_dtype ................................... torch.float32
+ exp_avg_sq_dtype ................................ torch.float32
+ expert_model_parallel_size ...................... 1
+ expert_tensor_parallel_size ..................... 2
+ external_cuda_graph ............................. False
+ ffn_hidden_size ................................. 16384
+ finetune ........................................ False
+ first_last_layers_bf16 .......................... False
+ flash_decode .................................... False
+ fp16 ............................................ True
+ fp16_lm_cross_entropy ........................... False
+ fp32_residual_connection ........................ False
+ fp8 ............................................. None
+ fp8_amax_compute_algo ........................... most_recent
+ fp8_amax_history_len ............................ 1
+ fp8_interval .................................... 1
+ fp8_margin ...................................... 0
+ fp8_param_gather ................................ False
+ fp8_recipe ...................................... delayed
+ fp8_wgrad ....................................... True
+ fsdp_double_buffer .............................. False
+ global_batch_size ............................... 1
+ grad_reduce_in_bf16 ............................. False
+ gradient_accumulation_fusion .................... True
+ gradient_reduce_div_fusion ...................... True
+ group_query_attention ........................... True
+ head_lr_mult .................................... 1.0
+ heterogeneous_layers_config_encoded_json ........ None
+ heterogeneous_layers_config_path ................ None
+ hidden_dropout .................................. 0.1
+ hidden_size ..................................... 4096
+ hierarchical_context_parallel_sizes ............. None
+ high_priority_stream_groups ..................... []
+ hybrid_attention_ratio .......................... 0.0
+ hybrid_mlp_ratio ................................ 0.0
+ hybrid_override_pattern ......................... None
+ hysteresis ...................................... 2
+ ict_head_size ................................... None
+ ict_load ........................................ None
+ img_h ........................................... 224
+ img_w ........................................... 224
+ indexer_batch_size .............................. 128
+ indexer_log_interval ............................ 1000
+ inference_batch_times_seqlen_threshold .......... -1
+ inference_dynamic_batching ...................... False
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
+ inference_dynamic_batching_buffer_overflow_factor None
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
+ inference_dynamic_batching_chunk_size ........... 256
+ inference_dynamic_batching_max_requests_override None
+ inference_dynamic_batching_max_tokens_override .. None
+ inference_max_batch_size ........................ 8
+ inference_max_seq_length ........................ 2560
+ inference_rng_tracker ........................... False
+ init_method_std ................................. 0.02
+ init_method_xavier_uniform ...................... False
+ init_model_with_meta_device ..................... False
+ initial_loss_scale .............................. 4294967296
+ inprocess_active_world_size ..................... 8
+ inprocess_barrier_timeout ....................... 120
+ inprocess_completion_timeout .................... 120
+ inprocess_empty_cuda_cache ...................... False
+ inprocess_granularity ........................... node
+ inprocess_hard_timeout .......................... 90
+ inprocess_heartbeat_interval .................... 30
+ inprocess_heartbeat_timeout ..................... 60
+ inprocess_last_call_wait ........................ 1
+ inprocess_max_iterations ........................ None
+ inprocess_monitor_process_interval .............. 1.0
+ inprocess_monitor_thread_interval ............... 1.0
+ inprocess_progress_watchdog_interval ............ 1.0
+ inprocess_restart ............................... False
+ inprocess_soft_timeout .......................... 60
+ inprocess_termination_grace_time ................ 1
+ is_hybrid_model ................................. False
+ iter_per_epoch .................................. 1250
+ iterations_to_skip .............................. []
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
+ kv_channels ..................................... 64
+ kv_lora_rank .................................... 32
+ lazy_mpu_init ................................... None
+ load ............................................ gpt-checkpoint
+ load_model_opt_format ........................... False
+ local_rank ...................................... 0
+ log_interval .................................... 1
+ log_loss_scale_to_tensorboard ................... True
+ log_memory_to_tensorboard ....................... False
+ log_num_zeros_in_grad ........................... False
+ log_params_norm ................................. False
+ log_progress .................................... False
+ log_straggler ................................... False
+ log_throughput .................................. False
+ log_timers_to_tensorboard ....................... False
+ log_validation_ppl_to_tensorboard ............... False
+ log_world_size_to_tensorboard ................... False
+ logging_level ................................... 0
+ loss_scale ...................................... None
+ loss_scale_window ............................... 1000
+ lr .............................................. 0.0005
+ lr_decay_iters .................................. 150000
+ lr_decay_samples ................................ None
+ lr_decay_style .................................. cosine
+ lr_warmup_fraction .............................. None
+ lr_warmup_init .................................. 0.0
+ lr_warmup_iters ................................. 2
+ lr_warmup_samples ............................... 0
+ lr_wsd_decay_iters .............................. None
+ lr_wsd_decay_samples ............................ None
+ lr_wsd_decay_style .............................. exponential
+ main_grads_dtype ................................ torch.float32
+ main_params_dtype ............................... torch.float32
+ make_vocab_size_divisible_by .................... 128
+ mamba_head_dim .................................. 64
+ mamba_num_groups ................................ 8
+ mamba_num_heads ................................. None
+ mamba_state_dim ................................. 128
+ manual_gc ....................................... False
+ manual_gc_eval .................................. True
+ manual_gc_interval .............................. 0
+ mask_factor ..................................... 1.0
+ mask_prob ....................................... 0.15
+ mask_type ....................................... random
+ masked_softmax_fusion ........................... True
+ max_position_embeddings ......................... 24576
+ max_tokens_to_oom ............................... 12000
+ memory_snapshot_path ............................ snapshot.pickle
+ merge_file ...................................... merges.txt
+ micro_batch_size ................................ 1
+ microbatch_group_size_per_vp_stage .............. None
+ mid_level_dataset_surplus ....................... 0.005
+ min_loss_scale .................................. 1.0
+ min_lr .......................................... 0.0
+ mlp_chunks_for_prefill .......................... 1
+ mmap_bin_files .................................. True
+ mock_data ....................................... True
+ moe_apply_probs_on_input ........................ False
+ moe_aux_loss_coeff .............................. 0.0
+ moe_enable_deepep ............................... False
+ moe_expert_capacity_factor ...................... None
+ moe_extended_tp ................................. False
+ moe_ffn_hidden_size ............................. None
+ moe_grouped_gemm ................................ False
+ moe_input_jitter_eps ............................ None
+ moe_layer_freq .................................. 1
+ moe_layer_recompute ............................. False
+ moe_pad_expert_input_to_capacity ................ False
+ moe_per_layer_logging ........................... False
+ moe_permute_fusion .............................. False
+ moe_router_bias_update_rate ..................... 0.001
+ moe_router_dtype ................................ None
+ moe_router_enable_expert_bias ................... False
+ moe_router_force_load_balancing ................. False
+ moe_router_group_topk ........................... None
+ moe_router_load_balancing_type .................. aux_loss
+ moe_router_num_groups ........................... None
+ moe_router_padding_for_fp8 ...................... False
+ moe_router_pre_softmax .......................... False
+ moe_router_score_function ....................... softmax
+ moe_router_topk ................................. 2
+ moe_router_topk_scaling_factor .................. None
+ moe_shared_expert_intermediate_size ............. None
+ moe_shared_expert_overlap ....................... False
+ moe_token_dispatcher_type ....................... allgather
+ moe_token_drop_policy ........................... probs
+ moe_use_legacy_grouped_gemm ..................... False
+ moe_use_upcycling ............................... False
+ moe_z_loss_coeff ................................ None
+ mrope_section ................................... None
+ mscale .......................................... 1.0
+ mscale_all_dim .................................. 1.0
+ mtp_loss_scaling_factor ......................... 0.1
+ mtp_num_layers .................................. None
+ multi_latent_attention .......................... False
+ nccl_all_reduce_for_prefill ..................... False
+ nccl_communicator_config_path ................... None
+ nccl_ub ......................................... False
+ no_load_optim ................................... None
+ no_load_rng ..................................... None
+ no_persist_layer_norm ........................... False
+ no_rope_freq .................................... None
+ no_save_optim ................................... None
+ no_save_rng ..................................... None
+ non_persistent_ckpt_type ........................ None
+ non_persistent_global_ckpt_dir .................. None
+ non_persistent_local_ckpt_algo .................. fully_parallel
+ non_persistent_local_ckpt_dir ................... None
+ non_persistent_save_interval .................... None
+ norm_epsilon .................................... 1e-05
+ normalization ................................... LayerNorm
+ num_attention_heads ............................. 64
+ num_channels .................................... 3
+ num_classes ..................................... 1000
+ num_dataset_builder_threads ..................... 1
+ num_distributed_optimizer_instances ............. 1
+ num_experts ..................................... None
+ num_layers ...................................... 2
+ num_layers_at_end_in_bf16 ....................... 1
+ num_layers_at_start_in_bf16 ..................... 1
+ num_layers_per_virtual_pipeline_stage ........... None
+ num_query_groups ................................ 16
+ num_virtual_stages_per_pipeline_rank ............ None
+ num_workers ..................................... 2
+ object_storage_cache_path ....................... None
+ one_logger_async ................................ False
+ one_logger_project .............................. megatron-lm
+ one_logger_run_name ............................. None
+ onnx_safe ....................................... None
+ openai_gelu ..................................... False
9989
+ optimizer ....................................... adam
9990
+ optimizer_cpu_offload ........................... False
9991
+ optimizer_offload_fraction ...................... 1.0
9992
+ output_bert_embeddings .......................... False
9993
+ overlap_cpu_optimizer_d2h_h2d ................... False
9994
+ overlap_grad_reduce ............................. False
9995
+ overlap_p2p_comm ................................ False
9996
+ overlap_p2p_comm_warmup_flush ................... False
9997
+ overlap_param_gather ............................ False
9998
+ overlap_param_gather_with_optimizer_step ........ False
9999
+ override_opt_param_scheduler .................... False
10000
+ params_dtype .................................... torch.float16
10001
+ patch_dim ....................................... 16
10002
+ per_split_data_args_path ........................ None
10003
+ perform_initialization .......................... True
10004
+ pin_cpu_grads ................................... True
10005
+ pin_cpu_params .................................. True
10006
+ pipeline_model_parallel_comm_backend ............ None
10007
+ pipeline_model_parallel_size .................... 1
10008
+ pipeline_model_parallel_split_rank .............. None
10009
+ position_embedding_type ......................... learned_absolute
10010
+ pretrained_checkpoint ........................... None
10011
+ profile ......................................... False
10012
+ profile_ranks ................................... [0]
10013
+ profile_step_end ................................ 12
10014
+ profile_step_start .............................. 10
10015
+ q_lora_rank ..................................... None
10016
+ qk_head_dim ..................................... 128
10017
+ qk_l2_norm ...................................... False
10018
+ qk_layernorm .................................... False
10019
+ qk_pos_emb_head_dim ............................. 64
10020
+ query_in_block_prob ............................. 0.1
10021
+ rampup_batch_size ............................... None
10022
+ rank ............................................ 0
10023
+ recompute_granularity ........................... None
10024
+ recompute_method ................................ None
10025
+ recompute_modules ............................... None
10026
+ recompute_num_layers ............................ None
10027
+ record_memory_history ........................... False
10028
+ relative_attention_max_distance ................. 128
10029
+ relative_attention_num_buckets .................. 32
10030
+ replication ..................................... False
10031
+ replication_factor .............................. 2
10032
+ replication_jump ................................ None
10033
+ rerun_mode ...................................... disabled
10034
+ reset_attention_mask ............................ False
10035
+ reset_position_ids .............................. False
10036
+ result_rejected_tracker_filename ................ None
10037
+ retriever_report_topk_accuracies ................ []
10038
+ retriever_score_scaling ......................... False
10039
+ retriever_seq_length ............................ 256
10040
+ retro_add_retriever ............................. False
10041
+ retro_attention_gate ............................ 1
10042
+ retro_cyclic_train_iters ........................ None
10043
+ retro_encoder_attention_dropout ................. 0.1
10044
+ retro_encoder_hidden_dropout .................... 0.1
10045
+ retro_encoder_layers ............................ 2
10046
+ retro_num_neighbors ............................. 2
10047
+ retro_num_retrieved_chunks ...................... 2
10048
+ retro_project_dir ............................... None
10049
+ retro_verify_neighbor_count ..................... True
10050
+ rope_scaling_factor ............................. 8.0
10051
+ rotary_base ..................................... 10000
10052
+ rotary_interleaved .............................. False
10053
+ rotary_percent .................................. 1.0
10054
+ rotary_scaling_factor ........................... 1.0
10055
+ rotary_seq_len_interpolation_factor ............. None
10056
+ run_workload_inspector_server ................... False
10057
+ sample_rate ..................................... 1.0
10058
+ save ............................................ gpt-checkpoint
10059
+ save_interval ................................... 16
10060
+ scatter_gather_tensors_in_pipeline .............. True
10061
+ seed ............................................ 1234
10062
+ seq_length ...................................... 24576
10063
+ sequence_parallel ............................... False
10064
+ sgd_momentum .................................... 0.9
10065
+ short_seq_prob .................................. 0.1
10066
+ skip_train ...................................... False
10067
+ skipped_train_samples ........................... 0
10068
+ spec ............................................ None
10069
+ split ........................................... None
10070
+ squared_relu .................................... False
10071
+ start_weight_decay .............................. 0.1
10072
+ straggler_ctrlr_port ............................ 65535
10073
+ straggler_minmax_count .......................... 1
10074
+ suggested_communication_unit_size ............... None
10075
+ swiglu .......................................... False
10076
+ swin_backbone_type .............................. tiny
10077
+ symmetric_ar_type ............................... None
10078
+ te_rng_tracker .................................. False
10079
+ tensor_model_parallel_size ...................... 2
10080
+ tensorboard_dir ................................. tensorboard-logs/
10081
+ tensorboard_log_interval ........................ 1
10082
+ tensorboard_queue_size .......................... 1000
10083
+ test_data_path .................................. None
10084
+ test_mode ....................................... False
10085
+ tiktoken_num_special_tokens ..................... 1000
10086
+ tiktoken_pattern ................................ None
10087
+ tiktoken_special_tokens ......................... None
10088
+ timing_log_level ................................ 0
10089
+ timing_log_option ............................... minmax
10090
+ titles_data_path ................................ None
10091
+ tokenizer_model ................................. None
10092
+ tokenizer_type .................................. GPT2BPETokenizer
10093
+ torch_fsdp2_reshard_after_forward ............... True
10094
+ tp_comm_bootstrap_backend ....................... nccl
10095
+ tp_comm_bulk_dgrad .............................. True
10096
+ tp_comm_bulk_wgrad .............................. True
10097
+ tp_comm_overlap ................................. False
10098
+ tp_comm_overlap_ag .............................. True
10099
+ tp_comm_overlap_cfg ............................. None
10100
+ tp_comm_overlap_rs .............................. True
10101
+ tp_comm_overlap_rs_dgrad ........................ False
10102
+ tp_comm_split_ag ................................ True
10103
+ tp_comm_split_rs ................................ True
10104
+ train_data_path ................................. None
10105
+ train_iters ..................................... 10
10106
+ train_samples ................................... None
10107
+ train_sync_interval ............................. None
10108
+ transformer_impl ................................ transformer_engine
10109
+ transformer_pipeline_model_parallel_size ........ 1
10110
+ untie_embeddings_and_output_weights ............. False
10111
+ use_checkpoint_args ............................. False
10112
+ use_checkpoint_opt_param_scheduler .............. False
10113
+ use_cpu_initialization .......................... None
10114
+ use_custom_fsdp ................................. False
10115
+ use_dist_ckpt ................................... True
10116
+ use_dist_ckpt_deprecated ........................ False
10117
+ use_distributed_optimizer ....................... False
10118
+ use_flash_attn .................................. False
10119
+ use_legacy_models ............................... False
10120
+ use_mp_args_from_checkpoint_args ................ False
10121
+ use_one_sent_docs ............................... False
10122
+ use_persistent_ckpt_worker ...................... False
10123
+ use_precision_aware_optimizer ................... False
10124
+ use_pytorch_profiler ............................ False
10125
+ use_ring_exchange_p2p ........................... False
10126
+ use_rope_scaling ................................ False
10127
+ use_rotary_position_embeddings .................. False
10128
+ use_sharp ....................................... False
10129
+ use_tokenizer_model_from_checkpoint_args ........ True
10130
+ use_torch_fsdp2 ................................. False
10131
+ use_torch_optimizer_for_cpu_offload ............. False
10132
+ use_tp_pp_dp_mapping ............................ False
10133
+ v_head_dim ...................................... 128
10134
+ valid_data_path ................................. None
10135
+ variable_seq_lengths ............................ False
10136
+ virtual_pipeline_model_parallel_size ............ None
10137
+ vision_backbone_type ............................ vit
10138
+ vision_pretraining .............................. False
10139
+ vision_pretraining_type ......................... classify
10140
+ vocab_extra_ids ................................. 0
10141
+ vocab_file ...................................... vocab.json
10142
+ vocab_size ...................................... None
10143
+ wandb_exp_name ..................................
10144
+ wandb_project ...................................
10145
+ wandb_save_dir ..................................
10146
+ weight_decay .................................... 0.1
10147
+ weight_decay_incr_style ......................... constant
10148
+ wgrad_deferral_limit ............................ 0
10149
+ world_size ...................................... 8
10150
+ yaml_cfg ........................................ None
10151
+ -------------------- end of arguments ---------------------
10152
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
10153
+ > building GPT2BPETokenizer tokenizer ...
10154
+ INFO:megatron.training.initialize:Setting logging level to 0
10155
+ > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
10156
+ INFO:megatron.training.initialize:Setting logging level to 0
10157
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
10158
+ > initializing torch distributed ...
10159
+ INFO:megatron.training.initialize:Setting logging level to 0
10160
+ INFO:megatron.training.initialize:Setting logging level to 0
10161
+ INFO:megatron.training.initialize:Setting logging level to 0
10162
+ INFO:megatron.training.initialize:Setting logging level to 0
10163
+ > initialized tensor model parallel with size 2
10164
+ > initialized pipeline model parallel with size 1
10165
+ > setting random seeds to 1234 ...
10166
+ > compiling dataset index builder ...
10167
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
10168
+ make: Nothing to be done for 'default'.
10169
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
10170
+ >>> done with dataset index builder. Compilation time: 0.045 seconds
10171
+ WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
10172
+ > compiling and loading fused kernels ...
10173
+ >>> done with compiling and loading fused kernels. Compilation time: 2.145 seconds
10174
+ time to initialize megatron (seconds): 7.202
10175
+ [after megatron is initialized] datetime: 2025-06-21 21:58:58
10176
+ building GPT model ...
10177
+ >>> embedding
10178
+ >>> decoder
10179
+ >>> output_layer
10180
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672
10181
+ >>> embedding
10182
+ >>> decoder
10183
+ >>> output_layer
10184
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672
10185
+ >>> embedding
10186
+ >>> decoder
10187
+ >>> output_layer
10188
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672
10189
+ >>> embedding
10190
+ >>> decoder
10191
+ >>> output_layer
10192
+ >>> embedding
10193
+ >>> decoder
10194
+ >>> output_layer
10195
+ >>> embedding
10196
+ >>> decoder
10197
+ >>> output_layer
10198
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672
10199
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 380188672
10200
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672
10201
+ >>> embedding
10202
+ >>> decoder
10203
+ >>> output_layer
10204
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672
10205
+ >>> embedding
10206
+ >>> decoder
10207
+ >>> output_layer
10208
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 380188672
10209
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
10210
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
10211
+ Params for bucket 1 (380188672 elements, 380188672 padded size):
10212
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
10213
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
10214
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
10215
+ module.decoder.final_layernorm.weight
10216
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
10217
+ module.decoder.layers.1.self_attention.linear_qkv.bias
10218
+ module.decoder.layers.0.mlp.linear_fc2.bias
10219
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
10220
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
10221
+ module.embedding.position_embeddings.weight
10222
+ module.decoder.layers.1.mlp.linear_fc1.weight
10223
+ module.decoder.layers.0.mlp.linear_fc1.weight
10224
+ module.decoder.layers.1.mlp.linear_fc2.bias
10225
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
10226
+ module.decoder.layers.0.self_attention.linear_qkv.weight
10227
+ module.decoder.layers.0.self_attention.linear_proj.weight
10228
+ module.embedding.word_embeddings.weight
10229
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
10230
+ module.decoder.layers.0.self_attention.linear_proj.bias
10231
+ module.decoder.layers.0.mlp.linear_fc2.weight
10232
+ module.decoder.layers.1.mlp.linear_fc1.bias
10233
+ module.decoder.layers.0.mlp.linear_fc1.bias
10234
+ module.decoder.layers.1.self_attention.linear_qkv.weight
10235
+ module.decoder.layers.1.self_attention.linear_proj.weight
10236
+ module.decoder.layers.0.self_attention.linear_qkv.bias
10237
+ module.decoder.final_layernorm.bias
10238
+ module.decoder.layers.1.mlp.linear_fc2.weight
10239
+ module.decoder.layers.1.self_attention.linear_proj.bias
10240
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14d1a7e3a4e0>, config_logger_dir='')
10241
+ INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
10242
+ WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
10243
+ will not load any checkpoints and will start from random
10244
+ (min, max) time across ranks (ms):
10245
+ load-checkpoint ................................: (3.64, 3.93)
10246
+ [after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:58:59
10247
+ > building train, validation, and test datasets ...
10248
+ > datasets target sizes (minimum size):
10249
+ train: 10
10250
+ validation: 1
10251
+ test: 1
10252
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
10253
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
10254
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
10255
+ > building train, validation, and test datasets for GPT ...
10256
+ INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=24576, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14d1a7e8ef60>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
10257
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
10258
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
10259
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
10260
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004665 seconds
10261
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2774
10262
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
10263
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
10264
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
10265
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
10266
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001884 seconds
10267
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2773
10268
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
10269
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
10270
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
10271
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
10272
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001557 seconds
10273
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 2778
10274
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
10275
+ > finished creating GPT datasets ...
10276
+ [after dataloaders are built] datetime: 2025-06-21 21:58:59
10277
+ done with setup ...
10278
+ (min, max) time across ranks (ms):
10279
+ model-and-optimizer-setup ......................: (1222.86, 1231.28)
10280
+ train/valid/test-data-iterators-setup ..........: (32.66, 161.70)
10281
+ training ...
10282
+ Setting rerun_state_machine.current_iteration to 0...
10283
+ [before the start of training step] datetime: 2025-06-21 21:58:59
10284
+ batch tensor: tokens torch.Size([4, 98304])
10285
+ batch tensor: labels torch.Size([4, 98304])
10286
+ batch tensor: loss_mask torch.Size([4, 98304])
10287
+ batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
10288
+ batch tensor: position_ids torch.Size([4, 98304])
10289
+ batch tensor after cp: tokens torch.Size([4, 24576])
10290
+ batch tensor after cp: labels torch.Size([4, 24576])
10291
+ batch tensor after cp: loss_mask torch.Size([4, 24576])
10292
+ batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
10293
+ batch tensor after cp: position_ids torch.Size([4, 24576])
10294
+ batch tensor: tokens torch.Size([4, 98304])
10295
+ batch tensor: labels torch.Size([4, 98304])
10296
+ batch tensor: loss_mask torch.Size([4, 98304])
10297
+ batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
10298
+ batch tensor: position_ids torch.Size([4, 98304])
10299
+ batch tensor: tokens torch.Size([4, 98304])
10300
+ batch tensor: labels torch.Size([4, 98304])
10301
+ batch tensor: loss_mask torch.Size([4, 98304])
10302
+ batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
10303
+ batch tensor: position_ids torch.Size([4, 98304])
10304
+ batch tensor after cp: tokens torch.Size([4, 24576])
10305
+ batch tensor after cp: labels torch.Size([4, 24576])
10306
+ batch tensor after cp: loss_mask torch.Size([4, 24576])
10307
+ batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
10308
+ batch tensor after cp: position_ids torch.Size([4, 24576])
10309
+ batch tensor after cp: tokens torch.Size([4, 24576])
10310
+ batch tensor after cp: labels torch.Size([4, 24576])
10311
+ batch tensor after cp: loss_mask torch.Size([4, 24576])
10312
+ batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
10313
+ batch tensor after cp: position_ids torch.Size([4, 24576])
10314
+ batch tensor: tokens torch.Size([4, 98304])
10315
+ batch tensor: labels torch.Size([4, 98304])
10316
+ batch tensor: loss_mask torch.Size([4, 98304])
10317
+ batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
10318
+ batch tensor: position_ids torch.Size([4, 98304])
10319
+ batch tensor after cp: tokens torch.Size([4, 24576])
10320
+ batch tensor after cp: labels torch.Size([4, 24576])
10321
+ batch tensor after cp: loss_mask torch.Size([4, 24576])
10322
+ batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
10323
+ batch tensor after cp: position_ids torch.Size([4, 24576])
10324
+ batch tensor: tokens torch.Size([4, 98304])
10325
+ batch tensor: labels torch.Size([4, 98304])
10326
+ batch tensor: loss_mask torch.Size([4, 98304])
10327
+ batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
10328
+ batch tensor: position_ids torch.Size([4, 98304])
10329
+ batch tensor: tokens torch.Size([4, 98304])
10330
+ batch tensor: labels torch.Size([4, 98304])
10331
+ batch tensor: loss_mask torch.Size([4, 98304])
10332
+ batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
10333
+ batch tensor: position_ids torch.Size([4, 98304])
10334
+ batch tensor after cp: tokens torch.Size([4, 24576])
10335
+ batch tensor after cp: labels torch.Size([4, 24576])
10336
+ batch tensor after cp: loss_mask torch.Size([4, 24576])
10337
+ batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
10338
+ batch tensor after cp: position_ids torch.Size([4, 24576])
10339
+ batch tensor: tokens torch.Size([4, 98304])
10340
+ batch tensor: labels torch.Size([4, 98304])
10341
+ batch tensor: loss_mask torch.Size([4, 98304])
10342
+ batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
10343
+ batch tensor: position_ids torch.Size([4, 98304])
10344
+ batch tensor after cp: tokens torch.Size([4, 24576])
10345
+ batch tensor after cp: labels torch.Size([4, 24576])
10346
+ batch tensor after cp: loss_mask torch.Size([4, 24576])
10347
+ batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
10348
+ batch tensor after cp: position_ids torch.Size([4, 24576])
10349
+ batch tensor after cp: tokens torch.Size([4, 24576])
10350
+ batch tensor after cp: labels torch.Size([4, 24576])
10351
+ batch tensor after cp: loss_mask torch.Size([4, 24576])
10352
+ batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
10353
+ batch tensor after cp: position_ids torch.Size([4, 24576])
10354
+ batch tensor: tokens torch.Size([4, 98304])
10355
+ batch tensor: labels torch.Size([4, 98304])
10356
+ batch tensor: loss_mask torch.Size([4, 98304])
10357
+ batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
10358
+ batch tensor: position_ids torch.Size([4, 98304])
10359
+ batch tensor after cp: tokens torch.Size([4, 24576])
10360
+ batch tensor after cp: labels torch.Size([4, 24576])
10361
+ batch tensor after cp: loss_mask torch.Size([4, 24576])
10362
+ batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
10363
+ batch tensor after cp: position_ids torch.Size([4, 24576])
10364
+ Start exporting trace 0
10365
+ Done exporting trace 0
10366
+ [2025-06-21 21:59:24] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 25150.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |Number of parameters in transformer block in billions: 0.35
10367
+
10368
+ Number of parameters in embedding layers in billions: 0.21
10369
+ Total number of parameters in billions: 0.56
10370
+ Number of parameters in most loaded shard in billions: 0.2795
10371
+ Theoretical memory footprints: weight and optimizer=4797.35 MB
10372
+ [Rank 1] (after 1 iterations) memory (MB) | allocated: 50709.89306640625 | max allocated: 80939.65478515625 | reserved: 84936.0 | max reserved: 84936.0
10373
+ [Rank 4] (after 1 iterations) memory (MB) | allocated: 50709.89306640625 | max allocated: 80939.65478515625 | reserved: 84568.0 | max reserved: 84568.0
10374
+ [Rank 3] (after 1 iterations) memory (MB) | allocated: 50709.89306640625 | max allocated: 80939.65478515625 | reserved: 84376.0 | max reserved: 84376.0
10375
+ [Rank 2] (after 1 iterations) memory (MB) | allocated: 50709.89306640625 | max allocated: 80939.65478515625 | reserved: 84376.0 | max reserved: 84376.0
10376
+ [Rank 6] (after 1 iterations) memory (MB) | allocated: 50709.89306640625 | max allocated: 80939.65478515625 | reserved: 84748.0 | max reserved: 84748.0
10377
+ [Rank 0] (after 1 iterations) memory (MB) | allocated: 50709.89306640625 | max allocated: 80939.65478515625 | reserved: 84168.0 | max reserved: 84168.0
10378
+ [Rank 7] (after 1 iterations) memory (MB) | allocated: 50709.89306640625 | max allocated: 80939.65478515625 | reserved: 84748.0 | max reserved: 84748.0
10379
+ [Rank 5] (after 1 iterations) memory (MB) | allocated: 50709.89306640625 | max allocated: 80939.65478515625 | reserved: 84952.0 | max reserved: 84952.0
10380
+ batch tensor: tokens torch.Size([4, 98304])
10381
batch tensor: labels torch.Size([4, 98304])
batch tensor: loss_mask torch.Size([4, 98304])
batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
batch tensor: position_ids torch.Size([4, 98304])
batch tensor after cp: tokens torch.Size([4, 24576])
batch tensor after cp: labels torch.Size([4, 24576])
batch tensor after cp: loss_mask torch.Size([4, 24576])
batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([4, 24576])
batch tensor: tokens torch.Size([4, 98304])
batch tensor: labels torch.Size([4, 98304])
batch tensor: loss_mask torch.Size([4, 98304])
batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
batch tensor: position_ids torch.Size([4, 98304])
batch tensor after cp: tokens torch.Size([4, 24576])
batch tensor after cp: labels torch.Size([4, 24576])
batch tensor after cp: loss_mask torch.Size([4, 24576])
batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([4, 24576])
batch tensor: tokens torch.Size([4, 98304])
batch tensor: labels torch.Size([4, 98304])
batch tensor: loss_mask torch.Size([4, 98304])
batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
batch tensor: position_ids torch.Size([4, 98304])
batch tensor after cp: tokens torch.Size([4, 24576])
batch tensor after cp: labels torch.Size([4, 24576])
batch tensor after cp: loss_mask torch.Size([4, 24576])
batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([4, 24576])
batch tensor: tokens torch.Size([4, 98304])
batch tensor: labels torch.Size([4, 98304])
batch tensor: loss_mask torch.Size([4, 98304])
batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
batch tensor: position_ids torch.Size([4, 98304])
batch tensor after cp: tokens torch.Size([4, 24576])
batch tensor after cp: labels torch.Size([4, 24576])
batch tensor after cp: loss_mask torch.Size([4, 24576])
batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([4, 24576])
batch tensor: tokens torch.Size([4, 98304])
batch tensor: labels torch.Size([4, 98304])
batch tensor: loss_mask torch.Size([4, 98304])
batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
batch tensor: position_ids torch.Size([4, 98304])
batch tensor after cp: tokens torch.Size([4, 24576])
batch tensor after cp: labels torch.Size([4, 24576])
batch tensor after cp: loss_mask torch.Size([4, 24576])
batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([4, 24576])
batch tensor: tokens torch.Size([4, 98304])
batch tensor: labels torch.Size([4, 98304])
batch tensor: loss_mask torch.Size([4, 98304])
batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
batch tensor: position_ids torch.Size([4, 98304])
batch tensor after cp: tokens torch.Size([4, 24576])
batch tensor after cp: labels torch.Size([4, 24576])
batch tensor after cp: loss_mask torch.Size([4, 24576])
batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([4, 24576])
batch tensor: tokens torch.Size([4, 98304])
batch tensor: labels torch.Size([4, 98304])
batch tensor: loss_mask torch.Size([4, 98304])
batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
batch tensor: position_ids torch.Size([4, 98304])
batch tensor after cp: tokens torch.Size([4, 24576])
batch tensor after cp: labels torch.Size([4, 24576])
batch tensor after cp: loss_mask torch.Size([4, 24576])
batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([4, 24576])
batch tensor: tokens torch.Size([4, 98304])
batch tensor: labels torch.Size([4, 98304])
batch tensor: loss_mask torch.Size([4, 98304])
batch tensor: attention_mask torch.Size([4, 1, 98304, 98304])
batch tensor: position_ids torch.Size([4, 98304])
batch tensor after cp: tokens torch.Size([4, 24576])
batch tensor after cp: labels torch.Size([4, 24576])
batch tensor after cp: loss_mask torch.Size([4, 24576])
batch tensor after cp: attention_mask torch.Size([4, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([4, 24576])
Start exporting trace 1
Done exporting trace 1
[2025-06-21 21:59:32] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 8143.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
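The `batch tensor after cp` lines above show each rank keeping a 24576-token shard of the 98304-token sequence (context-parallel size 4), with the attention mask sharded only along its query dimension. A minimal sketch of such a split on tiny stand-in tensors (`split_batch_for_cp` is a hypothetical helper; Megatron-LM's real context-parallel split additionally interleaves 2×cp chunks for causal load balancing):

```python
import torch

def split_batch_for_cp(batch, cp_size, cp_rank):
    """Naive contiguous shard of the sequence dimension across cp_size ranks."""
    out = {}
    for name, t in batch.items():
        # the [b, 1, s_q, s_k] attention mask is sharded along the query dim only,
        # matching 98304 -> 24576 in the log while the key dim stays at 98304
        seq_dim = 2 if name == "attention_mask" else 1
        chunk = t.size(seq_dim) // cp_size
        out[name] = t.narrow(seq_dim, cp_rank * chunk, chunk)
    return out

# tiny stand-in for the logged [4, 98304] batch (the real sizes would not fit here)
b, s, cp = 4, 64, 4
batch = {
    "tokens": torch.zeros(b, s, dtype=torch.long),
    "attention_mask": torch.ones(b, 1, s, s, dtype=torch.bool),
    "position_ids": torch.arange(s).expand(b, s),
}
shard = split_batch_for_cp(batch, cp, cp_rank=1)
print(shard["tokens"].shape)          # torch.Size([4, 16])
print(shard["attention_mask"].shape)  # torch.Size([4, 1, 16, 64])
```

Note the shard keeps its global `position_ids` (rank 1 holds positions 16..31 here), which is why the logged per-rank shapes shrink while positions stay meaningful.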
attnserver.run_attnserver.slurm.sh.343246.err.log CHANGED
@@ -4553,3 +4553,671 @@ W0621 21:57:59.783000 2375190 site-packages/torch/distributed/run.py:766]
 W0621 21:57:59.783000 2375190 site-packages/torch/distributed/run.py:766] *****************************************
 W0621 21:57:59.783000 2375190 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
 W0621 21:57:59.783000 2375190 site-packages/torch/distributed/run.py:766] *****************************************
4556
+ [rank7]:[W621 21:58:22.758514097 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4557
+ [rank5]:[W621 21:58:22.758845110 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4558
+ [rank3]:[W621 21:58:22.758872980 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4559
+ [rank2]:[W621 21:58:22.759631565 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4560
+ [rank4]:[W621 21:58:22.759631437 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4561
+ [rank6]:[W621 21:58:22.759857033 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4562
+ [rank1]:[W621 21:58:22.763962101 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4563
+ [rank0]:[W621 21:58:22.911433410 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4564
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
4565
+ warnings.warn(
4566
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
4567
+ warnings.warn(
4568
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
4569
+ warnings.warn(
4570
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
4571
+ warnings.warn(
4572
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
4573
+ warnings.warn(
4574
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
4575
+ warnings.warn(
4576
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
4577
+ warnings.warn(
4578
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
4579
+ warnings.warn(
4580
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
4581
+ warnings.warn(
4582
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
4583
+ warnings.warn(
4584
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
4585
+ warnings.warn(
4586
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
4587
+ warnings.warn(
4588
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
4589
+ warnings.warn(
4590
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
4591
+ warnings.warn(
4592
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
4593
+ warnings.warn(
4594
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
4595
+ warnings.warn(
4596
+ [rank0]: Traceback (most recent call last):
4597
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
4598
+ [rank0]: pretrain(
4599
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
4600
+ [rank0]: iteration, num_floating_point_operations_so_far = train(
4601
+ [rank0]: ^^^^^^
4602
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
4603
+ [rank0]: ) = train_step(
4604
+ [rank0]: ^^^^^^^^^^^
4605
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
4606
+ [rank0]: losses_reduced = forward_backward_func(
4607
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^
4608
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
4609
+ [rank0]: output_tensor, num_tokens = forward_step(
4610
+ [rank0]: ^^^^^^^^^^^^^
4611
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
4612
+ [rank0]: output_tensor, loss_func = forward_step_func(data_iterator, model)
4613
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4614
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
4615
+ [rank0]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
4616
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
4617
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
4618
+ [rank0]: batch = next(global_batches)
4619
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^
4620
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
4621
+ [rank0]: attention_mask = torch.ones(
4622
+ [rank0]: ^^^^^^^^^^^
4623
+ [rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.17 GiB is free. Including non-PyTorch memory, this process has 4.63 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4624
+ [rank3]: Traceback (most recent call last):
4625
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
4626
+ [rank3]: pretrain(
4627
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
4628
+ [rank3]: iteration, num_floating_point_operations_so_far = train(
4629
+ [rank3]: ^^^^^^
4630
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
4631
+ [rank3]: ) = train_step(
4632
+ [rank3]: ^^^^^^^^^^^
4633
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
4634
+ [rank3]: losses_reduced = forward_backward_func(
4635
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^
4636
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
4637
+ [rank3]: output_tensor, num_tokens = forward_step(
4638
+ [rank3]: ^^^^^^^^^^^^^
4639
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
4640
+ [rank3]: output_tensor, loss_func = forward_step_func(data_iterator, model)
4641
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4642
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
4643
+ [rank3]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
4644
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^
4645
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
4646
+ [rank3]: batch = next(global_batches)
4647
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^
4648
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
4649
+ [rank3]: attention_mask = torch.ones(
4650
+ [rank3]: ^^^^^^^^^^^
4651
+ [rank3]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.17 GiB is free. Including non-PyTorch memory, this process has 4.63 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4652
+ [rank7]: Traceback (most recent call last):
4653
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
4654
+ [rank7]: pretrain(
4655
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
4656
+ [rank7]: iteration, num_floating_point_operations_so_far = train(
4657
+ [rank7]: ^^^^^^
4658
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
4659
+ [rank7]: ) = train_step(
4660
+ [rank7]: ^^^^^^^^^^^
4661
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
4662
+ [rank7]: losses_reduced = forward_backward_func(
4663
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^
4664
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
4665
+ [rank7]: output_tensor, num_tokens = forward_step(
4666
+ [rank7]: ^^^^^^^^^^^^^
4667
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
4668
+ [rank7]: output_tensor, loss_func = forward_step_func(data_iterator, model)
4669
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4670
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
4671
+ [rank7]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
4672
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^
4673
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
4674
+ [rank7]: batch = next(global_batches)
4675
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^
4676
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
4677
+ [rank7]: attention_mask = torch.ones(
4678
+ [rank7]: ^^^^^^^^^^^
4679
+ [rank7]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.17 GiB is free. Including non-PyTorch memory, this process has 4.63 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4680
+ [rank2]: Traceback (most recent call last):
4681
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
4682
+ [rank2]: pretrain(
4683
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
4684
+ [rank2]: iteration, num_floating_point_operations_so_far = train(
4685
+ [rank2]: ^^^^^^
4686
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
4687
+ [rank2]: ) = train_step(
4688
+ [rank2]: ^^^^^^^^^^^
4689
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
4690
+ [rank2]: losses_reduced = forward_backward_func(
4691
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^
4692
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
4693
+ [rank2]: output_tensor, num_tokens = forward_step(
4694
+ [rank2]: ^^^^^^^^^^^^^
4695
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
4696
+ [rank2]: output_tensor, loss_func = forward_step_func(data_iterator, model)
4697
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4698
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
4699
+ [rank2]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
4700
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^
4701
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
4702
+ [rank2]: batch = next(global_batches)
4703
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^
4704
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
4705
+ [rank2]: attention_mask = torch.ones(
4706
+ [rank2]: ^^^^^^^^^^^
4707
+ [rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.17 GiB is free. Including non-PyTorch memory, this process has 4.63 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4708
+ [rank5]: Traceback (most recent call last):
4709
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
4710
+ [rank5]: pretrain(
4711
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
4712
+ [rank5]: iteration, num_floating_point_operations_so_far = train(
4713
+ [rank5]: ^^^^^^
4714
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
4715
+ [rank5]: ) = train_step(
4716
+ [rank5]: ^^^^^^^^^^^
4717
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
4718
+ [rank5]: losses_reduced = forward_backward_func(
4719
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^
4720
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
4721
+ [rank5]: output_tensor, num_tokens = forward_step(
4722
+ [rank5]: ^^^^^^^^^^^^^
4723
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
4724
+ [rank5]: output_tensor, loss_func = forward_step_func(data_iterator, model)
4725
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4726
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
4727
+ [rank5]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
4728
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^
4729
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
4730
+ [rank5]: batch = next(global_batches)
4731
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^
4732
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
4733
+ [rank5]: attention_mask = torch.ones(
4734
+ [rank5]: ^^^^^^^^^^^
4735
+ [rank5]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.17 GiB is free. Including non-PyTorch memory, this process has 4.63 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4736
+ [rank6]: Traceback (most recent call last):
4737
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
4738
+ [rank6]: pretrain(
4739
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
4740
+ [rank6]: iteration, num_floating_point_operations_so_far = train(
4741
+ [rank6]: ^^^^^^
4742
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
4743
+ [rank6]: ) = train_step(
4744
+ [rank6]: ^^^^^^^^^^^
4745
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
4746
+ [rank6]: losses_reduced = forward_backward_func(
4747
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^
4748
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
4749
+ [rank6]: output_tensor, num_tokens = forward_step(
4750
+ [rank6]: ^^^^^^^^^^^^^
4751
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
4752
+ [rank6]: output_tensor, loss_func = forward_step_func(data_iterator, model)
4753
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4754
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
4755
+ [rank6]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
4756
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^
4757
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
4758
+ [rank6]: batch = next(global_batches)
4759
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^
4760
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
4761
+ [rank6]: attention_mask = torch.ones(
4762
+ [rank6]: ^^^^^^^^^^^
4763
+ [rank6]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.17 GiB is free. Including non-PyTorch memory, this process has 4.63 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4764
+ [rank4]: Traceback (most recent call last):
4765
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
4766
+ [rank4]: pretrain(
4767
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
4768
+ [rank4]: iteration, num_floating_point_operations_so_far = train(
4769
+ [rank4]: ^^^^^^
4770
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
4771
+ [rank4]: ) = train_step(
4772
+ [rank4]: ^^^^^^^^^^^
4773
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
4774
+ [rank4]: losses_reduced = forward_backward_func(
4775
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^
4776
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
4777
+ [rank4]: output_tensor, num_tokens = forward_step(
4778
+ [rank4]: ^^^^^^^^^^^^^
4779
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
4780
+ [rank4]: output_tensor, loss_func = forward_step_func(data_iterator, model)
4781
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4782
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
4783
+ [rank4]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
4784
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^
4785
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
4786
+ [rank4]: batch = next(global_batches)
4787
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^
4788
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
4789
+ [rank4]: attention_mask = torch.ones(
4790
+ [rank4]: ^^^^^^^^^^^
4791
+ [rank4]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.17 GiB is free. Including non-PyTorch memory, this process has 4.63 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4792
+ [rank1]: Traceback (most recent call last):
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank1]: pretrain(
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
+ [rank1]: iteration, num_floating_point_operations_so_far = train(
+ [rank1]: ^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
+ [rank1]: ) = train_step(
+ [rank1]: ^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
+ [rank1]: losses_reduced = forward_backward_func(
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
+ [rank1]: output_tensor, num_tokens = forward_step(
+ [rank1]: ^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
+ [rank1]: output_tensor, loss_func = forward_step_func(data_iterator, model)
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
+ [rank1]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
+ [rank1]: batch = next(global_batches)
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
+ [rank1]: attention_mask = torch.ones(
+ [rank1]: ^^^^^^^^^^^
+ [rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.17 GiB is free. Including non-PyTorch memory, this process has 4.63 GiB memory in use. Of the allocated memory 3.12 GiB is allocated by PyTorch, and 57.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4820
+ [rank1]:[W621 21:58:32.708889669 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank7]:[W621 21:58:32.750157203 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank5]:[W621 21:58:32.750340011 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ [rank3]:[W621 21:58:32.760584988 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
4824
+ W0621 21:58:33.836000 2375190 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2375262 closing signal SIGTERM
+ W0621 21:58:33.840000 2375190 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2375263 closing signal SIGTERM
+ W0621 21:58:33.840000 2375190 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2375264 closing signal SIGTERM
+ W0621 21:58:33.843000 2375190 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2375265 closing signal SIGTERM
+ W0621 21:58:33.843000 2375190 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2375266 closing signal SIGTERM
+ W0621 21:58:33.846000 2375190 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2375267 closing signal SIGTERM
+ W0621 21:58:33.846000 2375190 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2375268 closing signal SIGTERM
+ E0621 21:58:34.090000 2375190 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 7 (pid: 2375269) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
4832
+ Traceback (most recent call last):
+ File "<frozen runpy>", line 198, in _run_module_as_main
+ File "<frozen runpy>", line 88, in _run_code
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
+ main()
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
+ return arg(*args, **kwargs)
+ ^^^^^^^^^^^^^^^^^^^^
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
+ launch(args)
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
+ run(args)
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
+ elastic_launch(
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
+ return launch_agent(self._config, self._entrypoint, list(args))
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
+ raise ChildFailedError(
+ torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
+ ============================================================
+ ./pretrain_gpt_profile.py FAILED
+ ------------------------------------------------------------
+ Failures:
+ <NO_OTHER_FAILURES>
+ ------------------------------------------------------------
+ Root Cause (first observed failure):
+ [0]:
+ time : 2025-06-21_21:58:33
+ host : fs-mbz-gpu-791
+ rank : 7 (local_rank: 7)
+ exitcode : 1 (pid: 2375269)
+ error_file: <N/A>
+ traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
+ ============================================================
4867
+ + set +x
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ + export PROF_CTX_LENGTH=40960
+ + PROF_CTX_LENGTH=40960
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L40960*tp2.cp4.bs8.json'
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L40960*tp2.cp4.bs8.json' ']'
+ + echo 'Running ctx_length=40960, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=8'
+ + srun bash ./attnserver.sh
+ + which python3
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343246 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-791:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 40960 --max-position-embeddings 40960 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
4877
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
+ and will be removed in future. Use torchrun.
+ Note that --use-env is set by default in torchrun.
+ If your script expects `--local-rank` argument to be set, please
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
+ further instructions
+
+ main()
+ W0621 21:58:38.497000 2377039 site-packages/torch/distributed/run.py:766]
+ W0621 21:58:38.497000 2377039 site-packages/torch/distributed/run.py:766] *****************************************
+ W0621 21:58:38.497000 2377039 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+ W0621 21:58:38.497000 2377039 site-packages/torch/distributed/run.py:766] *****************************************
4890
+ [rank1]:[W621 21:59:00.224207839 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank3]:[W621 21:59:00.225005281 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank7]:[W621 21:59:00.225006951 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank5]:[W621 21:59:00.225066158 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank2]:[W621 21:59:00.229077551 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank4]:[W621 21:59:00.229291060 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank6]:[W621 21:59:00.231343031 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
+ [rank0]:[W621 21:59:00.377878739 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
4898
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
+ warnings.warn(
4930
+ [rank6]: Traceback (most recent call last):
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank6]: pretrain(
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
+ [rank6]: iteration, num_floating_point_operations_so_far = train(
+ [rank6]: ^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
+ [rank6]: ) = train_step(
+ [rank6]: ^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
+ [rank6]: losses_reduced = forward_backward_func(
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
+ [rank6]: output_tensor, num_tokens = forward_step(
+ [rank6]: ^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
+ [rank6]: output_tensor, loss_func = forward_step_func(data_iterator, model)
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
+ [rank6]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
+ [rank6]: batch = next(global_batches)
+ [rank6]: ^^^^^^^^^^^^^^^^^^^^
+ [rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
+ [rank6]: attention_mask = torch.ones(
+ [rank6]: ^^^^^^^^^^^
+ [rank6]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 134.87 GiB is free. Including non-PyTorch memory, this process has 4.94 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4958
+ [rank4]: Traceback (most recent call last):
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank4]: pretrain(
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
+ [rank4]: iteration, num_floating_point_operations_so_far = train(
+ [rank4]: ^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
+ [rank4]: ) = train_step(
+ [rank4]: ^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
+ [rank4]: losses_reduced = forward_backward_func(
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
+ [rank4]: output_tensor, num_tokens = forward_step(
+ [rank4]: ^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
+ [rank4]: output_tensor, loss_func = forward_step_func(data_iterator, model)
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
+ [rank4]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
+ [rank4]: batch = next(global_batches)
+ [rank4]: ^^^^^^^^^^^^^^^^^^^^
+ [rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
+ [rank4]: attention_mask = torch.ones(
+ [rank4]: ^^^^^^^^^^^
+ [rank4]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 134.87 GiB is free. Including non-PyTorch memory, this process has 4.94 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4986
+ [rank2]: Traceback (most recent call last):
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank2]: pretrain(
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
+ [rank2]: iteration, num_floating_point_operations_so_far = train(
+ [rank2]: ^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
+ [rank2]: ) = train_step(
+ [rank2]: ^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
+ [rank2]: losses_reduced = forward_backward_func(
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
+ [rank2]: output_tensor, num_tokens = forward_step(
+ [rank2]: ^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
+ [rank2]: output_tensor, loss_func = forward_step_func(data_iterator, model)
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
+ [rank2]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
+ [rank2]: batch = next(global_batches)
+ [rank2]: ^^^^^^^^^^^^^^^^^^^^
+ [rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
+ [rank2]: attention_mask = torch.ones(
+ [rank2]: ^^^^^^^^^^^
+ [rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 134.87 GiB is free. Including non-PyTorch memory, this process has 4.94 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
5014
+ [rank1]: Traceback (most recent call last):
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank1]: pretrain(
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
+ [rank1]: iteration, num_floating_point_operations_so_far = train(
+ [rank1]: ^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
+ [rank1]: ) = train_step(
+ [rank1]: ^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
+ [rank1]: losses_reduced = forward_backward_func(
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
+ [rank1]: output_tensor, num_tokens = forward_step(
+ [rank1]: ^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
+ [rank1]: output_tensor, loss_func = forward_step_func(data_iterator, model)
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
+ [rank1]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
+ [rank1]: batch = next(global_batches)
+ [rank1]: ^^^^^^^^^^^^^^^^^^^^
+ [rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
+ [rank1]: attention_mask = torch.ones(
+ [rank1]: ^^^^^^^^^^^
+ [rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 134.87 GiB is free. Including non-PyTorch memory, this process has 4.94 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
5042
+ [rank7]: Traceback (most recent call last):
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank7]: pretrain(
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
+ [rank7]: iteration, num_floating_point_operations_so_far = train(
+ [rank7]: ^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
+ [rank7]: ) = train_step(
+ [rank7]: ^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
+ [rank7]: losses_reduced = forward_backward_func(
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
+ [rank7]: output_tensor, num_tokens = forward_step(
+ [rank7]: ^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
+ [rank7]: output_tensor, loss_func = forward_step_func(data_iterator, model)
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
+ [rank7]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
+ [rank7]: batch = next(global_batches)
+ [rank7]: ^^^^^^^^^^^^^^^^^^^^
+ [rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
+ [rank7]: attention_mask = torch.ones(
+ [rank7]: ^^^^^^^^^^^
+ [rank7]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 134.87 GiB is free. Including non-PyTorch memory, this process has 4.94 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
5070
+ [rank0]: Traceback (most recent call last):
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
+ [rank0]: pretrain(
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
+ [rank0]: iteration, num_floating_point_operations_so_far = train(
+ [rank0]: ^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
+ [rank0]: ) = train_step(
+ [rank0]: ^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
+ [rank0]: losses_reduced = forward_backward_func(
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
+ [rank0]: output_tensor, num_tokens = forward_step(
+ [rank0]: ^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
+ [rank0]: output_tensor, loss_func = forward_step_func(data_iterator, model)
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
+ [rank0]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
+ [rank0]: batch = next(global_batches)
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
+ [rank0]: attention_mask = torch.ones(
+ [rank0]: ^^^^^^^^^^^
+ [rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 134.87 GiB is free. Including non-PyTorch memory, this process has 4.94 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
5098
+ [rank3]: Traceback (most recent call last):
5099
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
5100
+ [rank3]: pretrain(
5101
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
5102
+ [rank3]: iteration, num_floating_point_operations_so_far = train(
5103
+ [rank3]: ^^^^^^
5104
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
5105
+ [rank3]: ) = train_step(
5106
+ [rank3]: ^^^^^^^^^^^
5107
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
5108
+ [rank3]: losses_reduced = forward_backward_func(
5109
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^
5110
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
5111
+ [rank3]: output_tensor, num_tokens = forward_step(
5112
+ [rank3]: ^^^^^^^^^^^^^
5113
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
5114
+ [rank3]: output_tensor, loss_func = forward_step_func(data_iterator, model)
5115
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5116
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
5117
+ [rank3]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
5118
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^
5119
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
5120
+ [rank3]: batch = next(global_batches)
5121
+ [rank3]: ^^^^^^^^^^^^^^^^^^^^
5122
+ [rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
5123
+ [rank3]: attention_mask = torch.ones(
5124
+ [rank3]: ^^^^^^^^^^^
5125
+ [rank3]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 134.87 GiB is free. Including non-PyTorch memory, this process has 4.94 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
5126
+ [rank5]: Traceback (most recent call last):
5127
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
5128
+ [rank5]: pretrain(
5129
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
5130
+ [rank5]: iteration, num_floating_point_operations_so_far = train(
5131
+ [rank5]: ^^^^^^
5132
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
5133
+ [rank5]: ) = train_step(
5134
+ [rank5]: ^^^^^^^^^^^
5135
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
5136
+ [rank5]: losses_reduced = forward_backward_func(
5137
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^
5138
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
5139
+ [rank5]: output_tensor, num_tokens = forward_step(
5140
+ [rank5]: ^^^^^^^^^^^^^
5141
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
5142
+ [rank5]: output_tensor, loss_func = forward_step_func(data_iterator, model)
5143
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5144
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
5145
+ [rank5]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
5146
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^
5147
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
5148
+ [rank5]: batch = next(global_batches)
5149
+ [rank5]: ^^^^^^^^^^^^^^^^^^^^
5150
+ [rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
5151
+ [rank5]: attention_mask = torch.ones(
5152
+ [rank5]: ^^^^^^^^^^^
5153
+ [rank5]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 800.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 134.87 GiB is free. Including non-PyTorch memory, this process has 4.94 GiB memory in use. Of the allocated memory 3.38 GiB is allocated by PyTorch, and 100.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
5154
+ [rank7]:[W621 21:59:11.565613408 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
5155
+ [rank3]:[W621 21:59:11.139136307 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
5156
+ [rank5]:[W621 21:59:11.140201601 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
5157
+ [rank1]:[W621 21:59:11.203365284 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
5158
+ W0621 21:59:12.862000 2377039 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2377110 closing signal SIGTERM
5159
+ W0621 21:59:12.865000 2377039 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2377111 closing signal SIGTERM
5160
+ W0621 21:59:12.866000 2377039 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2377112 closing signal SIGTERM
5161
+ W0621 21:59:12.869000 2377039 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2377113 closing signal SIGTERM
5162
+ W0621 21:59:12.870000 2377039 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2377114 closing signal SIGTERM
5163
+ W0621 21:59:12.876000 2377039 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2377115 closing signal SIGTERM
5164
+ W0621 21:59:12.877000 2377039 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2377116 closing signal SIGTERM
5165
+ E0621 21:59:13.593000 2377039 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 7 (pid: 2377117) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
5166
+ Traceback (most recent call last):
5167
+ File "<frozen runpy>", line 198, in _run_module_as_main
5168
+ File "<frozen runpy>", line 88, in _run_code
5169
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
5170
+ main()
5171
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
5172
+ return arg(*args, **kwargs)
5173
+ ^^^^^^^^^^^^^^^^^^^^
5174
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
5175
+ launch(args)
5176
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
5177
+ run(args)
5178
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
5179
+ elastic_launch(
5180
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
5181
+ return launch_agent(self._config, self._entrypoint, list(args))
5182
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5183
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
5184
+ raise ChildFailedError(
5185
+ torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
5186
+ ============================================================
5187
+ ./pretrain_gpt_profile.py FAILED
5188
+ ------------------------------------------------------------
5189
+ Failures:
5190
+ <NO_OTHER_FAILURES>
5191
+ ------------------------------------------------------------
5192
+ Root Cause (first observed failure):
5193
+ [0]:
5194
+ time : 2025-06-21_21:59:12
5195
+ host : fs-mbz-gpu-791
5196
+ rank : 7 (local_rank: 7)
5197
+ exitcode : 1 (pid: 2377117)
5198
+ error_file: <N/A>
5199
+ traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
5200
+ ============================================================
5201
+ + set +x
5202
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
5203
+ + export PROF_CTX_LENGTH=49152
5204
+ + PROF_CTX_LENGTH=49152
5205
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L49152*tp2.cp4.bs8.json'
5206
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L49152*tp2.cp4.bs8.json' ']'
5207
+ + echo 'Running ctx_length=49152, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=8'
5208
+ + srun bash ./attnserver.sh
5209
+ + which python3
5210
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343246 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-791:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 2 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 49152 --max-position-embeddings 49152 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
5211
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
5212
+ and will be removed in future. Use torchrun.
5213
+ Note that --use-env is set by default in torchrun.
5214
+ If your script expects `--local-rank` argument to be set, please
5215
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
5216
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
5217
+ further instructions
5218
+
5219
+ main()
5220
+ W0621 21:59:17.894000 2378871 site-packages/torch/distributed/run.py:766]
5221
+ W0621 21:59:17.894000 2378871 site-packages/torch/distributed/run.py:766] *****************************************
5222
+ W0621 21:59:17.894000 2378871 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
5223
+ W0621 21:59:17.894000 2378871 site-packages/torch/distributed/run.py:766] *****************************************
attnserver.run_attnserver.slurm.sh.343246.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343247.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343247.out.log CHANGED
@@ -3933,3 +3933,1272 @@ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/mega
3933
  >>> done with dataset index builder. Compilation time: 0.058 seconds
3934
  > compiling and loading fused kernels ...
3935
  >>> done with compiling and loading fused kernels. Compilation time: 2.196 seconds
3936
+ time to initialize megatron (seconds): 7.659
3937
+ [after megatron is initialized] datetime: 2025-06-21 21:58:18
3938
+ building GPT model ...
3939
+ >>> embedding
3940
+ >>> decoder
3941
+ >>> output_layer
3942
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
3943
+ >>> embedding
3944
+ >>> decoder
3945
+ >>> output_layer
3946
+ >>> embedding
3947
+ >>> decoder
3948
+ >>> output_layer
3949
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
3950
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
3951
+ >>> embedding
3952
+ >>> decoder
3953
+ >>> output_layer
3954
+ >>> embedding
3955
+ >>> decoder
3956
+ >>> output_layer
3957
+ >>> embedding
3958
+ >>> decoder>>> embedding
3959
+ >>> output_layer
3960
+
3961
+ >>> decoder
3962
+ >>> output_layer
3963
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
3964
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
3965
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
3966
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 296302592
3967
+ >>> embedding
3968
+ >>> decoder
3969
+ >>> output_layer
3970
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 296302592
3971
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
3972
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
3973
+ Params for bucket 1 (296302592 elements, 296302592 padded size):
3974
+ module.decoder.layers.1.mlp.linear_fc2.weight
3975
+ module.decoder.layers.1.self_attention.linear_proj.bias
3976
+ module.decoder.layers.0.mlp.linear_fc1.weight
3977
+ module.decoder.final_layernorm.weight
3978
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
3979
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
3980
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
3981
+ module.decoder.layers.1.self_attention.linear_qkv.bias
3982
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
3983
+ module.decoder.layers.1.mlp.linear_fc1.weight
3984
+ module.decoder.layers.0.mlp.linear_fc1.bias
3985
+ module.decoder.layers.1.mlp.linear_fc2.bias
3986
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
3987
+ module.decoder.layers.0.self_attention.linear_qkv.weight
3988
+ module.decoder.layers.0.self_attention.linear_proj.weight
3989
+ module.embedding.word_embeddings.weight
3990
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
3991
+ module.decoder.layers.0.self_attention.linear_proj.bias
3992
+ module.decoder.layers.1.mlp.linear_fc1.bias
3993
+ module.decoder.layers.0.mlp.linear_fc2.weight
3994
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
3995
+ module.embedding.position_embeddings.weight
3996
+ module.decoder.final_layernorm.bias
3997
+ module.decoder.layers.1.self_attention.linear_qkv.weight
3998
+ module.decoder.layers.1.self_attention.linear_proj.weight
3999
+ module.decoder.layers.0.mlp.linear_fc2.bias
4000
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
4001
+ module.decoder.layers.0.self_attention.linear_qkv.bias
4002
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x149c1ed8a3f0>, config_logger_dir='')
4003
+ INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
4004
+ loading distributed checkpoint from gpt-checkpoint at iteration 10
4005
+ Running ctx_length=8192, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=16
4006
+ Cleaning up checkpoint directory: gpt-checkpoint
4007
+ --------------------------------
4008
+ CTX_LENGTH: 8192
4009
+ TP_SIZE: 2
4010
+ CP_SIZE: 4
4011
+ CHECKPOINT_PATH: gpt-checkpoint
4012
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
4013
+ --------------------------------
4014
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
4015
+ using world size: 8, data-parallel size: 1, context-parallel size: 4, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
4016
+ Number of virtual stages per pipeline stage: None
4017
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
4018
+ using torch.float16 for parameters ...
4019
+ ------------------------ arguments ------------------------
4020
+ account_for_embedding_in_pipeline_split ......... False
4021
+ account_for_loss_in_pipeline_split .............. False
4022
+ accumulate_allreduce_grads_in_fp32 .............. False
4023
+ adam_beta1 ...................................... 0.9
4024
+ adam_beta2 ...................................... 0.999
4025
+ adam_eps ........................................ 1e-08
4026
+ add_bias_linear ................................. True
4027
+ add_position_embedding .......................... True
4028
+ add_qkv_bias .................................... True
4029
+ adlr_autoresume ................................. False
4030
+ adlr_autoresume_interval ........................ 1000
4031
+ align_grad_reduce ............................... True
4032
+ align_param_gather .............................. False
4033
+ app_tag_run_name ................................ None
4034
+ app_tag_run_version ............................. 0.0.0
4035
+ apply_layernorm_1p .............................. False
4036
+ apply_query_key_layer_scaling ................... False
4037
+ apply_residual_connection_post_layernorm ........ False
4038
+ apply_rope_fusion ............................... False
4039
+ async_save ...................................... None
4040
+ async_tensor_model_parallel_allreduce ........... True
4041
+ attention_backend ............................... AttnBackend.auto
4042
+ attention_dropout ............................... 0.1
4043
+ attention_softmax_in_fp32 ....................... False
4044
+ auto_detect_ckpt_format ......................... False
4045
+ barrier_with_L1_time ............................ True
4046
+ bert_binary_head ................................ True
4047
+ bert_embedder_type .............................. megatron
4048
+ bert_load ....................................... None
4049
+ bf16 ............................................ False
4050
+ bias_dropout_fusion ............................. True
4051
+ bias_gelu_fusion ................................ True
4052
+ bias_swiglu_fusion .............................. True
4053
+ biencoder_projection_dim ........................ 0
4054
+ biencoder_shared_query_context_model ............ False
4055
+ block_data_path ................................. None
4056
+ calc_ft_timeouts ................................ False
4057
+ calculate_per_token_loss ........................ False
4058
+ check_for_large_grads ........................... False
4059
+ check_for_nan_in_loss_and_grad .................. False
4060
+ check_for_spiky_loss ............................ False
4061
+ check_weight_hash_across_dp_replicas_interval ... None
4062
+ ckpt_assume_constant_structure .................. False
4063
+ ckpt_convert_format ............................. None
4064
+ ckpt_convert_save ............................... None
4065
+ ckpt_convert_update_legacy_dist_opt_format ...... False
4066
+ ckpt_format ..................................... torch_dist
4067
+ ckpt_fully_parallel_load ........................ False
4068
+ ckpt_fully_parallel_save ........................ True
4069
+ ckpt_fully_parallel_save_deprecated ............. False
4070
+ ckpt_step ....................................... None
4071
+ classes_fraction ................................ 1.0
4072
+ clip_grad ....................................... 1.0
4073
+ clone_scatter_output_in_embedding ............... True
4074
+ config_logger_dir ...............................
4075
+ consumed_train_samples .......................... 0
4076
+ consumed_valid_samples .......................... 0
4077
+ context_parallel_size ........................... 4
4078
+ cp_comm_type .................................... ['p2p']
4079
+ create_attention_mask_in_dataloader ............. True
4080
+ cross_entropy_fusion_impl ....................... native
4081
+ cross_entropy_loss_fusion ....................... False
4082
+ cuda_graph_scope ................................ full
4083
+ cuda_graph_warmup_steps ......................... 3
4084
+ data_args_path .................................. None
4085
+ data_cache_path ................................. None
4086
+ data_parallel_random_init ....................... False
4087
+ data_parallel_sharding_strategy ................. no_shard
4088
+ data_parallel_size .............................. 1
4089
+ data_path ....................................... None
4090
+ data_per_class_fraction ......................... 1.0
4091
+ data_sharding ................................... True
4092
+ dataloader_type ................................. single
4093
+ ddp_average_in_collective ....................... False
4094
+ ddp_bucket_size ................................. None
4095
+ ddp_num_buckets ................................. None
4096
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
4097
+ decoder_first_pipeline_num_layers ............... None
4098
+ decoder_last_pipeline_num_layers ................ None
4099
+ decoder_num_layers .............................. None
4100
+ decoder_seq_length .............................. None
4101
+ decoupled_lr .................................... None
4102
+ decoupled_min_lr ................................ None
4103
+ decrease_batch_size_if_needed ................... False
4104
+ defer_embedding_wgrad_compute ................... False
4105
+ deprecated_use_mcore_models ..................... False
4106
+ deterministic_mode .............................. False
4107
+ dino_bottleneck_size ............................ 256
4108
+ dino_freeze_last_layer .......................... 1
4109
+ dino_head_hidden_size ........................... 2048
4110
+ dino_local_crops_number ......................... 10
4111
+ dino_local_img_size ............................. 96
4112
+ dino_norm_last_layer ............................ False
4113
+ dino_teacher_temp ............................... 0.07
4114
+ dino_warmup_teacher_temp ........................ 0.04
4115
+ dino_warmup_teacher_temp_epochs ................. 30
4116
+ disable_bf16_reduced_precision_matmul ........... False
4117
+ disable_mamba_mem_eff_path ...................... False
4118
+ disable_straggler_on_startup .................... False
4119
+ dist_ckpt_format_deprecated ..................... None
4120
+ dist_ckpt_strictness ............................ assume_ok_unexpected
4121
+ distribute_saved_activations .................... False
4122
+ distributed_backend ............................. nccl
4123
+ distributed_timeout_minutes ..................... 10
4124
+ embedding_path .................................. None
4125
+ empty_unused_memory_level ....................... 0
4126
+ enable_cuda_graph ............................... False
4127
+ enable_ft_package ............................... False
4128
+ enable_gloo_process_groups ...................... True
4129
+ enable_msc ...................................... True
4130
+ enable_one_logger ............................... True
4131
+ encoder_num_layers .............................. 2
4132
+ encoder_pipeline_model_parallel_size ............ 0
4133
+ encoder_seq_length .............................. 8192
4134
+ encoder_tensor_model_parallel_size .............. 0
4135
+ end_weight_decay ................................ 0.1
4136
+ eod_mask_loss ................................... False
4137
+ error_injection_rate ............................ 0
4138
+ error_injection_type ............................ transient_error
4139
+ eval_interval ................................... 16
4140
+ eval_iters ...................................... 1
4141
+ evidence_data_path .............................. None
4142
+ exit_duration_in_mins ........................... None
4143
+ exit_interval ................................... None
4144
+ exit_on_missing_checkpoint ...................... False
4145
+ exit_signal_handler ............................. False
4146
+ exp_avg_dtype ................................... torch.float32
+ exp_avg_sq_dtype ................................ torch.float32
+ expert_model_parallel_size ...................... 1
+ expert_tensor_parallel_size ..................... 2
+ external_cuda_graph ............................. False
+ ffn_hidden_size ................................. 16384
+ finetune ........................................ False
+ first_last_layers_bf16 .......................... False
+ flash_decode .................................... False
+ fp16 ............................................ True
+ fp16_lm_cross_entropy ........................... False
+ fp32_residual_connection ........................ False
+ fp8 ............................................. None
+ fp8_amax_compute_algo ........................... most_recent
+ fp8_amax_history_len ............................ 1
+ fp8_interval .................................... 1
+ fp8_margin ...................................... 0
+ fp8_param_gather ................................ False
+ fp8_recipe ...................................... delayed
+ fp8_wgrad ....................................... True
+ fsdp_double_buffer .............................. False
+ global_batch_size ............................... 1
+ grad_reduce_in_bf16 ............................. False
+ gradient_accumulation_fusion .................... True
+ gradient_reduce_div_fusion ...................... True
+ group_query_attention ........................... True
+ head_lr_mult .................................... 1.0
+ heterogeneous_layers_config_encoded_json ........ None
+ heterogeneous_layers_config_path ................ None
+ hidden_dropout .................................. 0.1
+ hidden_size ..................................... 4096
+ hierarchical_context_parallel_sizes ............. None
+ high_priority_stream_groups ..................... []
+ hybrid_attention_ratio .......................... 0.0
+ hybrid_mlp_ratio ................................ 0.0
+ hybrid_override_pattern ......................... None
+ hysteresis ...................................... 2
+ ict_head_size ................................... None
+ ict_load ........................................ None
+ img_h ........................................... 224
+ img_w ........................................... 224
+ indexer_batch_size .............................. 128
+ indexer_log_interval ............................ 1000
+ inference_batch_times_seqlen_threshold .......... -1
+ inference_dynamic_batching ...................... False
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
+ inference_dynamic_batching_buffer_overflow_factor None
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
+ inference_dynamic_batching_chunk_size ........... 256
+ inference_dynamic_batching_max_requests_override None
+ inference_dynamic_batching_max_tokens_override .. None
+ inference_max_batch_size ........................ 8
+ inference_max_seq_length ........................ 2560
+ inference_rng_tracker ........................... False
+ init_method_std ................................. 0.02
+ init_method_xavier_uniform ...................... False
+ init_model_with_meta_device ..................... False
+ initial_loss_scale .............................. 4294967296
+ inprocess_active_world_size ..................... 8
+ inprocess_barrier_timeout ....................... 120
+ inprocess_completion_timeout .................... 120
+ inprocess_empty_cuda_cache ...................... False
+ inprocess_granularity ........................... node
+ inprocess_hard_timeout .......................... 90
+ inprocess_heartbeat_interval .................... 30
+ inprocess_heartbeat_timeout ..................... 60
+ inprocess_last_call_wait ........................ 1
+ inprocess_max_iterations ........................ None
+ inprocess_monitor_process_interval .............. 1.0
+ inprocess_monitor_thread_interval ............... 1.0
+ inprocess_progress_watchdog_interval ............ 1.0
+ inprocess_restart ............................... False
+ inprocess_soft_timeout .......................... 60
+ inprocess_termination_grace_time ................ 1
+ is_hybrid_model ................................. False
+ iter_per_epoch .................................. 1250
+ iterations_to_skip .............................. []
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
+ kv_channels ..................................... 64
+ kv_lora_rank .................................... 32
+ lazy_mpu_init ................................... None
+ load ............................................ gpt-checkpoint
+ load_model_opt_format ........................... False
+ local_rank ...................................... 0
+ log_interval .................................... 1
+ log_loss_scale_to_tensorboard ................... True
+ log_memory_to_tensorboard ....................... False
+ log_num_zeros_in_grad ........................... False
+ log_params_norm ................................. False
+ log_progress .................................... False
+ log_straggler ................................... False
+ log_throughput .................................. False
+ log_timers_to_tensorboard ....................... False
+ log_validation_ppl_to_tensorboard ............... False
+ log_world_size_to_tensorboard ................... False
+ logging_level ................................... 0
+ loss_scale ...................................... None
+ loss_scale_window ............................... 1000
+ lr .............................................. 0.0005
+ lr_decay_iters .................................. 150000
+ lr_decay_samples ................................ None
+ lr_decay_style .................................. cosine
+ lr_warmup_fraction .............................. None
+ lr_warmup_init .................................. 0.0
+ lr_warmup_iters ................................. 2
+ lr_warmup_samples ............................... 0
+ lr_wsd_decay_iters .............................. None
+ lr_wsd_decay_samples ............................ None
+ lr_wsd_decay_style .............................. exponential
+ main_grads_dtype ................................ torch.float32
+ main_params_dtype ............................... torch.float32
+ make_vocab_size_divisible_by .................... 128
+ mamba_head_dim .................................. 64
+ mamba_num_groups ................................ 8
+ mamba_num_heads ................................. None
+ mamba_state_dim ................................. 128
+ manual_gc ....................................... False
+ manual_gc_eval .................................. True
+ manual_gc_interval .............................. 0
+ mask_factor ..................................... 1.0
+ mask_prob ....................................... 0.15
+ mask_type ....................................... random
+ masked_softmax_fusion ........................... True
+ max_position_embeddings ......................... 8192
+ max_tokens_to_oom ............................... 12000
+ memory_snapshot_path ............................ snapshot.pickle
+ merge_file ...................................... merges.txt
+ micro_batch_size ................................ 1
+ microbatch_group_size_per_vp_stage .............. None
+ mid_level_dataset_surplus ....................... 0.005
+ min_loss_scale .................................. 1.0
+ min_lr .......................................... 0.0
+ mlp_chunks_for_prefill .......................... 1
+ mmap_bin_files .................................. True
+ mock_data ....................................... True
+ moe_apply_probs_on_input ........................ False
+ moe_aux_loss_coeff .............................. 0.0
+ moe_enable_deepep ............................... False
+ moe_expert_capacity_factor ...................... None
+ moe_extended_tp ................................. False
+ moe_ffn_hidden_size ............................. None
+ moe_grouped_gemm ................................ False
+ moe_input_jitter_eps ............................ None
+ moe_layer_freq .................................. 1
+ moe_layer_recompute ............................. False
+ moe_pad_expert_input_to_capacity ................ False
+ moe_per_layer_logging ........................... False
+ moe_permute_fusion .............................. False
+ moe_router_bias_update_rate ..................... 0.001
+ moe_router_dtype ................................ None
+ moe_router_enable_expert_bias ................... False
+ moe_router_force_load_balancing ................. False
+ moe_router_group_topk ........................... None
+ moe_router_load_balancing_type .................. aux_loss
+ moe_router_num_groups ........................... None
+ moe_router_padding_for_fp8 ...................... False
+ moe_router_pre_softmax .......................... False
+ moe_router_score_function ....................... softmax
+ moe_router_topk ................................. 2
+ moe_router_topk_scaling_factor .................. None
+ moe_shared_expert_intermediate_size ............. None
+ moe_shared_expert_overlap ....................... False
+ moe_token_dispatcher_type ....................... allgather
+ moe_token_drop_policy ........................... probs
+ moe_use_legacy_grouped_gemm ..................... False
+ moe_use_upcycling ............................... False
+ moe_z_loss_coeff ................................ None
+ mrope_section ................................... None
+ mscale .......................................... 1.0
+ mscale_all_dim .................................. 1.0
+ mtp_loss_scaling_factor ......................... 0.1
+ mtp_num_layers .................................. None
+ multi_latent_attention .......................... False
+ nccl_all_reduce_for_prefill ..................... False
+ nccl_communicator_config_path ................... None
+ nccl_ub ......................................... False
+ no_load_optim ................................... None
+ no_load_rng ..................................... None
+ no_persist_layer_norm ........................... False
+ no_rope_freq .................................... None
+ no_save_optim ................................... None
+ no_save_rng ..................................... None
+ non_persistent_ckpt_type ........................ None
+ non_persistent_global_ckpt_dir .................. None
+ non_persistent_local_ckpt_algo .................. fully_parallel
+ non_persistent_local_ckpt_dir ................... None
+ non_persistent_save_interval .................... None
+ norm_epsilon .................................... 1e-05
+ normalization ................................... LayerNorm
+ num_attention_heads ............................. 64
+ num_channels .................................... 3
+ num_classes ..................................... 1000
+ num_dataset_builder_threads ..................... 1
+ num_distributed_optimizer_instances ............. 1
+ num_experts ..................................... None
+ num_layers ...................................... 2
+ num_layers_at_end_in_bf16 ....................... 1
+ num_layers_at_start_in_bf16 ..................... 1
+ num_layers_per_virtual_pipeline_stage ........... None
+ num_query_groups ................................ 16
+ num_virtual_stages_per_pipeline_rank ............ None
+ num_workers ..................................... 2
+ object_storage_cache_path ....................... None
+ one_logger_async ................................ False
+ one_logger_project .............................. megatron-lm
+ one_logger_run_name ............................. None
+ onnx_safe ....................................... None
+ openai_gelu ..................................... False
+ optimizer ....................................... adam
+ optimizer_cpu_offload ........................... False
+ optimizer_offload_fraction ...................... 1.0
+ output_bert_embeddings .......................... False
+ overlap_cpu_optimizer_d2h_h2d ................... False
+ overlap_grad_reduce ............................. False
+ overlap_p2p_comm ................................ False
+ overlap_p2p_comm_warmup_flush ................... False
+ overlap_param_gather ............................ False
+ overlap_param_gather_with_optimizer_step ........ False
+ override_opt_param_scheduler .................... False
+ params_dtype .................................... torch.float16
+ patch_dim ....................................... 16
+ per_split_data_args_path ........................ None
+ perform_initialization .......................... True
+ pin_cpu_grads ................................... True
+ pin_cpu_params .................................. True
+ pipeline_model_parallel_comm_backend ............ None
+ pipeline_model_parallel_size .................... 1
+ pipeline_model_parallel_split_rank .............. None
+ position_embedding_type ......................... learned_absolute
+ pretrained_checkpoint ........................... None
+ profile ......................................... False
+ profile_ranks ................................... [0]
+ profile_step_end ................................ 12
+ profile_step_start .............................. 10
+ q_lora_rank ..................................... None
+ qk_head_dim ..................................... 128
+ qk_l2_norm ...................................... False
+ qk_layernorm .................................... False
+ qk_pos_emb_head_dim ............................. 64
+ query_in_block_prob ............................. 0.1
+ rampup_batch_size ............................... None
+ rank ............................................ 0
+ recompute_granularity ........................... None
+ recompute_method ................................ None
+ recompute_modules ............................... None
+ recompute_num_layers ............................ None
+ record_memory_history ........................... False
+ relative_attention_max_distance ................. 128
+ relative_attention_num_buckets .................. 32
+ replication ..................................... False
+ replication_factor .............................. 2
+ replication_jump ................................ None
+ rerun_mode ...................................... disabled
+ reset_attention_mask ............................ False
+ reset_position_ids .............................. False
+ result_rejected_tracker_filename ................ None
+ retriever_report_topk_accuracies ................ []
+ retriever_score_scaling ......................... False
+ retriever_seq_length ............................ 256
+ retro_add_retriever ............................. False
+ retro_attention_gate ............................ 1
+ retro_cyclic_train_iters ........................ None
+ retro_encoder_attention_dropout ................. 0.1
+ retro_encoder_hidden_dropout .................... 0.1
+ retro_encoder_layers ............................ 2
+ retro_num_neighbors ............................. 2
+ retro_num_retrieved_chunks ...................... 2
+ retro_project_dir ............................... None
+ retro_verify_neighbor_count ..................... True
+ rope_scaling_factor ............................. 8.0
+ rotary_base ..................................... 10000
+ rotary_interleaved .............................. False
+ rotary_percent .................................. 1.0
+ rotary_scaling_factor ........................... 1.0
+ rotary_seq_len_interpolation_factor ............. None
+ run_workload_inspector_server ................... False
+ sample_rate ..................................... 1.0
+ save ............................................ gpt-checkpoint
+ save_interval ................................... 16
+ scatter_gather_tensors_in_pipeline .............. True
+ seed ............................................ 1234
+ seq_length ...................................... 8192
+ sequence_parallel ............................... False
+ sgd_momentum .................................... 0.9
+ short_seq_prob .................................. 0.1
+ skip_train ...................................... False
+ skipped_train_samples ........................... 0
+ spec ............................................ None
+ split ........................................... None
+ squared_relu .................................... False
+ start_weight_decay .............................. 0.1
+ straggler_ctrlr_port ............................ 65535
+ straggler_minmax_count .......................... 1
+ suggested_communication_unit_size ............... None
+ swiglu .......................................... False
+ swin_backbone_type .............................. tiny
+ symmetric_ar_type ............................... None
+ te_rng_tracker .................................. False
+ tensor_model_parallel_size ...................... 2
+ tensorboard_dir ................................. tensorboard-logs/
+ tensorboard_log_interval ........................ 1
+ tensorboard_queue_size .......................... 1000
+ test_data_path .................................. None
+ test_mode ....................................... False
+ tiktoken_num_special_tokens ..................... 1000
+ tiktoken_pattern ................................ None
+ tiktoken_special_tokens ......................... None
+ timing_log_level ................................ 0
+ timing_log_option ............................... minmax
+ titles_data_path ................................ None
+ tokenizer_model ................................. None
+ tokenizer_type .................................. GPT2BPETokenizer
+ torch_fsdp2_reshard_after_forward ............... True
+ tp_comm_bootstrap_backend ....................... nccl
+ tp_comm_bulk_dgrad .............................. True
+ tp_comm_bulk_wgrad .............................. True
+ tp_comm_overlap ................................. False
+ tp_comm_overlap_ag .............................. True
+ tp_comm_overlap_cfg ............................. None
+ tp_comm_overlap_rs .............................. True
+ tp_comm_overlap_rs_dgrad ........................ False
+ tp_comm_split_ag ................................ True
+ tp_comm_split_rs ................................ True
+ train_data_path ................................. None
+ train_iters ..................................... 10
+ train_samples ................................... None
+ train_sync_interval ............................. None
+ transformer_impl ................................ transformer_engine
+ transformer_pipeline_model_parallel_size ........ 1
+ untie_embeddings_and_output_weights ............. False
+ use_checkpoint_args ............................. False
+ use_checkpoint_opt_param_scheduler .............. False
+ use_cpu_initialization .......................... None
+ use_custom_fsdp ................................. False
+ use_dist_ckpt ................................... True
+ use_dist_ckpt_deprecated ........................ False
+ use_distributed_optimizer ....................... False
+ use_flash_attn .................................. False
+ use_legacy_models ............................... False
+ use_mp_args_from_checkpoint_args ................ False
+ use_one_sent_docs ............................... False
+ use_persistent_ckpt_worker ...................... False
+ use_precision_aware_optimizer ................... False
+ use_pytorch_profiler ............................ False
+ use_ring_exchange_p2p ........................... False
+ use_rope_scaling ................................ False
+ use_rotary_position_embeddings .................. False
+ use_sharp ....................................... False
+ use_tokenizer_model_from_checkpoint_args ........ True
+ use_torch_fsdp2 ................................. False
+ use_torch_optimizer_for_cpu_offload ............. False
+ use_tp_pp_dp_mapping ............................ False
+ v_head_dim ...................................... 128
+ valid_data_path ................................. None
+ variable_seq_lengths ............................ False
+ virtual_pipeline_model_parallel_size ............ None
+ vision_backbone_type ............................ vit
+ vision_pretraining .............................. False
+ vision_pretraining_type ......................... classify
+ vocab_extra_ids ................................. 0
+ vocab_file ...................................... vocab.json
+ vocab_size ...................................... None
+ wandb_exp_name ..................................
+ wandb_project ...................................
+ wandb_save_dir ..................................
+ weight_decay .................................... 0.1
+ weight_decay_incr_style ......................... constant
+ wgrad_deferral_limit ............................ 0
+ world_size ...................................... 8
+ yaml_cfg ........................................ None
+ -------------------- end of arguments ---------------------
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
+ > building GPT2BPETokenizer tokenizer ...
+ > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
+ INFO:megatron.training.initialize:Setting logging level to 0
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
+ > initializing torch distributed ...
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ INFO:megatron.training.initialize:Setting logging level to 0
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
+ INFO:megatron.training.initialize:Setting logging level to 0
+ > initialized tensor model parallel with size 2
+ > initialized pipeline model parallel with size 1
+ > setting random seeds to 1234 ...
+ > compiling dataset index builder ...
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
+ make: Nothing to be done for 'default'.
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
+ >>> done with dataset index builder. Compilation time: 0.053 seconds
+ > compiling and loading fused kernels ...
+ >>> done with compiling and loading fused kernels. Compilation time: 2.115 seconds
+ time to initialize megatron (seconds): 7.230
+ [after megatron is initialized] datetime: 2025-06-21 21:59:01
+ building GPT model ...
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 313079808
+ >>> embedding
+ >>> decoder
+ >>> output_layer
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 313079808
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
+ Params for bucket 1 (313079808 elements, 313079808 padded size):
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
+ module.decoder.layers.0.mlp.linear_fc2.weight
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
+ module.decoder.final_layernorm.weight
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.bias
+ module.decoder.layers.0.mlp.linear_fc2.bias
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.bias
+ module.decoder.layers.1.mlp.linear_fc1.weight
+ module.decoder.layers.0.mlp.linear_fc1.weight
+ module.decoder.layers.1.mlp.linear_fc2.bias
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
+ module.embedding.word_embeddings.weight
+ module.decoder.layers.0.mlp.linear_fc1.bias
+ module.decoder.layers.1.mlp.linear_fc1.bias
+ module.decoder.layers.1.self_attention.linear_qkv.weight
+ module.decoder.layers.1.self_attention.linear_proj.weight
+ module.decoder.layers.0.self_attention.linear_qkv.weight
+ module.decoder.layers.0.self_attention.linear_proj.weight
+ module.embedding.position_embeddings.weight
+ module.decoder.final_layernorm.bias
+ module.decoder.layers.1.mlp.linear_fc2.weight
+ module.decoder.layers.1.self_attention.linear_proj.bias
+ module.decoder.layers.0.self_attention.linear_proj.bias
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14f72fbf6510>, config_logger_dir='')
+ INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
+ WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
+ will not load any checkpoints and will start from random
+ (min, max) time across ranks (ms):
+ load-checkpoint ................................: (2.99, 3.13)
+ [after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:59:01
+ > building train, validation, and test datasets ...
+ > datasets target sizes (minimum size):
+ train: 10
+ validation: 1
+ test: 1
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
+ > building train, validation, and test datasets for GPT ...
+ INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=8192, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14f730026480>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005166 seconds
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8324
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001889 seconds
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8320
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001729 seconds
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 8335
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
+ > finished creating GPT datasets ...
+ [after dataloaders are built] datetime: 2025-06-21 21:59:01
+ done with setup ...
+ (min, max) time across ranks (ms):
+ model-and-optimizer-setup ......................: (584.26, 602.26)
+ train/valid/test-data-iterators-setup ..........: (32.39, 161.16)
+ training ...
+ Setting rerun_state_machine.current_iteration to 0...
+ [before the start of training step] datetime: 2025-06-21 21:59:01
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 256.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 256.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
4656
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 256.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4657
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
4658
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 256.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4659
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
4660
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 256.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4661
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
4662
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 256.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4663
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
4664
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 256.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4665
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
4666
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 256.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
4667
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 135.91 GiB is free. Including non-PyTorch memory, this process has 3.89 GiB memory in use. Of the allocated memory 2.37 GiB is allocated by PyTorch, and 63.54 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
4668
+ Running ctx_length=12288, TP_SIZE=2, CP_SIZE=4, BATCH_SIZE=16
+ Cleaning up checkpoint directory: gpt-checkpoint
+ --------------------------------
+ CTX_LENGTH: 12288
+ TP_SIZE: 2
+ CP_SIZE: 4
+ CHECKPOINT_PATH: gpt-checkpoint
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
+ --------------------------------
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
+ INFO:megatron.training.initialize:Setting logging level to 0
+ using world size: 8, data-parallel size: 1, context-parallel size: 4, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 2, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
+ Number of virtual stages per pipeline stage: None
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
+ using torch.float16 for parameters ...
+ ------------------------ arguments ------------------------
+ account_for_embedding_in_pipeline_split ......... False
+ account_for_loss_in_pipeline_split .............. False
+ accumulate_allreduce_grads_in_fp32 .............. False
+ adam_beta1 ...................................... 0.9
+ adam_beta2 ...................................... 0.999
+ adam_eps ........................................ 1e-08
+ add_bias_linear ................................. True
+ add_position_embedding .......................... True
+ add_qkv_bias .................................... True
+ adlr_autoresume ................................. False
+ adlr_autoresume_interval ........................ 1000
+ align_grad_reduce ............................... True
+ align_param_gather .............................. False
+ app_tag_run_name ................................ None
+ app_tag_run_version ............................. 0.0.0
+ apply_layernorm_1p .............................. False
+ apply_query_key_layer_scaling ................... False
+ apply_residual_connection_post_layernorm ........ False
+ apply_rope_fusion ............................... False
+ async_save ...................................... None
+ async_tensor_model_parallel_allreduce ........... True
+ attention_backend ............................... AttnBackend.auto
+ attention_dropout ............................... 0.1
+ attention_softmax_in_fp32 ....................... False
+ auto_detect_ckpt_format ......................... False
+ barrier_with_L1_time ............................ True
+ bert_binary_head ................................ True
+ bert_embedder_type .............................. megatron
+ bert_load ....................................... None
+ bf16 ............................................ False
+ bias_dropout_fusion ............................. True
+ bias_gelu_fusion ................................ True
+ bias_swiglu_fusion .............................. True
+ biencoder_projection_dim ........................ 0
+ biencoder_shared_query_context_model ............ False
+ block_data_path ................................. None
+ calc_ft_timeouts ................................ False
+ calculate_per_token_loss ........................ False
+ check_for_large_grads ........................... False
+ check_for_nan_in_loss_and_grad .................. False
+ check_for_spiky_loss ............................ False
+ check_weight_hash_across_dp_replicas_interval ... None
+ ckpt_assume_constant_structure .................. False
+ ckpt_convert_format ............................. None
+ ckpt_convert_save ............................... None
+ ckpt_convert_update_legacy_dist_opt_format ...... False
+ ckpt_format ..................................... torch_dist
+ ckpt_fully_parallel_load ........................ False
+ ckpt_fully_parallel_save ........................ True
+ ckpt_fully_parallel_save_deprecated ............. False
+ ckpt_step ....................................... None
+ classes_fraction ................................ 1.0
+ clip_grad ....................................... 1.0
+ clone_scatter_output_in_embedding ............... True
+ config_logger_dir ...............................
+ consumed_train_samples .......................... 0
+ consumed_valid_samples .......................... 0
+ context_parallel_size ........................... 4
+ cp_comm_type .................................... ['p2p']
+ create_attention_mask_in_dataloader ............. True
+ cross_entropy_fusion_impl ....................... native
+ cross_entropy_loss_fusion ....................... False
+ cuda_graph_scope ................................ full
+ cuda_graph_warmup_steps ......................... 3
+ data_args_path .................................. None
+ data_cache_path ................................. None
+ data_parallel_random_init ....................... False
+ data_parallel_sharding_strategy ................. no_shard
+ data_parallel_size .............................. 1
+ data_path ....................................... None
+ data_per_class_fraction ......................... 1.0
+ data_sharding ................................... True
+ dataloader_type ................................. single
+ ddp_average_in_collective ....................... False
+ ddp_bucket_size ................................. None
+ ddp_num_buckets ................................. None
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
+ decoder_first_pipeline_num_layers ............... None
+ decoder_last_pipeline_num_layers ................ None
+ decoder_num_layers .............................. None
+ decoder_seq_length .............................. None
+ decoupled_lr .................................... None
+ decoupled_min_lr ................................ None
+ decrease_batch_size_if_needed ................... False
+ defer_embedding_wgrad_compute ................... False
+ deprecated_use_mcore_models ..................... False
+ deterministic_mode .............................. False
+ dino_bottleneck_size ............................ 256
+ dino_freeze_last_layer .......................... 1
+ dino_head_hidden_size ........................... 2048
+ dino_local_crops_number ......................... 10
+ dino_local_img_size ............................. 96
+ dino_norm_last_layer ............................ False
+ dino_teacher_temp ............................... 0.07
+ dino_warmup_teacher_temp ........................ 0.04
+ dino_warmup_teacher_temp_epochs ................. 30
+ disable_bf16_reduced_precision_matmul ........... False
+ disable_mamba_mem_eff_path ...................... False
+ disable_straggler_on_startup .................... False
+ dist_ckpt_format_deprecated ..................... None
+ dist_ckpt_strictness ............................ assume_ok_unexpected
+ distribute_saved_activations .................... False
+ distributed_backend ............................. nccl
+ distributed_timeout_minutes ..................... 10
+ embedding_path .................................. None
+ empty_unused_memory_level ....................... 0
+ enable_cuda_graph ............................... False
+ enable_ft_package ............................... False
+ enable_gloo_process_groups ...................... True
+ enable_msc ...................................... True
+ enable_one_logger ............................... True
+ encoder_num_layers .............................. 2
+ encoder_pipeline_model_parallel_size ............ 0
+ encoder_seq_length .............................. 12288
+ encoder_tensor_model_parallel_size .............. 0
+ end_weight_decay ................................ 0.1
+ eod_mask_loss ................................... False
+ error_injection_rate ............................ 0
+ error_injection_type ............................ transient_error
+ eval_interval ................................... 16
+ eval_iters ...................................... 1
+ evidence_data_path .............................. None
+ exit_duration_in_mins ........................... None
+ exit_interval ................................... None
+ exit_on_missing_checkpoint ...................... False
+ exit_signal_handler ............................. False
+ exp_avg_dtype ................................... torch.float32
+ exp_avg_sq_dtype ................................ torch.float32
+ expert_model_parallel_size ...................... 1
+ expert_tensor_parallel_size ..................... 2
+ external_cuda_graph ............................. False
+ ffn_hidden_size ................................. 16384
+ finetune ........................................ False
+ first_last_layers_bf16 .......................... False
+ flash_decode .................................... False
+ fp16 ............................................ True
+ fp16_lm_cross_entropy ........................... False
+ fp32_residual_connection ........................ False
+ fp8 ............................................. None
+ fp8_amax_compute_algo ........................... most_recent
+ fp8_amax_history_len ............................ 1
+ fp8_interval .................................... 1
+ fp8_margin ...................................... 0
+ fp8_param_gather ................................ False
+ fp8_recipe ...................................... delayed
+ fp8_wgrad ....................................... True
+ fsdp_double_buffer .............................. False
+ global_batch_size ............................... 1
+ grad_reduce_in_bf16 ............................. False
+ gradient_accumulation_fusion .................... True
+ gradient_reduce_div_fusion ...................... True
+ group_query_attention ........................... True
+ head_lr_mult .................................... 1.0
+ heterogeneous_layers_config_encoded_json ........ None
+ heterogeneous_layers_config_path ................ None
+ hidden_dropout .................................. 0.1
+ hidden_size ..................................... 4096
+ hierarchical_context_parallel_sizes ............. None
+ high_priority_stream_groups ..................... []
+ hybrid_attention_ratio .......................... 0.0
+ hybrid_mlp_ratio ................................ 0.0
+ hybrid_override_pattern ......................... None
+ hysteresis ...................................... 2
+ ict_head_size ................................... None
+ ict_load ........................................ None
+ img_h ........................................... 224
+ img_w ........................................... 224
+ indexer_batch_size .............................. 128
+ indexer_log_interval ............................ 1000
+ inference_batch_times_seqlen_threshold .......... -1
+ inference_dynamic_batching ...................... False
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
+ inference_dynamic_batching_buffer_overflow_factor None
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
+ inference_dynamic_batching_chunk_size ........... 256
+ inference_dynamic_batching_max_requests_override None
+ inference_dynamic_batching_max_tokens_override .. None
+ inference_max_batch_size ........................ 8
+ inference_max_seq_length ........................ 2560
+ inference_rng_tracker ........................... False
+ init_method_std ................................. 0.02
+ init_method_xavier_uniform ...................... False
+ init_model_with_meta_device ..................... False
+ initial_loss_scale .............................. 4294967296
+ inprocess_active_world_size ..................... 8
+ inprocess_barrier_timeout ....................... 120
+ inprocess_completion_timeout .................... 120
+ inprocess_empty_cuda_cache ...................... False
+ inprocess_granularity ........................... node
+ inprocess_hard_timeout .......................... 90
+ inprocess_heartbeat_interval .................... 30
+ inprocess_heartbeat_timeout ..................... 60
+ inprocess_last_call_wait ........................ 1
+ inprocess_max_iterations ........................ None
+ inprocess_monitor_process_interval .............. 1.0
+ inprocess_monitor_thread_interval ............... 1.0
+ inprocess_progress_watchdog_interval ............ 1.0
+ inprocess_restart ............................... False
+ inprocess_soft_timeout .......................... 60
+ inprocess_termination_grace_time ................ 1
+ is_hybrid_model ................................. False
+ iter_per_epoch .................................. 1250
+ iterations_to_skip .............................. []
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
+ kv_channels ..................................... 64
+ kv_lora_rank .................................... 32
+ lazy_mpu_init ................................... None
+ load ............................................ gpt-checkpoint
+ load_model_opt_format ........................... False
+ local_rank ...................................... 0
+ log_interval .................................... 1
+ log_loss_scale_to_tensorboard ................... True
+ log_memory_to_tensorboard ....................... False
+ log_num_zeros_in_grad ........................... False
+ log_params_norm ................................. False
+ log_progress .................................... False
+ log_straggler ................................... False
+ log_throughput .................................. False
+ log_timers_to_tensorboard ....................... False
+ log_validation_ppl_to_tensorboard ............... False
+ log_world_size_to_tensorboard ................... False
+ logging_level ................................... 0
+ loss_scale ...................................... None
+ loss_scale_window ............................... 1000
+ lr .............................................. 0.0005
+ lr_decay_iters .................................. 150000
+ lr_decay_samples ................................ None
+ lr_decay_style .................................. cosine
+ lr_warmup_fraction .............................. None
+ lr_warmup_init .................................. 0.0
+ lr_warmup_iters ................................. 2
+ lr_warmup_samples ............................... 0
+ lr_wsd_decay_iters .............................. None
+ lr_wsd_decay_samples ............................ None
+ lr_wsd_decay_style .............................. exponential
+ main_grads_dtype ................................ torch.float32
+ main_params_dtype ............................... torch.float32
+ make_vocab_size_divisible_by .................... 128
+ mamba_head_dim .................................. 64
+ mamba_num_groups ................................ 8
+ mamba_num_heads ................................. None
+ mamba_state_dim ................................. 128
+ manual_gc ....................................... False
+ manual_gc_eval .................................. True
+ manual_gc_interval .............................. 0
+ mask_factor ..................................... 1.0
+ mask_prob ....................................... 0.15
+ mask_type ....................................... random
+ masked_softmax_fusion ........................... True
+ max_position_embeddings ......................... 12288
+ max_tokens_to_oom ............................... 12000
+ memory_snapshot_path ............................ snapshot.pickle
+ merge_file ...................................... merges.txt
+ micro_batch_size ................................ 1
+ microbatch_group_size_per_vp_stage .............. None
+ mid_level_dataset_surplus ....................... 0.005
+ min_loss_scale .................................. 1.0
+ min_lr .......................................... 0.0
+ mlp_chunks_for_prefill .......................... 1
+ mmap_bin_files .................................. True
+ mock_data ....................................... True
+ moe_apply_probs_on_input ........................ False
+ moe_aux_loss_coeff .............................. 0.0
+ moe_enable_deepep ............................... False
+ moe_expert_capacity_factor ...................... None
+ moe_extended_tp ................................. False
+ moe_ffn_hidden_size ............................. None
+ moe_grouped_gemm ................................ False
+ moe_input_jitter_eps ............................ None
+ moe_layer_freq .................................. 1
+ moe_layer_recompute ............................. False
+ moe_pad_expert_input_to_capacity ................ False
+ moe_per_layer_logging ........................... False
+ moe_permute_fusion .............................. False
+ moe_router_bias_update_rate ..................... 0.001
+ moe_router_dtype ................................ None
+ moe_router_enable_expert_bias ................... False
+ moe_router_force_load_balancing ................. False
+ moe_router_group_topk ........................... None
+ moe_router_load_balancing_type .................. aux_loss
+ moe_router_num_groups ........................... None
+ moe_router_padding_for_fp8 ...................... False
+ moe_router_pre_softmax .......................... False
+ moe_router_score_function ....................... softmax
+ moe_router_topk ................................. 2
+ moe_router_topk_scaling_factor .................. None
+ moe_shared_expert_intermediate_size ............. None
+ moe_shared_expert_overlap ....................... False
+ moe_token_dispatcher_type ....................... allgather
+ moe_token_drop_policy ........................... probs
+ moe_use_legacy_grouped_gemm ..................... False
+ moe_use_upcycling ............................... False
+ moe_z_loss_coeff ................................ None
+ mrope_section ................................... None
+ mscale .......................................... 1.0
+ mscale_all_dim .................................. 1.0
+ mtp_loss_scaling_factor ......................... 0.1
+ mtp_num_layers .................................. None
+ multi_latent_attention .......................... False
+ nccl_all_reduce_for_prefill ..................... False
+ nccl_communicator_config_path ................... None
+ nccl_ub ......................................... False
+ no_load_optim ................................... None
+ no_load_rng ..................................... None
+ no_persist_layer_norm ........................... False
+ no_rope_freq .................................... None
+ no_save_optim ................................... None
+ no_save_rng ..................................... None
+ non_persistent_ckpt_type ........................ None
+ non_persistent_global_ckpt_dir .................. None
+ non_persistent_local_ckpt_algo .................. fully_parallel
4995
+ non_persistent_local_ckpt_dir ................... None
4996
+ non_persistent_save_interval .................... None
4997
+ norm_epsilon .................................... 1e-05
4998
+ normalization ................................... LayerNorm
4999
+ num_attention_heads ............................. 64
5000
+ num_channels .................................... 3
5001
+ num_classes ..................................... 1000
5002
+ num_dataset_builder_threads ..................... 1
5003
+ num_distributed_optimizer_instances ............. 1
5004
+ num_experts ..................................... None
5005
+ num_layers ...................................... 2
5006
+ num_layers_at_end_in_bf16 ....................... 1
5007
+ num_layers_at_start_in_bf16 ..................... 1
5008
+ num_layers_per_virtual_pipeline_stage ........... None
5009
+ num_query_groups ................................ 16
5010
+ num_virtual_stages_per_pipeline_rank ............ None
5011
+ num_workers ..................................... 2
5012
+ object_storage_cache_path ....................... None
5013
+ one_logger_async ................................ False
5014
+ one_logger_project .............................. megatron-lm
5015
+ one_logger_run_name ............................. None
5016
+ onnx_safe ....................................... None
5017
+ openai_gelu ..................................... False
5018
+ optimizer ....................................... adam
5019
+ optimizer_cpu_offload ........................... False
5020
+ optimizer_offload_fraction ...................... 1.0
5021
+ output_bert_embeddings .......................... False
5022
+ overlap_cpu_optimizer_d2h_h2d ................... False
5023
+ overlap_grad_reduce ............................. False
5024
+ overlap_p2p_comm ................................ False
5025
+ overlap_p2p_comm_warmup_flush ................... False
5026
+ overlap_param_gather ............................ False
5027
+ overlap_param_gather_with_optimizer_step ........ False
5028
+ override_opt_param_scheduler .................... False
5029
+ params_dtype .................................... torch.float16
5030
+ patch_dim ....................................... 16
5031
+ per_split_data_args_path ........................ None
5032
+ perform_initialization .......................... True
5033
+ pin_cpu_grads ................................... True
5034
+ pin_cpu_params .................................. True
5035
+ pipeline_model_parallel_comm_backend ............ None
5036
+ pipeline_model_parallel_size .................... 1
5037
+ pipeline_model_parallel_split_rank .............. None
5038
+ position_embedding_type ......................... learned_absolute
5039
+ pretrained_checkpoint ........................... None
5040
+ profile ......................................... False
5041
+ profile_ranks ................................... [0]
5042
+ profile_step_end ................................ 12
5043
+ profile_step_start .............................. 10
5044
+ q_lora_rank ..................................... None
5045
+ qk_head_dim ..................................... 128
5046
+ qk_l2_norm ...................................... False
5047
+ qk_layernorm .................................... False
5048
+ qk_pos_emb_head_dim ............................. 64
5049
+ query_in_block_prob ............................. 0.1
5050
+ rampup_batch_size ............................... None
5051
+ rank ............................................ 0
5052
+ recompute_granularity ........................... None
5053
+ recompute_method ................................ None
5054
+ recompute_modules ............................... None
5055
+ recompute_num_layers ............................ None
5056
+ record_memory_history ........................... False
5057
+ relative_attention_max_distance ................. 128
5058
+ relative_attention_num_buckets .................. 32
5059
+ replication ..................................... False
5060
+ replication_factor .............................. 2
5061
+ replication_jump ................................ None
5062
+ rerun_mode ...................................... disabled
5063
+ reset_attention_mask ............................ False
5064
+ reset_position_ids .............................. False
5065
+ result_rejected_tracker_filename ................ None
5066
+ retriever_report_topk_accuracies ................ []
5067
+ retriever_score_scaling ......................... False
5068
+ retriever_seq_length ............................ 256
5069
+ retro_add_retriever ............................. False
5070
+ retro_attention_gate ............................ 1
5071
+ retro_cyclic_train_iters ........................ None
5072
+ retro_encoder_attention_dropout ................. 0.1
5073
+ retro_encoder_hidden_dropout .................... 0.1
5074
+ retro_encoder_layers ............................ 2
5075
+ retro_num_neighbors ............................. 2
5076
+ retro_num_retrieved_chunks ...................... 2
5077
+ retro_project_dir ............................... None
5078
+ retro_verify_neighbor_count ..................... True
5079
+ rope_scaling_factor ............................. 8.0
5080
+ rotary_base ..................................... 10000
5081
+ rotary_interleaved .............................. False
5082
+ rotary_percent .................................. 1.0
5083
+ rotary_scaling_factor ........................... 1.0
5084
+ rotary_seq_len_interpolation_factor ............. None
5085
+ run_workload_inspector_server ................... False
5086
+ sample_rate ..................................... 1.0
5087
+ save ............................................ gpt-checkpoint
5088
+ save_interval ................................... 16
5089
+ scatter_gather_tensors_in_pipeline .............. True
5090
+ seed ............................................ 1234
5091
+ seq_length ...................................... 12288
5092
+ sequence_parallel ............................... False
5093
+ sgd_momentum .................................... 0.9
5094
+ short_seq_prob .................................. 0.1
5095
+ skip_train ...................................... False
5096
+ skipped_train_samples ........................... 0
5097
+ spec ............................................ None
5098
+ split ........................................... None
5099
+ squared_relu .................................... False
5100
+ start_weight_decay .............................. 0.1
5101
+ straggler_ctrlr_port ............................ 65535
5102
+ straggler_minmax_count .......................... 1
5103
+ suggested_communication_unit_size ............... None
5104
+ swiglu .......................................... False
5105
+ swin_backbone_type .............................. tiny
5106
+ symmetric_ar_type ............................... None
5107
+ te_rng_tracker .................................. False
5108
+ tensor_model_parallel_size ...................... 2
5109
+ tensorboard_dir ................................. tensorboard-logs/
5110
+ tensorboard_log_interval ........................ 1
5111
+ tensorboard_queue_size .......................... 1000
5112
+ test_data_path .................................. None
5113
+ test_mode ....................................... False
5114
+ tiktoken_num_special_tokens ..................... 1000
5115
+ tiktoken_pattern ................................ None
5116
+ tiktoken_special_tokens ......................... None
5117
+ timing_log_level ................................ 0
5118
+ timing_log_option ............................... minmax
5119
+ titles_data_path ................................ None
5120
+ tokenizer_model ................................. None
5121
+ tokenizer_type .................................. GPT2BPETokenizer
5122
+ torch_fsdp2_reshard_after_forward ............... True
5123
+ tp_comm_bootstrap_backend ....................... nccl
5124
+ tp_comm_bulk_dgrad .............................. True
5125
+ tp_comm_bulk_wgrad .............................. True
5126
+ tp_comm_overlap ................................. False
5127
+ tp_comm_overlap_ag .............................. True
5128
+ tp_comm_overlap_cfg ............................. None
5129
+ tp_comm_overlap_rs .............................. True
5130
+ tp_comm_overlap_rs_dgrad ........................ False
5131
+ tp_comm_split_ag ................................ True
5132
+ tp_comm_split_rs ................................ True
5133
+ train_data_path ................................. None
5134
+ train_iters ..................................... 10
5135
+ train_samples ................................... None
5136
+ train_sync_interval ............................. None
5137
+ transformer_impl ................................ transformer_engine
5138
+ transformer_pipeline_model_parallel_size ........ 1
5139
+ untie_embeddings_and_output_weights ............. False
5140
+ use_checkpoint_args ............................. False
5141
+ use_checkpoint_opt_param_scheduler .............. False
5142
+ use_cpu_initialization .......................... None
5143
+ use_custom_fsdp ................................. False
5144
+ use_dist_ckpt ................................... True
5145
+ use_dist_ckpt_deprecated ........................ False
5146
+ use_distributed_optimizer ....................... False
5147
+ use_flash_attn .................................. False
5148
+ use_legacy_models ............................... False
5149
+ use_mp_args_from_checkpoint_args ................ False
5150
+ use_one_sent_docs ............................... False
5151
+ use_persistent_ckpt_worker ...................... False
5152
+ use_precision_aware_optimizer ................... False
5153
+ use_pytorch_profiler ............................ False
5154
+ use_ring_exchange_p2p ........................... False
5155
+ use_rope_scaling ................................ False
5156
+ use_rotary_position_embeddings .................. False
5157
+ use_sharp ....................................... False
5158
+ use_tokenizer_model_from_checkpoint_args ........ True
5159
+ use_torch_fsdp2 ................................. False
5160
+ use_torch_optimizer_for_cpu_offload ............. False
5161
+ use_tp_pp_dp_mapping ............................ False
5162
+ v_head_dim ...................................... 128
5163
+ valid_data_path ................................. None
5164
+ variable_seq_lengths ............................ False
5165
+ virtual_pipeline_model_parallel_size ............ None
5166
+ vision_backbone_type ............................ vit
5167
+ vision_pretraining .............................. False
5168
+ vision_pretraining_type ......................... classify
5169
+ vocab_extra_ids ................................. 0
5170
+ vocab_file ...................................... vocab.json
5171
+ vocab_size ...................................... None
5172
+ wandb_exp_name ..................................
5173
+ wandb_project ...................................
5174
+ wandb_save_dir ..................................
5175
+ weight_decay .................................... 0.1
5176
+ weight_decay_incr_style ......................... constant
5177
+ wgrad_deferral_limit ............................ 0
5178
+ world_size ...................................... 8
5179
+ yaml_cfg ........................................ None
5180
+ -------------------- end of arguments ---------------------
5181
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
5182
+ > building GPT2BPETokenizer tokenizer ...
5183
+ INFO:megatron.training.initialize:Setting logging level to 0
5184
+ INFO:megatron.training.initialize:Setting logging level to 0
5185
+ > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
5186
+ INFO:megatron.training.initialize:Setting logging level to 0
5187
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
5188
+ > initializing torch distributed ...
5189
+ INFO:megatron.training.initialize:Setting logging level to 0
5190
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
5191
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
5192
+ INFO:megatron.training.initialize:Setting logging level to 0
5193
+ INFO:megatron.training.initialize:Setting logging level to 0
5194
+ INFO:megatron.training.initialize:Setting logging level to 0
5195
+ > initialized tensor model parallel with size 2
5196
+ > initialized pipeline model parallel with size 1
5197
+ > setting random seeds to 1234 ...
5198
+ > compiling dataset index builder ...
5199
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
5200
+ make: Nothing to be done for 'default'.
5201
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
5202
+ >>> done with dataset index builder. Compilation time: 0.054 seconds
5203
+ > compiling and loading fused kernels ...
5204
+ >>> done with compiling and loading fused kernels. Compilation time: 2.330 seconds