Upload folder using huggingface_hub
Browse files

- attnserver.run_attnserver.slurm.sh.343188.out.log +630 -0
- attnserver.run_attnserver.slurm.sh.343195.out.log +290 -0
- attnserver.run_attnserver.slurm.sh.343196.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343200.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343200.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343202.err.log +307 -0
- attnserver.run_attnserver.slurm.sh.343202.out.log +19 -0
- attnserver.run_attnserver.slurm.sh.343203.err.log +101 -0
- attnserver.run_attnserver.slurm.sh.343203.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343204.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343204.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343205.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343206.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343206.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343207.err.log +141 -0
- attnserver.run_attnserver.slurm.sh.343207.out.log +10 -0
- attnserver.run_attnserver.slurm.sh.343208.err.log +141 -0
- attnserver.run_attnserver.slurm.sh.343208.out.log +10 -0
- attnserver.run_attnserver.slurm.sh.343209.err.log +149 -0
- attnserver.run_attnserver.slurm.sh.343209.out.log +537 -0
- attnserver.run_attnserver.slurm.sh.343210.err.log +149 -0
- attnserver.run_attnserver.slurm.sh.343210.out.log +536 -0
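The commit title names the huggingface_hub folder-upload API. For reference, a minimal sketch of how a batch of logs like this is typically pushed as one commit; the local path and repo_id below are placeholders, not taken from the logs:

    from huggingface_hub import HfApi

    api = HfApi()
    # Uploads every matching file under folder_path in a single commit.
    api.upload_folder(
        folder_path="./slurm-logs",           # placeholder local directory
        repo_id="your-user/attnserver-logs",  # placeholder repo id
        allow_patterns="*.log",               # only push the log files
        commit_message="Upload folder using huggingface_hub",
    )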
attnserver.run_attnserver.slurm.sh.343188.out.log CHANGED

@@ -124818,3 +124818,633 @@ batch tensor after cp: position_ids torch.Size([1, 16384])
124818   Start exporting trace 8
124819   Done exporting trace 8
124820   [2025-06-21 21:18:21] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 128291.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
124821 + batch tensor: tokens torch.Size([1, 131072])
124822 + batch tensor: labels torch.Size([1, 131072])
124823 + batch tensor: loss_mask torch.Size([1, 131072])
124824 + batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124825 + batch tensor: position_ids torch.Size([1, 131072])
124826 + batch tensor after cp: tokens torch.Size([1, 16384])
124827 + batch tensor after cp: labels torch.Size([1, 16384])
124828 + batch tensor after cp: loss_mask torch.Size([1, 16384])
124829 + batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124830 + batch tensor after cp: position_ids torch.Size([1, 16384])
[… the same ten added lines repeat verbatim through line 125450: 630 added lines in total, matching the +630 -0 count above]
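The repeated shape lines record context-parallel (cp) sharding: each rank keeps seq_len / cp_size query positions while the attention mask keeps the full key length, so [1, 131072] becomes [1, 16384] here (a factor of 8; the 343195 log below shows a factor of 4, i.e. [1, 32768]). A minimal sketch of how those shapes arise, assuming a plain contiguous per-rank split (Megatron-LM's actual cp split interleaves chunks for causal load balance, so this is illustrative only):

    import torch

    seq_len, cp_size = 131072, 8                # the 343195 job uses a factor of 4
    rank = 0                                    # hypothetical cp rank
    chunk = seq_len // cp_size                  # 16384

    tokens = torch.zeros(1, seq_len, dtype=torch.long)
    tokens_cp = tokens[:, rank * chunk:(rank + 1) * chunk]
    print(tokens_cp.shape)                      # torch.Size([1, 16384])

    # The mask shrinks only along the query axis; materializing the full
    # [1, 1, 131072, 131072] mask would need ~16 GiB, so its post-split
    # shape is shown arithmetically instead:
    print(torch.Size([1, 1, chunk, seq_len]))   # torch.Size([1, 1, 16384, 131072])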
attnserver.run_attnserver.slurm.sh.343195.out.log CHANGED

@@ -67763,3 +67763,293 @@ batch tensor after cp: position_ids torch.Size([1, 32768])
67763   Start exporting trace 6
67764   Done exporting trace 6
67765   [2025-06-21 21:18:27] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 152588.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
67766 + batch tensor: tokens torch.Size([1, 131072])
67767 + batch tensor: labels torch.Size([1, 131072])
67768 + batch tensor: loss_mask torch.Size([1, 131072])
67769 + batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67770 + batch tensor: position_ids torch.Size([1, 131072])
67771 + batch tensor after cp: tokens torch.Size([1, 32768])
67772 + batch tensor after cp: labels torch.Size([1, 32768])
67773 + batch tensor after cp: loss_mask torch.Size([1, 32768])
67774 + batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67775 + batch tensor after cp: position_ids torch.Size([1, 32768])
[… the same ten added lines repeat verbatim; the excerpt breaks off at line 67884 (+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072]))]
|
| 67885 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67886 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67887 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67888 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67889 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67890 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67891 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67892 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67893 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67894 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67895 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67896 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67897 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67898 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67899 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67900 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67901 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67902 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67903 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67904 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67905 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67906 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67907 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67908 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67909 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67910 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67911 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67912 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67913 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67914 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67915 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67916 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67917 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67918 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67919 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67920 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67921 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67922 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67923 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67924 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67925 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67926 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67927 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67928 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67929 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67930 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67931 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67932 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67933 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67934 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67935 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67936 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67937 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67938 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67939 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67940 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67941 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67942 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67943 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67944 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67945 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67946 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67947 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67948 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67949 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67950 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67951 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67952 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67953 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67954 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67955 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67956 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67957 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67958 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67959 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67960 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67961 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67962 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67963 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67964 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67965 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67966 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67967 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67968 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67969 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67970 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67971 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67972 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67973 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67974 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67975 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67976 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67977 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67978 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67979 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67980 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67981 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67982 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67983 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67984 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67985 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67986 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67987 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67988 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67989 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 67990 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 67991 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 67992 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 67993 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 67994 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 67995 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 67996 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 67997 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 67998 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 67999 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 68000 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 68001 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 68002 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 68003 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 68004 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 68005 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 68006 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 68007 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 68008 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 68009 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 68010 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 68011 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 68012 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 68013 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 68014 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 68015 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 68016 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 68017 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 68018 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 68019 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 68020 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 68021 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 68022 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 68023 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 68024 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 68025 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 68026 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 68027 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 68028 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 68029 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 68030 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 68031 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 68032 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 68033 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 68034 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 68035 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 68036 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 68037 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 68038 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 68039 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 68040 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 68041 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 68042 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 68043 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 68044 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 68045 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
| 68046 |
+
batch tensor: tokens torch.Size([1, 131072])
|
| 68047 |
+
batch tensor: labels torch.Size([1, 131072])
|
| 68048 |
+
batch tensor: loss_mask torch.Size([1, 131072])
|
| 68049 |
+
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
|
| 68050 |
+
batch tensor: position_ids torch.Size([1, 131072])
|
| 68051 |
+
batch tensor after cp: tokens torch.Size([1, 32768])
|
| 68052 |
+
batch tensor after cp: labels torch.Size([1, 32768])
|
| 68053 |
+
batch tensor after cp: loss_mask torch.Size([1, 32768])
|
| 68054 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
|
| 68055 |
+
batch tensor after cp: position_ids torch.Size([1, 32768])
|
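The shape pairs above record the context-parallel split for this run: each full 131072-token batch is sharded to 32768 tokens per rank along the sequence (query) dimension, i.e. a context-parallel group of size 4, while the attention mask keeps its full 131072-token key dimension. A minimal Python sketch of that slicing, assuming plain contiguous chunking (Megatron-LM's real CP sharding interleaves chunks for load balancing, so this is illustrative only):

    import torch

    def shard_for_cp(batch: dict, cp_rank: int, cp_size: int) -> dict:
        """Slice each batch tensor along the sequence (query) dim for one CP rank."""
        out = {}
        for key, t in batch.items():
            if key == "attention_mask":
                # The mask keeps the full key length: [1, 1, S, S] -> [1, 1, S/cp, S].
                chunk = t.size(2) // cp_size
                out[key] = t[:, :, cp_rank * chunk:(cp_rank + 1) * chunk, :]
            else:
                chunk = t.size(1) // cp_size  # 131072 // 4 = 32768
                out[key] = t[:, cp_rank * chunk:(cp_rank + 1) * chunk]
        return out

    batch = {"tokens": torch.zeros(1, 131072, dtype=torch.long),
             "position_ids": torch.arange(131072).unsqueeze(0)}
    print(shard_for_cp(batch, cp_rank=0, cp_size=4)["tokens"].shape)  # torch.Size([1, 32768])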
attnserver.run_attnserver.slurm.sh.343196.out.log CHANGED
The diff for this file is too large to render. See raw diff

attnserver.run_attnserver.slurm.sh.343200.err.log CHANGED
The diff for this file is too large to render. See raw diff

attnserver.run_attnserver.slurm.sh.343200.out.log CHANGED
The diff for this file is too large to render. See raw diff

attnserver.run_attnserver.slurm.sh.343202.err.log CHANGED
@@ -6782,3 +6782,310 @@ W0621 21:13:47.545000 3922086 site-packages/torch/distributed/run.py:766] ******
[rank0]:     return io.open(self, mode, buffering, encoding, errors, newline)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/.metadata.tmp'
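The FileNotFoundError above comes from the checkpoint save path: the distributed checkpoint writer stages global metadata in a temporary ".metadata.tmp" file inside the iteration directory and renames it into place once all ranks have written, so a missing iter_0000010 directory (or a shared-filesystem hiccup) makes the open fail. A minimal sketch of that write-then-rename pattern, inferred from the error message rather than from torch.distributed.checkpoint internals:

    import os
    import pickle

    def write_checkpoint_metadata(ckpt_dir: str, metadata: object) -> None:
        # The directory must exist before io.open(), or we get exactly the
        # FileNotFoundError logged above.
        os.makedirs(ckpt_dir, exist_ok=True)
        tmp_path = os.path.join(ckpt_dir, ".metadata.tmp")
        with open(tmp_path, "wb") as f:
            pickle.dump(metadata, f)   # stage the metadata in a temp file
            f.flush()
            os.fsync(f.fileno())       # make it durable before publishing
        # Atomic rename publishes ".metadata" only after a complete write.
        os.rename(tmp_path, os.path.join(ckpt_dir, ".metadata"))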
[rank0]:[W621 21:19:04.979946813 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
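The warning is advisory: after the save failure the trainer exits without tearing down NCCL. The cleanup the message asks for looks roughly like this (a minimal sketch; the hypothetical main() stands in for ./pretrain_gpt_profile.py's entry point):

    import torch.distributed as dist

    def main() -> None:
        dist.init_process_group(backend="nccl")
        try:
            ...  # training loop
        finally:
            # Tear down the default process group even on failure, avoiding
            # the resource-leak warning logged above.
            if dist.is_initialized():
                dist.destroy_process_group()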
W0621 21:19:13.223000 3922086 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3922159 closing signal SIGTERM
W0621 21:19:13.227000 3922086 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3922160 closing signal SIGTERM
W0621 21:19:13.230000 3922086 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3922161 closing signal SIGTERM
W0621 21:19:13.231000 3922086 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3922162 closing signal SIGTERM
W0621 21:19:13.236000 3922086 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3922163 closing signal SIGTERM
W0621 21:19:13.240000 3922086 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3922164 closing signal SIGTERM
W0621 21:19:13.260000 3922086 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3922165 closing signal SIGTERM
E0621 21:19:15.487000 3922086 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 3922158) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
    main()
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
    return arg(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
    launch(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
    run(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./pretrain_gpt_profile.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2025-06-21_21:19:13
  host      : fs-mbz-gpu-728
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 3922158)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
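"error_file: <N/A>" means the failing worker never wrote a structured error report, which is why the summary carries no traceback. Per the linked docs, wrapping the entry point with the elastic record decorator captures it on the next failure; a minimal sketch (main() is again a stand-in for the script's own entry point):

    from torch.distributed.elastic.multiprocessing.errors import record

    @record  # writes the worker's traceback to the error file torchrun summarizes
    def main() -> None:
        ...  # pretraining entry point

    if __name__ == "__main__":
        main()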
+ set +x
[rank14]:[W621 21:19:16.597385554 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-865]:57156, remote=[fs-mbz-gpu-728]:37481): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x15088b9785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x15087485aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x15087485ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x15087485d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1508748571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x150831a509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x15088b4f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x15088ca36ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x15088cac8850 in /lib/x86_64-linux-gnu/libc.so.6)

[rank15]:[W621 21:19:16.597385455 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-865]:57224, remote=[fs-mbz-gpu-728]:37481): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x145f943785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x145f7d25aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x145f7d25ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x145f7d25d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x145f7d2571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x145f3a4509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x145f93ef1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x145f953b8ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x145f9544a850 in /lib/x86_64-linux-gnu/libc.so.6)

[rank10]:[W621 21:19:16.597522826 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-865]:57206, remote=[fs-mbz-gpu-728]:37481): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14d75bd785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14d744c5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x14d744c5ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x14d744c5d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x14d744c571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14d701e509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x14d75b8f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x14d75cdc5ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x14d75ce57850 in /lib/x86_64-linux-gnu/libc.so.6)

[rank14]:[W621 21:19:16.601790151 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 14] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank15]:[W621 21:19:16.601801332 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 15] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank10]:[W621 21:19:16.601904060 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 10] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank8]:[W621 21:19:16.629327826 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=75, addr=[fs-mbz-gpu-865]:57148, remote=[fs-mbz-gpu-728]:37481): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1465d09785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1465b985aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x1465b985ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x1465b985d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1465b98571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x146576a509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x1465d04f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x1465d1a14ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x1465d1aa6850 in /lib/x86_64-linux-gnu/libc.so.6)

[rank8]:[W621 21:19:16.633647516 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 8] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank11]:[W621 21:19:16.688718409 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-865]:57220, remote=[fs-mbz-gpu-728]:37481): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x146437b785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x146420e5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x146420e5ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x146420e5d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x146420e571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x1463de0509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x1463ce019b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x146438f6dac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x146438fff850 in /lib/x86_64-linux-gnu/libc.so.6)

[rank11]:[W621 21:19:16.692730257 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 11] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank12]:[W621 21:19:16.688662135 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-865]:57166, remote=[fs-mbz-gpu-728]:37481): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14776f3785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14775865aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x14775865ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x14775865d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1477586571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x1477158509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x147705819b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x147770710ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x1477707a2850 in /lib/x86_64-linux-gnu/libc.so.6)

[rank9]:[W621 21:19:16.688664327 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-865]:57180, remote=[fs-mbz-gpu-728]:37481): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14d335b785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14d31ea5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x14d31ea5ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x14d31ea5d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x14d31ea571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14d2dbc509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x14d3356f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x14d336be8ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x14d336c7a850 in /lib/x86_64-linux-gnu/libc.so.6)

[rank13]:[W621 21:19:16.688672768 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-865]:57196, remote=[fs-mbz-gpu-728]:37481): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1512e83785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1512d165aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x1512d165ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x1512d165d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1512d16571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x15128e8509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x15127e819b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x1512e96b9ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x1512e974b850 in /lib/x86_64-linux-gnu/libc.so.6)

[rank12]:[W621 21:19:16.692870336 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 12] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank9]:[W621 21:19:16.692884070 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 9] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank13]:[W621 21:19:16.692933680 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 13] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
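These stacks are the NCCL heartbeat monitor on the second node (fs-mbz-gpu-865) polling the c10d TCPStore hosted by rank 0's launcher on fs-mbz-gpu-728; once that process exits, every check() reads zero bytes. A minimal sketch of the server/client roles, with localhost standing in for fs-mbz-gpu-728:37481:

    from datetime import timedelta
    import torch.distributed as dist

    # One process hosts the store; all others connect as clients. If the host
    # dies first, client reads fail like the "failed to recv, got 0 bytes"
    # lines above.
    server = dist.TCPStore("localhost", 29501, is_master=True,
                           timeout=timedelta(seconds=30))
    client = dist.TCPStore("localhost", 29501, is_master=False,
                           timeout=timedelta(seconds=30))
    client.set("heartbeat", "ok")
    print(server.get("heartbeat"))  # b'ok'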
W0621 21:19:16.473000 2522629 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2522699 closing signal SIGTERM
W0621 21:19:16.478000 2522629 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2522700 closing signal SIGTERM
W0621 21:19:16.480000 2522629 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2522701 closing signal SIGTERM
W0621 21:19:16.485000 2522629 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2522702 closing signal SIGTERM
W0621 21:19:16.487000 2522629 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2522703 closing signal SIGTERM
W0621 21:19:16.489000 2522629 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2522704 closing signal SIGTERM
W0621 21:19:16.533000 2522629 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2522705 closing signal SIGTERM
W0621 21:19:16.548000 2522629 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2522707 closing signal SIGTERM
[W621 21:19:18.748504106 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-865]:42814, remote=[fs-mbz-gpu-728]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1474ef7785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1474d865aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x1474d865c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x1474d865db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1474d8657ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1474d8657ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1474d8658f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x1474e798b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x1474e70fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #17: <unknown function> + 0x94ac3 (0x1474f07f3ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #18: <unknown function> + 0x126850 (0x1474f0885850 in /lib/x86_64-linux-gnu/libc.so.6)

W0621 21:19:18.428000 2522629 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-865_2522629_0' has failed to send a keep-alive heartbeat to the rendezvous '343202' due to an error of type RendezvousConnectionError.
[W621 21:19:23.757992050 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-865]:42814, remote=[fs-mbz-gpu-728]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1474ef7785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1474d865aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x1474d865c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x1474d865db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1474d8657ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1474d8657ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1474d8658f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0xc0f526 (0x1474e798b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x1474e70fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #17: <unknown function> + 0x94ac3 (0x1474f07f3ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #18: <unknown function> + 0x126850 (0x1474f0885850 in /lib/x86_64-linux-gnu/libc.so.6)

W0621 21:19:23.435000 2522629 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-865_2522629_0' has failed to send a keep-alive heartbeat to the rendezvous '343202' due to an error of type RendezvousConnectionError.
[W621 21:19:23.821313515 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-865]:42814, remote=[fs-mbz-gpu-728]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1474ef7785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1474d865aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x1474d865c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x1474d865db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1474d8657ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1474d8657ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1474d8658f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x1474e798b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x1474e70fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #26: <unknown function> + 0x29d90 (0x1474f0788d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x80 (0x1474f0788e40 in /lib/x86_64-linux-gnu/libc.so.6)

W0621 21:19:23.505000 2522629 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-865_2522629_0' has failed to shutdown the rendezvous '343202' due to an error of type RendezvousConnectionError.
[W621 21:19:23.835472402 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-865]:42814, remote=[fs-mbz-gpu-728]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1474ef7785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1474d865aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x1474d865c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x1474d865db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1474d8657ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1474d8657ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1474d8658f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x1474e798b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x1474e70fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #26: <unknown function> + 0x29d90 (0x1474f0788d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x80 (0x1474f0788e40 in /lib/x86_64-linux-gnu/libc.so.6)

W0621 21:19:23.516000 2522629 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-865_2522629_0' has failed to shutdown the rendezvous '343202' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.distributed.DistNetworkError: failed to recv, got 0 bytes

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
    main()
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
    return arg(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
    launch(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
    run(args)
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
    result = agent.run()
             ^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
    result = f(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
    result = self._invoke_run(role)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
    self._state_holder.sync()
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 437, in sync
    get_response = self._backend.get_state()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 75, in get_state
    base64_state: bytes = self._call_store("get", self._key)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
| 7054 |
+
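Every agent call into the C10d store above ends in DistNetworkError ("failed to recv, got 0 bytes"): the TCPStore server hosted by the rank-0 agent has gone away, so each subsequent rendezvous sync raises RendezvousConnectionError. A minimal probe for whether a rendezvous endpoint is still reachable, using the fs-mbz-gpu-728:29500 endpoint from the relaunch below; this is a diagnostic sketch, not part of the original job:

    # Probe the c10d rendezvous TCPStore from any node in the job.
    from datetime import timedelta
    from torch.distributed import TCPStore

    try:
        # Connect as a client (is_master=False) to the agent-hosted store.
        store = TCPStore("fs-mbz-gpu-728", 29500, is_master=False,
                         timeout=timedelta(seconds=10))
        store.set("probe_key", "alive")
        print("store reachable:", store.get("probe_key"))
    except Exception as exc:  # a dead server surfaces as DistNetworkError
        print("store unreachable:", exc)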
+ set +x
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=81920
+ PROF_CTX_LENGTH=81920
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L81920*tp8.cp2.bs2.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L81920*tp8.cp2.bs2.json' ']'
+ echo 'Running ctx_length=81920, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=2'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343202 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-728:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 81920 --max-position-embeddings 81920 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343202 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-728:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 81920 --max-position-embeddings 81920 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
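One detail worth flagging in the trace above: the skip test '[' -f '.../mytrace.L81920*tp8.cp2.bs2.json' ']' quotes the pattern, so test -f receives a literal '*' rather than an expanded filename, and the "trace already exists" check can only succeed for a file literally named with an asterisk. A glob-aware version of the check, sketched in Python with the path taken from the trace (per-rank trace naming is an assumption):

    # Skip-if-trace-exists check that actually expands the wildcard.
    import glob

    pattern = ("/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/"
               "mytrace.L81920*tp8.cp2.bs2.json")
    if glob.glob(pattern):
        print("trace already recorded; skipping ctx_length=81920")
    else:
        print("no trace yet; running ctx_length=81920")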
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
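The FutureWarning above also spells out the migration path: under torchrun the launcher stops injecting --local-rank, and the script reads the rank from the environment instead. A minimal sketch of that pattern (pinning the GPU from the rank is the usual follow-up, not something this log shows):

    # torchrun-style local rank handling inside the training script.
    import os
    import torch

    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)  # bind this process to its GPU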
W0621 21:19:26.938000 2525849 site-packages/torch/distributed/run.py:766]
W0621 21:19:26.938000 2525849 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:19:26.938000 2525849 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:19:26.938000 2525849 site-packages/torch/distributed/run.py:766] *****************************************
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
W0621 21:19:27.143000 3925433 site-packages/torch/distributed/run.py:766]
W0621 21:19:27.143000 3925433 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:19:27.143000 3925433 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:19:27.143000 3925433 site-packages/torch/distributed/run.py:766] *****************************************
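The repeated run.py warning means each worker starts with OMP_NUM_THREADS=1 unless told otherwise. If CPU-side work needs more threads, the variable can be pinned before torch is imported; a sketch, with 4 as an arbitrary example value:

    # Set the per-worker thread count explicitly so the launcher default
    # (and its warning) no longer applies; do this before importing torch.
    import os
    os.environ.setdefault("OMP_NUM_THREADS", "4")  # arbitrary example

    import torch
    print("intra-op threads:", torch.get_num_threads())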
attnserver.run_attnserver.slurm.sh.343202.out.log
CHANGED
@@ -29921,3 +29921,22 @@ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.95
DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9507585, 15, gather: 1.8799049854278564
DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9546258, 0, gather: 0.007202863693237305
DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0099s
Running ctx_length=81920, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=2
Cleaning up checkpoint directory: gpt-checkpoint
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 81920
TP_SIZE: 8
CP_SIZE: 2
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
--------------------------------
CTX_LENGTH: 81920
TP_SIZE: 8
CP_SIZE: 2
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
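The configuration echoed above pairs with the two-node launch earlier in this log: tensor parallelism spans the 8 GPUs of a node and context parallelism spans the 2 nodes, so the world size is TP x CP = 16 and each CP rank holds half of the 81920-token context. A quick arithmetic check (the even split is the usual context-parallel assumption, not something the log prints):

    # Sanity-check the parallel layout implied by the echoed config.
    TP_SIZE, CP_SIZE, SEQ_LEN = 8, 2, 81920

    world_size = TP_SIZE * CP_SIZE           # 16 ranks = 2 nodes x 8 GPUs
    tokens_per_cp_rank = SEQ_LEN // CP_SIZE  # 40960 tokens per CP rank
    print(world_size, tokens_per_cp_rank)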
attnserver.run_attnserver.slurm.sh.343203.err.log
CHANGED
@@ -695,3 +695,104 @@ W0621 21:18:19.257000 758676 site-packages/torch/distributed/run.py:766]
W0621 21:18:19.257000 758676 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:18:19.257000 758676 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:18:19.257000 758676 site-packages/torch/distributed/run.py:766] *****************************************
[rank5]:[W621 21:18:42.008152355 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank13]:[W621 21:18:42.669951592 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank1]:[W621 21:18:42.009315982 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank9]:[W621 21:18:42.671191991 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank15]:[W621 21:18:42.671262833 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank12]:[W621 21:18:42.671775299 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank7]:[W621 21:18:42.012603791 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank4]:[W621 21:18:42.014142683 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank8]:[W621 21:18:43.829344186 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank0]:[W621 21:18:44.272155162 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank14]:[W621 21:18:44.950915815 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank11]:[W621 21:18:44.951789519 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank10]:[W621 21:18:44.952640951 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank3]:[W621 21:18:44.294979689 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank2]:[W621 21:18:44.295832026 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank6]:[W621 21:18:44.297861170 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
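All 16 ranks emit the same ProcessGroupNCCL warning: at init time the process group does not know which GPU each rank owns, and the log itself suggests passing device_id to init_process_group(). A hedged sketch of that fix (the device_id argument exists in recent PyTorch releases; the LOCAL_RANK naming follows the launcher):

    # Tell the process group which GPU this rank owns at init time,
    # which silences the "device ... currently unknown" warning.
    import os
    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl",
                            device_id=torch.device("cuda", local_rank))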
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
  warnings.warn(
[rank0]: Traceback (most recent call last):
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank0]:     pretrain(
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
[rank0]:     save_checkpoint(
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
[rank0]:     async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
[rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
[rank0]:     common_strategy.save_common(state_dict, checkpoint_dir)
[rank0]:   File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
[rank0]:     torch.save(common_state_dict, path)
[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
[rank0]:     with _open_zipfile_writer(f) as opened_zipfile:
[rank0]:          ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
[rank0]:     return container(name_or_buffer)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
[rank0]:     torch._C.PyTorchFileWriter(
[rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
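The failure itself is mundane: torch.save's PyTorchFileWriter does not create missing parent directories, and gpt-checkpoint/iter_0000010 does not exist when rank 0 writes the common state dict (several concurrent jobs in this batch share and "clean up" the same gpt-checkpoint directory, which makes such a race plausible). A guard sketch; the path and state dict here are illustrative, not the Megatron call sites:

    # torch.save does not create parent directories; make sure the
    # iteration directory exists before writing into it.
    import os
    import torch

    checkpoint_path = "gpt-checkpoint/iter_0000010/common.pt"  # illustrative
    os.makedirs(os.path.dirname(checkpoint_path), exist_ok=True)
    torch.save({"iteration": 10}, checkpoint_path)  # placeholder state dict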
attnserver.run_attnserver.slurm.sh.343203.out.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343204.err.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343204.out.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343205.err.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343206.err.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343206.out.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343207.err.log
ADDED
@@ -0,0 +1,141 @@
+ source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ '[' -z x ']'
++ conda activate
++ local cmd=activate
++ case "$cmd" in
++ __conda_activate activate
++ '[' -n '' ']'
++ local ask_conda
+++ PS1=
+++ __conda_exe shell.posix activate
+++ '[' -n '' ']'
+++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
++ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+++ unset _CE_M
+++ unset _CE_CONDA
+++ PS1='(base) '
+++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ export CONDA_SHLVL=1
+++ CONDA_SHLVL=1
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
+++ CONDA_PROMPT_MODIFIER='(base) '
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ __conda_hashr
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
+ conda activate junda-attnserver
+ local cmd=activate
+ case "$cmd" in
+ __conda_activate activate junda-attnserver
+ '[' -n '' ']'
+ local ask_conda
++ PS1='(base) '
++ __conda_exe shell.posix activate junda-attnserver
++ '[' -n '' ']'
++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ unset _CE_M
++ unset _CE_CONDA
++ PS1='(junda-attnserver) '
++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ export CONDA_SHLVL=2
++ CONDA_SHLVL=2
++ export CONDA_DEFAULT_ENV=junda-attnserver
++ CONDA_DEFAULT_ENV=junda-attnserver
++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ __conda_hashr
+ '[' -n '' ']'
+ '[' -n '' ']'
+ hash -r
+ export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ export PROF_TP_SIZE=8
+ PROF_TP_SIZE=8
+ export PROF_CP_SIZE=1
+ PROF_CP_SIZE=1
+ export PROF_BS=1
+ PROF_BS=1
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=1024
+ PROF_CTX_LENGTH=1024
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp1.bs1.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp1.bs1.json' ']'
+ echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=1'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343207 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-661:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 1 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
W0621 21:19:26.076000 1511074 site-packages/torch/distributed/run.py:766]
W0621 21:19:26.076000 1511074 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:19:26.076000 1511074 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:19:26.076000 1511074 site-packages/torch/distributed/run.py:766] *****************************************
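The trace above exports PROF_CTX_LENGTH, PROF_TP_SIZE, PROF_CP_SIZE, PROF_BS, and CHROME_TRACE_PREFIX before each launch; how pretrain_gpt_profile.py consumes them is not visible in the log. One plausible reading, sketched under that assumption (only the variable names come from the trace):

    # Plausible consumption of the PROF_* knobs exported by the wrapper.
    import os

    ctx_length = int(os.environ.get("PROF_CTX_LENGTH", "1024"))
    tp_size = int(os.environ.get("PROF_TP_SIZE", "1"))
    cp_size = int(os.environ.get("PROF_CP_SIZE", "1"))
    batch_size = int(os.environ.get("PROF_BS", "1"))
    trace_prefix = os.environ.get("CHROME_TRACE_PREFIX", ".")

    print(f"profiling L{ctx_length} tp{tp_size}.cp{cp_size}.bs{batch_size}, "
          f"traces under {trace_prefix}")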
attnserver.run_attnserver.slurm.sh.343207.out.log
ADDED
@@ -0,0 +1,10 @@
Running ctx_length=1024, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=1
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 1024
TP_SIZE: 8
CP_SIZE: 1
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
attnserver.run_attnserver.slurm.sh.343208.err.log
ADDED
@@ -0,0 +1,141 @@
+ source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ '[' -z x ']'
++ conda activate
++ local cmd=activate
++ case "$cmd" in
++ __conda_activate activate
++ '[' -n '' ']'
++ local ask_conda
+++ PS1=
+++ __conda_exe shell.posix activate
+++ '[' -n '' ']'
+++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
++ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+++ unset _CE_M
+++ unset _CE_CONDA
+++ PS1='(base) '
+++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ export CONDA_SHLVL=1
+++ CONDA_SHLVL=1
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
+++ CONDA_PROMPT_MODIFIER='(base) '
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ __conda_hashr
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
+ conda activate junda-attnserver
+ local cmd=activate
+ case "$cmd" in
+ __conda_activate activate junda-attnserver
+ '[' -n '' ']'
+ local ask_conda
++ PS1='(base) '
++ __conda_exe shell.posix activate junda-attnserver
++ '[' -n '' ']'
++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ unset _CE_M
++ unset _CE_CONDA
++ PS1='(junda-attnserver) '
++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ export CONDA_SHLVL=2
++ CONDA_SHLVL=2
++ export CONDA_DEFAULT_ENV=junda-attnserver
++ CONDA_DEFAULT_ENV=junda-attnserver
++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ __conda_hashr
+ '[' -n '' ']'
+ '[' -n '' ']'
+ hash -r
+ export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ export PROF_TP_SIZE=8
+ PROF_TP_SIZE=8
+ export PROF_CP_SIZE=1
+ PROF_CP_SIZE=1
+ export PROF_BS=2
+ PROF_BS=2
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=1024
+ PROF_CTX_LENGTH=1024
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp1.bs2.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp1.bs2.json' ']'
+ echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=2'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343208 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-886:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 1 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
W0621 21:19:26.532000 1692678 site-packages/torch/distributed/run.py:766]
W0621 21:19:26.532000 1692678 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:19:26.532000 1692678 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:19:26.532000 1692678 site-packages/torch/distributed/run.py:766] *****************************************
attnserver.run_attnserver.slurm.sh.343208.out.log
ADDED
@@ -0,0 +1,10 @@
Running ctx_length=1024, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=2
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 1024
TP_SIZE: 8
CP_SIZE: 1
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
attnserver.run_attnserver.slurm.sh.343209.err.log
ADDED
@@ -0,0 +1,149 @@
+ source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ '[' -z x ']'
++ conda activate
++ local cmd=activate
++ case "$cmd" in
++ __conda_activate activate
++ '[' -n '' ']'
++ local ask_conda
+++ PS1=
+++ __conda_exe shell.posix activate
+++ '[' -n '' ']'
+++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
++ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+++ unset _CE_M
+++ unset _CE_CONDA
+++ PS1='(base) '
+++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ export CONDA_SHLVL=1
+++ CONDA_SHLVL=1
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
+++ CONDA_PROMPT_MODIFIER='(base) '
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ __conda_hashr
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
+ conda activate junda-attnserver
+ local cmd=activate
+ case "$cmd" in
+ __conda_activate activate junda-attnserver
+ '[' -n '' ']'
+ local ask_conda
++ PS1='(base) '
++ __conda_exe shell.posix activate junda-attnserver
++ '[' -n '' ']'
++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ unset _CE_M
++ unset _CE_CONDA
++ PS1='(junda-attnserver) '
++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ export CONDA_SHLVL=2
++ CONDA_SHLVL=2
++ export CONDA_DEFAULT_ENV=junda-attnserver
++ CONDA_DEFAULT_ENV=junda-attnserver
++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ __conda_hashr
+ '[' -n '' ']'
+ '[' -n '' ']'
+ hash -r
+ export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ export PROF_TP_SIZE=8
+ PROF_TP_SIZE=8
+ export PROF_CP_SIZE=1
+ PROF_CP_SIZE=1
+ export PROF_BS=4
+ PROF_BS=4
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=1024
+ PROF_CTX_LENGTH=1024
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp1.bs4.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp1.bs4.json' ']'
+ echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=4'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343209 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-702:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 1 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  main()
W0621 21:19:25.037000 1978718 site-packages/torch/distributed/run.py:766]
W0621 21:19:25.037000 1978718 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:19:25.037000 1978718 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:19:25.037000 1978718 site-packages/torch/distributed/run.py:766] *****************************************
[rank1]:[W621 21:19:45.103630717 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank3]:[W621 21:19:46.254546836 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank0]:[W621 21:19:46.278781300 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank4]:[W621 21:19:46.282298864 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank7]:[W621 21:19:46.283689693 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank5]:[W621 21:19:46.293795584 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank6]:[W621 21:19:46.294037250 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank2]:[W621 21:19:46.295639773 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
attnserver.run_attnserver.slurm.sh.343209.out.log
ADDED
@@ -0,0 +1,537 @@
Running ctx_length=1024, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=4
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 1024
TP_SIZE: 8
CP_SIZE: 1
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
INFO:megatron.training.initialize:Setting logging level to 0
using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
Number of virtual stages per pipeline stage: None
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
using torch.float16 for parameters ...
------------------------ arguments ------------------------
  account_for_embedding_in_pipeline_split ......... False
  account_for_loss_in_pipeline_split .............. False
  accumulate_allreduce_grads_in_fp32 .............. False
  adam_beta1 ...................................... 0.9
  adam_beta2 ...................................... 0.999
  adam_eps ........................................ 1e-08
  add_bias_linear ................................. True
  add_position_embedding .......................... True
  add_qkv_bias .................................... True
  adlr_autoresume ................................. False
  adlr_autoresume_interval ........................ 1000
  align_grad_reduce ............................... True
  align_param_gather .............................. False
  app_tag_run_name ................................ None
  app_tag_run_version ............................. 0.0.0
  apply_layernorm_1p .............................. False
  apply_query_key_layer_scaling ................... False
  apply_residual_connection_post_layernorm ........ False
  apply_rope_fusion ............................... False
  async_save ...................................... None
  async_tensor_model_parallel_allreduce ........... True
  attention_backend ............................... AttnBackend.auto
  attention_dropout ............................... 0.1
  attention_softmax_in_fp32 ....................... False
  auto_detect_ckpt_format ......................... False
  barrier_with_L1_time ............................ True
  bert_binary_head ................................ True
  bert_embedder_type .............................. megatron
  bert_load ....................................... None
  bf16 ............................................ False
  bias_dropout_fusion ............................. True
  bias_gelu_fusion ................................ True
  bias_swiglu_fusion .............................. True
  biencoder_projection_dim ........................ 0
  biencoder_shared_query_context_model ............ False
  block_data_path ................................. None
  calc_ft_timeouts ................................ False
  calculate_per_token_loss ........................ False
  check_for_large_grads ........................... False
  check_for_nan_in_loss_and_grad .................. False
  check_for_spiky_loss ............................ False
  check_weight_hash_across_dp_replicas_interval ... None
  ckpt_assume_constant_structure .................. False
  ckpt_convert_format ............................. None
  ckpt_convert_save ............................... None
  ckpt_convert_update_legacy_dist_opt_format ...... False
  ckpt_format ..................................... torch_dist
  ckpt_fully_parallel_load ........................ False
  ckpt_fully_parallel_save ........................ True
  ckpt_fully_parallel_save_deprecated ............. False
  ckpt_step ....................................... None
  classes_fraction ................................ 1.0
  clip_grad ....................................... 1.0
  clone_scatter_output_in_embedding ............... True
  config_logger_dir ...............................
  consumed_train_samples .......................... 0
  consumed_valid_samples .......................... 0
  context_parallel_size ........................... 1
  cp_comm_type .................................... ['p2p']
  create_attention_mask_in_dataloader ............. True
  cross_entropy_fusion_impl ....................... native
  cross_entropy_loss_fusion ....................... False
  cuda_graph_scope ................................ full
  cuda_graph_warmup_steps ......................... 3
  data_args_path .................................. None
  data_cache_path ................................. None
  data_parallel_random_init ....................... False
  data_parallel_sharding_strategy ................. no_shard
  data_parallel_size .............................. 1
  data_path ....................................... None
  data_per_class_fraction ......................... 1.0
  data_sharding ................................... True
  dataloader_type ................................. single
  ddp_average_in_collective ....................... False
  ddp_bucket_size ................................. None
  ddp_num_buckets ................................. None
  ddp_pad_buckets_for_high_nccl_busbw ............. False
  decoder_first_pipeline_num_layers ............... None
  decoder_last_pipeline_num_layers ................ None
  decoder_num_layers .............................. None
  decoder_seq_length .............................. None
  decoupled_lr .................................... None
  decoupled_min_lr ................................ None
  decrease_batch_size_if_needed ................... False
  defer_embedding_wgrad_compute ................... False
  deprecated_use_mcore_models ..................... False
  deterministic_mode .............................. False
  dino_bottleneck_size ............................ 256
  dino_freeze_last_layer .......................... 1
  dino_head_hidden_size ........................... 2048
  dino_local_crops_number ......................... 10
  dino_local_img_size ............................. 96
  dino_norm_last_layer ............................ False
  dino_teacher_temp ............................... 0.07
  dino_warmup_teacher_temp ........................ 0.04
  dino_warmup_teacher_temp_epochs ................. 30
  disable_bf16_reduced_precision_matmul ........... False
  disable_mamba_mem_eff_path ...................... False
  disable_straggler_on_startup .................... False
  dist_ckpt_format_deprecated ..................... None
  dist_ckpt_strictness ............................ assume_ok_unexpected
  distribute_saved_activations .................... False
  distributed_backend ............................. nccl
  distributed_timeout_minutes ..................... 10
  embedding_path .................................. None
  empty_unused_memory_level ....................... 0
  enable_cuda_graph ............................... False
  enable_ft_package ............................... False
  enable_gloo_process_groups ...................... True
  enable_msc ...................................... True
  enable_one_logger ............................... True
  encoder_num_layers .............................. 2
  encoder_pipeline_model_parallel_size ............ 0
  encoder_seq_length .............................. 1024
  encoder_tensor_model_parallel_size .............. 0
  end_weight_decay ................................ 0.1
  eod_mask_loss ................................... False
  error_injection_rate ............................ 0
  error_injection_type ............................ transient_error
  eval_interval ................................... 16
  eval_iters ...................................... 1
  evidence_data_path .............................. None
  exit_duration_in_mins ........................... None
  exit_interval ................................... None
  exit_on_missing_checkpoint ...................... False
  exit_signal_handler ............................. False
  exp_avg_dtype ................................... torch.float32
  exp_avg_sq_dtype ................................ torch.float32
  expert_model_parallel_size ...................... 1
  expert_tensor_parallel_size ..................... 8
  external_cuda_graph ............................. False
  ffn_hidden_size ................................. 16384
  finetune ........................................ False
  first_last_layers_bf16 .......................... False
  flash_decode .................................... False
  fp16 ............................................ True
  fp16_lm_cross_entropy ........................... False
  fp32_residual_connection ........................ False
  fp8 ............................................. None
  fp8_amax_compute_algo ........................... most_recent
  fp8_amax_history_len ............................ 1
  fp8_interval .................................... 1
  fp8_margin ...................................... 0
  fp8_param_gather ................................ False
  fp8_recipe ...................................... delayed
  fp8_wgrad ....................................... True
  fsdp_double_buffer .............................. False
  global_batch_size ............................... 1
  grad_reduce_in_bf16 ............................. False
  gradient_accumulation_fusion .................... True
  gradient_reduce_div_fusion ...................... True
  group_query_attention ........................... True
  head_lr_mult .................................... 1.0
  heterogeneous_layers_config_encoded_json ........ None
  heterogeneous_layers_config_path ................ None
  hidden_dropout .................................. 0.1
  hidden_size ..................................... 4096
  hierarchical_context_parallel_sizes ............. None
  high_priority_stream_groups ..................... []
  hybrid_attention_ratio .......................... 0.0
  hybrid_mlp_ratio ................................ 0.0
  hybrid_override_pattern ......................... None
  hysteresis ...................................... 2
  ict_head_size ................................... None
  ict_load ........................................ None
  img_h ........................................... 224
  img_w ........................................... 224
  indexer_batch_size .............................. 128
  indexer_log_interval ............................ 1000
  inference_batch_times_seqlen_threshold .......... -1
  inference_dynamic_batching ...................... False
  inference_dynamic_batching_buffer_guaranteed_fraction 0.2
  inference_dynamic_batching_buffer_overflow_factor None
  inference_dynamic_batching_buffer_size_gb ....... 40.0
  inference_dynamic_batching_chunk_size ........... 256
  inference_dynamic_batching_max_requests_override None
  inference_dynamic_batching_max_tokens_override .. None
  inference_max_batch_size ........................ 8
  inference_max_seq_length ........................ 2560
  inference_rng_tracker ........................... False
  init_method_std ................................. 0.02
  init_method_xavier_uniform ...................... False
  init_model_with_meta_device ..................... False
  initial_loss_scale .............................. 4294967296
  inprocess_active_world_size ..................... 8
  inprocess_barrier_timeout ....................... 120
  inprocess_completion_timeout .................... 120
  inprocess_empty_cuda_cache ...................... False
  inprocess_granularity ........................... node
  inprocess_hard_timeout .......................... 90
  inprocess_heartbeat_interval .................... 30
  inprocess_heartbeat_timeout ..................... 60
  inprocess_last_call_wait ........................ 1
  inprocess_max_iterations ........................ None
  inprocess_monitor_process_interval .............. 1.0
  inprocess_monitor_thread_interval ............... 1.0
  inprocess_progress_watchdog_interval ............ 1.0
  inprocess_restart ............................... False
  inprocess_soft_timeout .......................... 60
  inprocess_termination_grace_time ................ 1
  is_hybrid_model ................................. False
  iter_per_epoch .................................. 1250
  iterations_to_skip .............................. []
  keep_fp8_transpose_cache_when_using_custom_fsdp . False
  kv_channels ..................................... 64
  kv_lora_rank .................................... 32
  lazy_mpu_init ................................... None
  load ............................................ gpt-checkpoint
  load_model_opt_format ........................... False
  local_rank ...................................... 0
  log_interval .................................... 1
  log_loss_scale_to_tensorboard ................... True
  log_memory_to_tensorboard ....................... False
  log_num_zeros_in_grad ........................... False
  log_params_norm ................................. False
  log_progress .................................... False
  log_straggler ................................... False
  log_throughput .................................. False
  log_timers_to_tensorboard ....................... False
  log_validation_ppl_to_tensorboard ............... False
  log_world_size_to_tensorboard ................... False
  logging_level ................................... 0
  loss_scale ...................................... None
  loss_scale_window ............................... 1000
  lr .............................................. 0.0005
  lr_decay_iters .................................. 150000
  lr_decay_samples ................................ None
  lr_decay_style .................................. cosine
  lr_warmup_fraction .............................. None
  lr_warmup_init .................................. 0.0
  lr_warmup_iters ................................. 2
  lr_warmup_samples ............................... 0
  lr_wsd_decay_iters .............................. None
  lr_wsd_decay_samples ............................ None
  lr_wsd_decay_style .............................. exponential
  main_grads_dtype ................................ torch.float32
  main_params_dtype ............................... torch.float32
  make_vocab_size_divisible_by .................... 128
  mamba_head_dim .................................. 64
  mamba_num_groups ................................ 8
  mamba_num_heads ................................. None
  mamba_state_dim ................................. 128
  manual_gc ....................................... False
  manual_gc_eval .................................. True
  manual_gc_interval .............................. 0
  mask_factor ..................................... 1.0
  mask_prob ....................................... 0.15
  mask_type ....................................... random
  masked_softmax_fusion ........................... True
  max_position_embeddings ......................... 1024
  max_tokens_to_oom ............................... 12000
  memory_snapshot_path ............................ snapshot.pickle
  merge_file ...................................... merges.txt
  micro_batch_size ................................ 1
  microbatch_group_size_per_vp_stage .............. None
  mid_level_dataset_surplus ....................... 0.005
  min_loss_scale .................................. 1.0
  min_lr .......................................... 0.0
  mlp_chunks_for_prefill .......................... 1
  mmap_bin_files .................................. True
  mock_data ....................................... True
  moe_apply_probs_on_input ........................ False
  moe_aux_loss_coeff .............................. 0.0
  moe_enable_deepep ............................... False
  moe_expert_capacity_factor ...................... None
  moe_extended_tp ................................. False
  moe_ffn_hidden_size ............................. None
  moe_grouped_gemm ................................ False
  moe_input_jitter_eps ............................ None
  moe_layer_freq .................................. 1
  moe_layer_recompute ............................. False
  moe_pad_expert_input_to_capacity ................ False
  moe_per_layer_logging ........................... False
  moe_permute_fusion .............................. False
  moe_router_bias_update_rate ..................... 0.001
  moe_router_dtype ................................ None
  moe_router_enable_expert_bias ................... False
  moe_router_force_load_balancing ................. False
  moe_router_group_topk ........................... None
  moe_router_load_balancing_type .................. aux_loss
  moe_router_num_groups ........................... None
  moe_router_padding_for_fp8 ...................... False
  moe_router_pre_softmax .......................... False
  moe_router_score_function ....................... softmax
  moe_router_topk ................................. 2
  moe_router_topk_scaling_factor .................. None
  moe_shared_expert_intermediate_size ............. None
  moe_shared_expert_overlap ....................... False
  moe_token_dispatcher_type ....................... allgather
  moe_token_drop_policy ........................... probs
  moe_use_legacy_grouped_gemm ..................... False
  moe_use_upcycling ............................... False
  moe_z_loss_coeff ................................ None
  mrope_section ................................... None
  mscale .......................................... 1.0
  mscale_all_dim .................................. 1.0
  mtp_loss_scaling_factor ......................... 0.1
  mtp_num_layers .................................. None
  multi_latent_attention .......................... False
  nccl_all_reduce_for_prefill ..................... False
  nccl_communicator_config_path ................... None
  nccl_ub ......................................... False
  no_load_optim ................................... None
  no_load_rng ..................................... None
  no_persist_layer_norm ........................... False
  no_rope_freq .................................... None
  no_save_optim ................................... None
  no_save_rng ..................................... None
  non_persistent_ckpt_type ........................ None
  non_persistent_global_ckpt_dir .................. None
  non_persistent_local_ckpt_algo .................. fully_parallel
  non_persistent_local_ckpt_dir ................... None
  non_persistent_save_interval .................... None
  norm_epsilon .................................... 1e-05
  normalization ................................... LayerNorm
  num_attention_heads ............................. 64
  num_channels .................................... 3
  num_classes ..................................... 1000
  num_dataset_builder_threads ..................... 1
  num_distributed_optimizer_instances ............. 1
  num_experts ..................................... None
  num_layers ...................................... 2
  num_layers_at_end_in_bf16 ....................... 1
  num_layers_at_start_in_bf16 ..................... 1
  num_layers_per_virtual_pipeline_stage ........... None
  num_query_groups ................................ 16
  num_virtual_stages_per_pipeline_rank ............ None
  num_workers ..................................... 2
  object_storage_cache_path ....................... None
  one_logger_async ................................ False
  one_logger_project .............................. megatron-lm
  one_logger_run_name ............................. None
  onnx_safe ....................................... None
  openai_gelu ..................................... False
  optimizer ....................................... adam
  optimizer_cpu_offload ........................... False
  optimizer_offload_fraction ...................... 1.0
  output_bert_embeddings .......................... False
  overlap_cpu_optimizer_d2h_h2d ................... False
  overlap_grad_reduce ............................. False
  overlap_p2p_comm ................................ False
  overlap_p2p_comm_warmup_flush ................... False
  overlap_param_gather ............................ False
  overlap_param_gather_with_optimizer_step ........ False
  override_opt_param_scheduler .................... False
  params_dtype .................................... torch.float16
  patch_dim ....................................... 16
  per_split_data_args_path ........................ None
  perform_initialization .......................... True
  pin_cpu_grads ................................... True
  pin_cpu_params .................................. True
  pipeline_model_parallel_comm_backend ............ None
  pipeline_model_parallel_size .................... 1
  pipeline_model_parallel_split_rank .............. None
  position_embedding_type ......................... learned_absolute
  pretrained_checkpoint ........................... None
  profile ......................................... False
  profile_ranks ................................... [0]
  profile_step_end ................................ 12
  profile_step_start .............................. 10
  q_lora_rank ..................................... None
  qk_head_dim ..................................... 128
  qk_l2_norm ...................................... False
  qk_layernorm .................................... False
  qk_pos_emb_head_dim ............................. 64
  query_in_block_prob ............................. 0.1
  rampup_batch_size ............................... None
  rank ............................................ 0
  recompute_granularity ........................... None
  recompute_method ................................ None
  recompute_modules ............................... None
  recompute_num_layers ............................ None
  record_memory_history ........................... False
  relative_attention_max_distance ................. 128
  relative_attention_num_buckets .................. 32
  replication ..................................... False
  replication_factor .............................. 2
  replication_jump ................................ None
  rerun_mode ...................................... disabled
  reset_attention_mask ............................ False
  reset_position_ids .............................. False
  result_rejected_tracker_filename ................ None
  retriever_report_topk_accuracies ................ []
  retriever_score_scaling ......................... False
  retriever_seq_length ............................ 256
  retro_add_retriever ............................. False
  retro_attention_gate ............................ 1
  retro_cyclic_train_iters ........................ None
  retro_encoder_attention_dropout ................. 0.1
  retro_encoder_hidden_dropout .................... 0.1
  retro_encoder_layers ............................ 2
  retro_num_neighbors ............................. 2
  retro_num_retrieved_chunks ...................... 2
  retro_project_dir ............................... None
  retro_verify_neighbor_count ..................... True
  rope_scaling_factor ............................. 8.0
  rotary_base ..................................... 10000
  rotary_interleaved .............................. False
  rotary_percent .................................. 1.0
  rotary_scaling_factor ........................... 1.0
  rotary_seq_len_interpolation_factor ............. None
  run_workload_inspector_server ................... False
  sample_rate ..................................... 1.0
  save ............................................ gpt-checkpoint
  save_interval ................................... 16
  scatter_gather_tensors_in_pipeline .............. True
  seed ............................................ 1234
  seq_length ...................................... 1024
  sequence_parallel ............................... False
  sgd_momentum .................................... 0.9
  short_seq_prob .................................. 0.1
  skip_train ...................................... False
  skipped_train_samples ........................... 0
  spec ............................................ None
  split ........................................... None
  squared_relu .................................... False
  start_weight_decay .............................. 0.1
  straggler_ctrlr_port ............................ 65535
  straggler_minmax_count .......................... 1
  suggested_communication_unit_size ............... None
  swiglu .......................................... False
  swin_backbone_type .............................. tiny
  symmetric_ar_type ............................... None
  te_rng_tracker .................................. False
  tensor_model_parallel_size ...................... 8
  tensorboard_dir ................................. tensorboard-logs/
  tensorboard_log_interval ........................ 1
  tensorboard_queue_size .......................... 1000
  test_data_path .................................. None
  test_mode ....................................... False
  tiktoken_num_special_tokens ..................... 1000
  tiktoken_pattern ................................ None
  tiktoken_special_tokens ......................... None
  timing_log_level ................................ 0
  timing_log_option ............................... minmax
  titles_data_path ................................ None
  tokenizer_model ................................. None
  tokenizer_type .................................. GPT2BPETokenizer
  torch_fsdp2_reshard_after_forward ............... True
  tp_comm_bootstrap_backend ....................... nccl
  tp_comm_bulk_dgrad .............................. True
  tp_comm_bulk_wgrad .............................. True
  tp_comm_overlap ................................. False
  tp_comm_overlap_ag .............................. True
  tp_comm_overlap_cfg ............................. None
  tp_comm_overlap_rs .............................. True
  tp_comm_overlap_rs_dgrad ........................ False
  tp_comm_split_ag ................................ True
  tp_comm_split_rs ................................ True
  train_data_path ................................. None
  train_iters ..................................... 10
  train_samples ................................... None
  train_sync_interval ............................. None
  transformer_impl ................................ transformer_engine
  transformer_pipeline_model_parallel_size ........ 1
  untie_embeddings_and_output_weights ............. False
  use_checkpoint_args ............................. False
  use_checkpoint_opt_param_scheduler .............. False
  use_cpu_initialization .......................... None
  use_custom_fsdp ................................. False
  use_dist_ckpt ................................... True
  use_dist_ckpt_deprecated ........................ False
  use_distributed_optimizer ....................... False
  use_flash_attn .................................. False
  use_legacy_models ............................... False
  use_mp_args_from_checkpoint_args ................ False
  use_one_sent_docs ............................... False
  use_persistent_ckpt_worker ...................... False
  use_precision_aware_optimizer ................... False
  use_pytorch_profiler ............................ False
  use_ring_exchange_p2p ........................... False
  use_rope_scaling ................................ False
  use_rotary_position_embeddings .................. False
  use_sharp ....................................... False
  use_tokenizer_model_from_checkpoint_args ........ True
  use_torch_fsdp2 ................................. False
  use_torch_optimizer_for_cpu_offload ............. False
  use_tp_pp_dp_mapping ............................ False
  v_head_dim ...................................... 128
  valid_data_path ................................. None
  variable_seq_lengths ............................ False
  virtual_pipeline_model_parallel_size ............ None
  vision_backbone_type ............................ vit
  vision_pretraining .............................. False
  vision_pretraining_type ......................... classify
  vocab_extra_ids ................................. 0
  vocab_file ...................................... vocab.json
  vocab_size ...................................... None
  wandb_exp_name ..................................
  wandb_project ...................................
  wandb_save_dir ..................................
  weight_decay .................................... 0.1
  weight_decay_incr_style ......................... constant
  wgrad_deferral_limit ............................ 0
  world_size ...................................... 8
  yaml_cfg ........................................ None
-------------------- end of arguments ---------------------
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
> building GPT2BPETokenizer tokenizer ...
INFO:megatron.training.initialize:Setting logging level to 0
 > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
INFO:megatron.training.initialize:Setting logging level to 0
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
> initializing torch distributed ...
> initialized tensor model parallel with size 8
> initialized pipeline model parallel with size 1
> setting random seeds to 1234 ...
> compiling dataset index builder ...
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
INFO:megatron.training.initialize:Setting logging level to 0
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
make: Nothing to be done for 'default'.
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.044 seconds
> compiling and loading fused kernels ...
INFO:megatron.training.initialize:Setting logging level to 0
>>> done with compiling and loading fused kernels. Compilation time: 2.457 seconds
attnserver.run_attnserver.slurm.sh.343210.err.log
ADDED
@@ -0,0 +1,149 @@
+ source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ '[' -z x ']'
++ conda activate
++ local cmd=activate
++ case "$cmd" in
++ __conda_activate activate
++ '[' -n '' ']'
++ local ask_conda
+++ PS1=
+++ __conda_exe shell.posix activate
+++ '[' -n '' ']'
+++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
++ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(base) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_SHLVL='\''1'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+++ unset _CE_M
+++ unset _CE_CONDA
+++ PS1='(base) '
+++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+++ export CONDA_SHLVL=1
+++ CONDA_SHLVL=1
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
+++ CONDA_PROMPT_MODIFIER='(base) '
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ __conda_hashr
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
+ conda activate junda-attnserver
+ local cmd=activate
+ case "$cmd" in
+ __conda_activate activate junda-attnserver
+ '[' -n '' ']'
+ local ask_conda
++ PS1='(base) '
++ __conda_exe shell.posix activate junda-attnserver
++ '[' -n '' ']'
++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
+ ask_conda='unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
+ eval 'unset _CE_M
unset _CE_CONDA
PS1='\''(junda-attnserver) '\''
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
++ unset _CE_M
++ unset _CE_CONDA
++ PS1='(junda-attnserver) '
++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
++ export CONDA_SHLVL=2
++ CONDA_SHLVL=2
++ export CONDA_DEFAULT_ENV=junda-attnserver
++ CONDA_DEFAULT_ENV=junda-attnserver
++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
+ __conda_hashr
+ '[' -n '' ']'
+ '[' -n '' ']'
+ hash -r
+ export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
+ CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 113 |
+
+ mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 114 |
+
+ export PROF_TP_SIZE=8
|
| 115 |
+
+ PROF_TP_SIZE=8
|
| 116 |
+
+ export PROF_CP_SIZE=1
|
| 117 |
+
+ PROF_CP_SIZE=1
|
| 118 |
+
+ export PROF_BS=8
|
| 119 |
+
+ PROF_BS=8
|
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=1024
+ PROF_CTX_LENGTH=1024
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp1.bs8.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp1.bs8.json' ']'
+ echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8'
+ srun bash ./attnserver.sh
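The xtrace above shows the sweep driver: for each context length it exports PROF_CTX_LENGTH, is meant to skip configurations whose Chrome trace already exists, and launches ./attnserver.sh under srun. A minimal Python sketch of that loop under the same naming scheme; the glob call is an assumption standing in for the quoted `[ -f ... ]` test (which, as written, compares against the literal `*` pattern):

    import glob
    import os
    import subprocess

    PREFIX = "/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5"
    TP, CP, BS = 8, 1, 8

    for ctx in (1024, 2048, 4096, 8192, 12288, 16384, 24576, 32768,
                40960, 49152, 65536, 81920, 98304, 131072):
        os.environ["PROF_CTX_LENGTH"] = str(ctx)
        # Skip a config once a matching trace file exists.
        if glob.glob(f"{PREFIX}/mytrace.L{ctx}*tp{TP}.cp{CP}.bs{BS}.json"):
            continue
        print(f"Running ctx_length={ctx}, TP_SIZE={TP}, CP_SIZE={CP}, BATCH_SIZE={BS}")
        subprocess.run(["srun", "bash", "./attnserver.sh"], check=True)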
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343210 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-768:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 1 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
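As the FutureWarning above says, `torch.distributed.launch` is deprecated in favor of torchrun, which exports the rank in the environment instead of passing a `--local-rank` argument. A minimal sketch of the environment-based pattern the warning asks for (the set_device call is a common convention, not taken from this log):

    import os
    import torch

    # torchrun sets LOCAL_RANK (plus RANK and WORLD_SIZE) for each worker.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)  # bind this process to its GPU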
W0621 21:19:24.688000 2174748 site-packages/torch/distributed/run.py:766]
W0621 21:19:24.688000 2174748 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:19:24.688000 2174748 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:19:24.688000 2174748 site-packages/torch/distributed/run.py:766] *****************************************
[rank1]:[W621 21:19:46.843638124 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank3]:[W621 21:19:46.273254932 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank7]:[W621 21:19:46.287551098 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank0]:[W621 21:19:46.300492516 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank5]:[W621 21:19:46.306446014 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank4]:[W621 21:19:46.306935911 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank6]:[W621 21:19:46.310121545 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank2]:[W621 21:19:46.317946415 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
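All eight ranks warn because the NCCL process group is created before each process has claimed its GPU. A minimal sketch of the remedy the warning itself names, assuming a PyTorch version whose init_process_group accepts device_id:

    import os
    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}")
    # Passing device_id pins the group to one device, avoiding the
    # "device used by this process is currently unknown" warning.
    dist.init_process_group(backend="nccl", device_id=device)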
attnserver.run_attnserver.slurm.sh.343210.out.log
ADDED
@@ -0,0 +1,536 @@
Running ctx_length=1024, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=8
Cleaning up checkpoint directory: gpt-checkpoint
--------------------------------
CTX_LENGTH: 1024
TP_SIZE: 8
CP_SIZE: 1
CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
INFO:megatron.training.initialize:Setting logging level to 0
using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
Number of virtual stages per pipeline stage: None
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
using torch.float16 for parameters ...
------------------------ arguments ------------------------
account_for_embedding_in_pipeline_split ......... False
account_for_loss_in_pipeline_split .............. False
accumulate_allreduce_grads_in_fp32 .............. False
adam_beta1 ...................................... 0.9
adam_beta2 ...................................... 0.999
adam_eps ........................................ 1e-08
add_bias_linear ................................. True
add_position_embedding .......................... True
add_qkv_bias .................................... True
adlr_autoresume ................................. False
adlr_autoresume_interval ........................ 1000
align_grad_reduce ............................... True
align_param_gather .............................. False
app_tag_run_name ................................ None
app_tag_run_version ............................. 0.0.0
apply_layernorm_1p .............................. False
apply_query_key_layer_scaling ................... False
apply_residual_connection_post_layernorm ........ False
apply_rope_fusion ............................... False
async_save ...................................... None
async_tensor_model_parallel_allreduce ........... True
attention_backend ............................... AttnBackend.auto
attention_dropout ............................... 0.1
attention_softmax_in_fp32 ....................... False
auto_detect_ckpt_format ......................... False
barrier_with_L1_time ............................ True
bert_binary_head ................................ True
bert_embedder_type .............................. megatron
bert_load ....................................... None
bf16 ............................................ False
bias_dropout_fusion ............................. True
bias_gelu_fusion ................................ True
bias_swiglu_fusion .............................. True
biencoder_projection_dim ........................ 0
biencoder_shared_query_context_model ............ False
block_data_path ................................. None
calc_ft_timeouts ................................ False
calculate_per_token_loss ........................ False
check_for_large_grads ........................... False
check_for_nan_in_loss_and_grad .................. False
check_for_spiky_loss ............................ False
check_weight_hash_across_dp_replicas_interval ... None
ckpt_assume_constant_structure .................. False
ckpt_convert_format ............................. None
ckpt_convert_save ............................... None
ckpt_convert_update_legacy_dist_opt_format ...... False
ckpt_format ..................................... torch_dist
ckpt_fully_parallel_load ........................ False
ckpt_fully_parallel_save ........................ True
ckpt_fully_parallel_save_deprecated ............. False
ckpt_step ....................................... None
classes_fraction ................................ 1.0
clip_grad ....................................... 1.0
clone_scatter_output_in_embedding ............... True
config_logger_dir ...............................
consumed_train_samples .......................... 0
consumed_valid_samples .......................... 0
context_parallel_size ........................... 1
cp_comm_type .................................... ['p2p']
create_attention_mask_in_dataloader ............. True
cross_entropy_fusion_impl ....................... native
cross_entropy_loss_fusion ....................... False
cuda_graph_scope ................................ full
cuda_graph_warmup_steps ......................... 3
data_args_path .................................. None
data_cache_path ................................. None
data_parallel_random_init ....................... False
data_parallel_sharding_strategy ................. no_shard
data_parallel_size .............................. 1
data_path ....................................... None
data_per_class_fraction ......................... 1.0
data_sharding ................................... True
dataloader_type ................................. single
ddp_average_in_collective ....................... False
ddp_bucket_size ................................. None
ddp_num_buckets ................................. None
ddp_pad_buckets_for_high_nccl_busbw ............. False
decoder_first_pipeline_num_layers ............... None
decoder_last_pipeline_num_layers ................ None
decoder_num_layers .............................. None
decoder_seq_length .............................. None
decoupled_lr .................................... None
decoupled_min_lr ................................ None
decrease_batch_size_if_needed ................... False
defer_embedding_wgrad_compute ................... False
deprecated_use_mcore_models ..................... False
deterministic_mode .............................. False
dino_bottleneck_size ............................ 256
dino_freeze_last_layer .......................... 1
dino_head_hidden_size ........................... 2048
dino_local_crops_number ......................... 10
dino_local_img_size ............................. 96
dino_norm_last_layer ............................ False
dino_teacher_temp ............................... 0.07
dino_warmup_teacher_temp ........................ 0.04
dino_warmup_teacher_temp_epochs ................. 30
disable_bf16_reduced_precision_matmul ........... False
disable_mamba_mem_eff_path ...................... False
disable_straggler_on_startup .................... False
dist_ckpt_format_deprecated ..................... None
dist_ckpt_strictness ............................ assume_ok_unexpected
distribute_saved_activations .................... False
distributed_backend ............................. nccl
distributed_timeout_minutes ..................... 10
embedding_path .................................. None
empty_unused_memory_level ....................... 0
enable_cuda_graph ............................... False
enable_ft_package ............................... False
enable_gloo_process_groups ...................... True
enable_msc ...................................... True
enable_one_logger ............................... True
encoder_num_layers .............................. 2
encoder_pipeline_model_parallel_size ............ 0
encoder_seq_length .............................. 1024
encoder_tensor_model_parallel_size .............. 0
end_weight_decay ................................ 0.1
eod_mask_loss ................................... False
error_injection_rate ............................ 0
error_injection_type ............................ transient_error
eval_interval ................................... 16
eval_iters ...................................... 1
evidence_data_path .............................. None
exit_duration_in_mins ........................... None
exit_interval ................................... None
exit_on_missing_checkpoint ...................... False
exit_signal_handler ............................. False
exp_avg_dtype ................................... torch.float32
exp_avg_sq_dtype ................................ torch.float32
expert_model_parallel_size ...................... 1
expert_tensor_parallel_size ..................... 8
external_cuda_graph ............................. False
ffn_hidden_size ................................. 16384
finetune ........................................ False
first_last_layers_bf16 .......................... False
flash_decode .................................... False
fp16 ............................................ True
fp16_lm_cross_entropy ........................... False
fp32_residual_connection ........................ False
fp8 ............................................. None
fp8_amax_compute_algo ........................... most_recent
fp8_amax_history_len ............................ 1
fp8_interval .................................... 1
fp8_margin ...................................... 0
fp8_param_gather ................................ False
fp8_recipe ...................................... delayed
fp8_wgrad ....................................... True
fsdp_double_buffer .............................. False
global_batch_size ............................... 1
grad_reduce_in_bf16 ............................. False
gradient_accumulation_fusion .................... True
gradient_reduce_div_fusion ...................... True
group_query_attention ........................... True
head_lr_mult .................................... 1.0
heterogeneous_layers_config_encoded_json ........ None
heterogeneous_layers_config_path ................ None
hidden_dropout .................................. 0.1
hidden_size ..................................... 4096
hierarchical_context_parallel_sizes ............. None
high_priority_stream_groups ..................... []
hybrid_attention_ratio .......................... 0.0
hybrid_mlp_ratio ................................ 0.0
hybrid_override_pattern ......................... None
hysteresis ...................................... 2
ict_head_size ................................... None
ict_load ........................................ None
img_h ........................................... 224
img_w ........................................... 224
indexer_batch_size .............................. 128
indexer_log_interval ............................ 1000
inference_batch_times_seqlen_threshold .......... -1
inference_dynamic_batching ...................... False
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
inference_dynamic_batching_buffer_overflow_factor None
inference_dynamic_batching_buffer_size_gb ....... 40.0
inference_dynamic_batching_chunk_size ........... 256
inference_dynamic_batching_max_requests_override None
inference_dynamic_batching_max_tokens_override .. None
inference_max_batch_size ........................ 8
inference_max_seq_length ........................ 2560
inference_rng_tracker ........................... False
init_method_std ................................. 0.02
init_method_xavier_uniform ...................... False
init_model_with_meta_device ..................... False
initial_loss_scale .............................. 4294967296
inprocess_active_world_size ..................... 8
inprocess_barrier_timeout ....................... 120
inprocess_completion_timeout .................... 120
inprocess_empty_cuda_cache ...................... False
inprocess_granularity ........................... node
inprocess_hard_timeout .......................... 90
inprocess_heartbeat_interval .................... 30
inprocess_heartbeat_timeout ..................... 60
inprocess_last_call_wait ........................ 1
inprocess_max_iterations ........................ None
inprocess_monitor_process_interval .............. 1.0
inprocess_monitor_thread_interval ............... 1.0
inprocess_progress_watchdog_interval ............ 1.0
inprocess_restart ............................... False
inprocess_soft_timeout .......................... 60
inprocess_termination_grace_time ................ 1
is_hybrid_model ................................. False
iter_per_epoch .................................. 1250
iterations_to_skip .............................. []
keep_fp8_transpose_cache_when_using_custom_fsdp . False
kv_channels ..................................... 64
kv_lora_rank .................................... 32
lazy_mpu_init ................................... None
load ............................................ gpt-checkpoint
load_model_opt_format ........................... False
local_rank ...................................... 0
log_interval .................................... 1
log_loss_scale_to_tensorboard ................... True
log_memory_to_tensorboard ....................... False
log_num_zeros_in_grad ........................... False
log_params_norm ................................. False
log_progress .................................... False
log_straggler ................................... False
log_throughput .................................. False
log_timers_to_tensorboard ....................... False
log_validation_ppl_to_tensorboard ............... False
log_world_size_to_tensorboard ................... False
logging_level ................................... 0
loss_scale ...................................... None
loss_scale_window ............................... 1000
lr .............................................. 0.0005
lr_decay_iters .................................. 150000
lr_decay_samples ................................ None
lr_decay_style .................................. cosine
lr_warmup_fraction .............................. None
lr_warmup_init .................................. 0.0
lr_warmup_iters ................................. 2
lr_warmup_samples ............................... 0
lr_wsd_decay_iters .............................. None
lr_wsd_decay_samples ............................ None
lr_wsd_decay_style .............................. exponential
main_grads_dtype ................................ torch.float32
main_params_dtype ............................... torch.float32
make_vocab_size_divisible_by .................... 128
mamba_head_dim .................................. 64
mamba_num_groups ................................ 8
mamba_num_heads ................................. None
mamba_state_dim ................................. 128
manual_gc ....................................... False
manual_gc_eval .................................. True
manual_gc_interval .............................. 0
mask_factor ..................................... 1.0
mask_prob ....................................... 0.15
mask_type ....................................... random
masked_softmax_fusion ........................... True
max_position_embeddings ......................... 1024
max_tokens_to_oom ............................... 12000
memory_snapshot_path ............................ snapshot.pickle
merge_file ...................................... merges.txt
micro_batch_size ................................ 1
microbatch_group_size_per_vp_stage .............. None
mid_level_dataset_surplus ....................... 0.005
min_loss_scale .................................. 1.0
min_lr .......................................... 0.0
mlp_chunks_for_prefill .......................... 1
mmap_bin_files .................................. True
mock_data ....................................... True
moe_apply_probs_on_input ........................ False
moe_aux_loss_coeff .............................. 0.0
moe_enable_deepep ............................... False
moe_expert_capacity_factor ...................... None
moe_extended_tp ................................. False
moe_ffn_hidden_size ............................. None
moe_grouped_gemm ................................ False
moe_input_jitter_eps ............................ None
moe_layer_freq .................................. 1
moe_layer_recompute ............................. False
moe_pad_expert_input_to_capacity ................ False
moe_per_layer_logging ........................... False
moe_permute_fusion .............................. False
moe_router_bias_update_rate ..................... 0.001
moe_router_dtype ................................ None
moe_router_enable_expert_bias ................... False
moe_router_force_load_balancing ................. False
moe_router_group_topk ........................... None
moe_router_load_balancing_type .................. aux_loss
moe_router_num_groups ........................... None
moe_router_padding_for_fp8 ...................... False
moe_router_pre_softmax .......................... False
moe_router_score_function ....................... softmax
moe_router_topk ................................. 2
moe_router_topk_scaling_factor .................. None
moe_shared_expert_intermediate_size ............. None
moe_shared_expert_overlap ....................... False
moe_token_dispatcher_type ....................... allgather
moe_token_drop_policy ........................... probs
moe_use_legacy_grouped_gemm ..................... False
moe_use_upcycling ............................... False
moe_z_loss_coeff ................................ None
mrope_section ................................... None
mscale .......................................... 1.0
mscale_all_dim .................................. 1.0
mtp_loss_scaling_factor ......................... 0.1
mtp_num_layers .................................. None
multi_latent_attention .......................... False
nccl_all_reduce_for_prefill ..................... False
nccl_communicator_config_path ................... None
nccl_ub ......................................... False
no_load_optim ................................... None
no_load_rng ..................................... None
no_persist_layer_norm ........................... False
no_rope_freq .................................... None
no_save_optim ................................... None
no_save_rng ..................................... None
non_persistent_ckpt_type ........................ None
non_persistent_global_ckpt_dir .................. None
non_persistent_local_ckpt_algo .................. fully_parallel
non_persistent_local_ckpt_dir ................... None
non_persistent_save_interval .................... None
norm_epsilon .................................... 1e-05
normalization ................................... LayerNorm
num_attention_heads ............................. 64
num_channels .................................... 3
num_classes ..................................... 1000
num_dataset_builder_threads ..................... 1
num_distributed_optimizer_instances ............. 1
num_experts ..................................... None
num_layers ...................................... 2
num_layers_at_end_in_bf16 ....................... 1
num_layers_at_start_in_bf16 ..................... 1
num_layers_per_virtual_pipeline_stage ........... None
num_query_groups ................................ 16
num_virtual_stages_per_pipeline_rank ............ None
num_workers ..................................... 2
object_storage_cache_path ....................... None
one_logger_async ................................ False
one_logger_project .............................. megatron-lm
one_logger_run_name ............................. None
onnx_safe ....................................... None
openai_gelu ..................................... False
optimizer ....................................... adam
optimizer_cpu_offload ........................... False
optimizer_offload_fraction ...................... 1.0
output_bert_embeddings .......................... False
overlap_cpu_optimizer_d2h_h2d ................... False
overlap_grad_reduce ............................. False
overlap_p2p_comm ................................ False
overlap_p2p_comm_warmup_flush ................... False
overlap_param_gather ............................ False
overlap_param_gather_with_optimizer_step ........ False
override_opt_param_scheduler .................... False
params_dtype .................................... torch.float16
patch_dim ....................................... 16
per_split_data_args_path ........................ None
perform_initialization .......................... True
pin_cpu_grads ................................... True
pin_cpu_params .................................. True
pipeline_model_parallel_comm_backend ............ None
pipeline_model_parallel_size .................... 1
pipeline_model_parallel_split_rank .............. None
position_embedding_type ......................... learned_absolute
pretrained_checkpoint ........................... None
profile ......................................... False
profile_ranks ................................... [0]
profile_step_end ................................ 12
profile_step_start .............................. 10
q_lora_rank ..................................... None
qk_head_dim ..................................... 128
qk_l2_norm ...................................... False
qk_layernorm .................................... False
qk_pos_emb_head_dim ............................. 64
query_in_block_prob ............................. 0.1
rampup_batch_size ............................... None
rank ............................................ 0
recompute_granularity ........................... None
recompute_method ................................ None
recompute_modules ............................... None
recompute_num_layers ............................ None
record_memory_history ........................... False
relative_attention_max_distance ................. 128
relative_attention_num_buckets .................. 32
replication ..................................... False
replication_factor .............................. 2
replication_jump ................................ None
rerun_mode ...................................... disabled
reset_attention_mask ............................ False
reset_position_ids .............................. False
result_rejected_tracker_filename ................ None
retriever_report_topk_accuracies ................ []
retriever_score_scaling ......................... False
retriever_seq_length ............................ 256
retro_add_retriever ............................. False
retro_attention_gate ............................ 1
retro_cyclic_train_iters ........................ None
retro_encoder_attention_dropout ................. 0.1
retro_encoder_hidden_dropout .................... 0.1
retro_encoder_layers ............................ 2
retro_num_neighbors ............................. 2
retro_num_retrieved_chunks ...................... 2
retro_project_dir ............................... None
retro_verify_neighbor_count ..................... True
rope_scaling_factor ............................. 8.0
rotary_base ..................................... 10000
rotary_interleaved .............................. False
rotary_percent .................................. 1.0
rotary_scaling_factor ........................... 1.0
rotary_seq_len_interpolation_factor ............. None
run_workload_inspector_server ................... False
sample_rate ..................................... 1.0
save ............................................ gpt-checkpoint
save_interval ................................... 16
scatter_gather_tensors_in_pipeline .............. True
seed ............................................ 1234
seq_length ...................................... 1024
sequence_parallel ............................... False
sgd_momentum .................................... 0.9
short_seq_prob .................................. 0.1
skip_train ...................................... False
skipped_train_samples ........................... 0
spec ............................................ None
split ........................................... None
squared_relu .................................... False
start_weight_decay .............................. 0.1
straggler_ctrlr_port ............................ 65535
straggler_minmax_count .......................... 1
suggested_communication_unit_size ............... None
swiglu .......................................... False
swin_backbone_type .............................. tiny
symmetric_ar_type ............................... None
te_rng_tracker .................................. False
tensor_model_parallel_size ...................... 8
tensorboard_dir ................................. tensorboard-logs/
tensorboard_log_interval ........................ 1
tensorboard_queue_size .......................... 1000
test_data_path .................................. None
test_mode ....................................... False
tiktoken_num_special_tokens ..................... 1000
tiktoken_pattern ................................ None
tiktoken_special_tokens ......................... None
timing_log_level ................................ 0
timing_log_option ............................... minmax
titles_data_path ................................ None
tokenizer_model ................................. None
tokenizer_type .................................. GPT2BPETokenizer
torch_fsdp2_reshard_after_forward ............... True
tp_comm_bootstrap_backend ....................... nccl
tp_comm_bulk_dgrad .............................. True
tp_comm_bulk_wgrad .............................. True
tp_comm_overlap ................................. False
tp_comm_overlap_ag .............................. True
tp_comm_overlap_cfg ............................. None
tp_comm_overlap_rs .............................. True
tp_comm_overlap_rs_dgrad ........................ False
tp_comm_split_ag ................................ True
tp_comm_split_rs ................................ True
train_data_path ................................. None
train_iters ..................................... 10
train_samples ................................... None
train_sync_interval ............................. None
transformer_impl ................................ transformer_engine
transformer_pipeline_model_parallel_size ........ 1
untie_embeddings_and_output_weights ............. False
use_checkpoint_args ............................. False
use_checkpoint_opt_param_scheduler .............. False
use_cpu_initialization .......................... None
use_custom_fsdp ................................. False
use_dist_ckpt ................................... True
use_dist_ckpt_deprecated ........................ False
use_distributed_optimizer ....................... False
use_flash_attn .................................. False
use_legacy_models ............................... False
use_mp_args_from_checkpoint_args ................ False
use_one_sent_docs ............................... False
use_persistent_ckpt_worker ...................... False
use_precision_aware_optimizer ................... False
use_pytorch_profiler ............................ False
use_ring_exchange_p2p ........................... False
use_rope_scaling ................................ False
use_rotary_position_embeddings .................. False
use_sharp ....................................... False
use_tokenizer_model_from_checkpoint_args ........ True
use_torch_fsdp2 ................................. False
use_torch_optimizer_for_cpu_offload ............. False
use_tp_pp_dp_mapping ............................ False
v_head_dim ...................................... 128
valid_data_path ................................. None
variable_seq_lengths ............................ False
virtual_pipeline_model_parallel_size ............ None
vision_backbone_type ............................ vit
vision_pretraining .............................. False
vision_pretraining_type ......................... classify
vocab_extra_ids ................................. 0
vocab_file ...................................... vocab.json
vocab_size ...................................... None
wandb_exp_name ..................................
wandb_project ...................................
wandb_save_dir ..................................
weight_decay .................................... 0.1
weight_decay_incr_style ......................... constant
wgrad_deferral_limit ............................ 0
world_size ...................................... 8
yaml_cfg ........................................ None
-------------------- end of arguments ---------------------
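Several of the sizes above must divide evenly for this TP=8 run to be valid: 64 attention heads and 16 query groups shard across 8 tensor-parallel ranks as 8 query heads and 2 key/value groups per GPU, and hidden_size/num_attention_heads matches kv_channels (4096/64 = 64). A small sanity-check sketch of that arithmetic, assuming Megatron's usual even head partitioning:

    # Values copied from the argument dump above.
    tp, heads, groups, hidden, kv_channels = 8, 64, 16, 4096, 64

    assert heads % tp == 0 and groups % tp == 0   # heads shard evenly
    print(heads // tp)    # 8 query heads per rank
    print(groups // tp)   # 2 KV groups per rank
    assert hidden // heads == kv_channels         # 64-dim heads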
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
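The constant 1 is forced by the batch arguments above: global_batch_size = micro_batch_size × data_parallel_size × num_microbatches, so with all three at 1 there is exactly one microbatch per step. In sketch form:

    global_bs, micro_bs, dp = 1, 1, 1
    assert global_bs % (micro_bs * dp) == 0
    print(global_bs // (micro_bs * dp))  # 1 microbatch per step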
> building GPT2BPETokenizer tokenizer ...
INFO:megatron.training.initialize:Setting logging level to 0
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
INFO:megatron.training.initialize:Setting logging level to 0
> padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
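The 943 dummy tokens follow from rounding the 50,257-token GPT-2 vocabulary up so the embedding table shards evenly: the padded size must be a multiple of make_vocab_size_divisible_by (128) times the tensor-parallel size (8), i.e. 1024, and the next such multiple is 51,200. A sketch of that rounding rule:

    def padded_vocab(orig, divisible_by=128, tp=8):
        # Round up to a multiple of divisible_by * tensor-parallel size.
        multiple = divisible_by * tp
        return -(-orig // multiple) * multiple

    print(padded_vocab(50257))          # 51200
    print(padded_vocab(50257) - 50257)  # 943 dummy tokens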
INFO:megatron.training.initialize:Setting logging level to 0
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
> initializing torch distributed ...
> initialized tensor model parallel with size 8
> initialized pipeline model parallel with size 1
> setting random seeds to 1234 ...
> compiling dataset index builder ...
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
make: Nothing to be done for 'default'.
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.047 seconds
> compiling and loading fused kernels ...