| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,896,185,986
|
[FSDP2] CPU Offload Does Not Work with `torch.nn.utils.clip_grad_norm`
|
leonardo0lyj
|
open
|
[
"oncall: distributed",
"triaged"
] | 5
|
NONE
|
### 🐛 Describe the bug
Hey Andrew @awgu, as a big fan of FSDP2 and DTensor, I found a potential issue with CPU Offload x clip grad norm 😄
*Setup*
- `fully_shard(offload_policy=CPUOffloadPolicy())`
- `torch.nn.utils.clip_grad_norm_(model.parameters())`
*Result*
- RuntimeError: the DTensor grad-clip path tries to run an all-reduce on the CPU device.
```
E File "xxxxx", line 200, in test_cpu_offload
E torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm, norm_type)
E ^^^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 21, in _no_grad_wrapper
E return func(*args, **kwargs)
E ^^^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 82, in clip_grad_norm_
E clip_coef = max_norm / (total_norm + 1e-6)
E ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
E File "/usr/lib/python3.11/site-packages/torch/_tensor.py", line 41, in wrapped
E return f(*args, **kwargs)
E ^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/_tensor.py", line 967, in __rdiv__
E return self.reciprocal() * other
E ^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/_compile.py", line 31, in inner
E return disable_fn(*args, **kwargs)
E ^^^^^^^^^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 600, in _fn
E return fn(*args, **kwargs)
E ^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/distributed/_tensor/api.py", line 310, in __torch_dispatch__
E return DTensor._op_dispatcher.dispatch(
E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/distributed/_tensor/_dispatch.py", line 172, in dispatch
E self.redistribute_local_args(
E File "/usr/lib/python3.11/site-packages/torch/distributed/_tensor/_dispatch.py", line 265, in redistribute_local_args
E resharded_local_tensor = redistribute_local_tensor(
E ^^^^^^^^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/distributed/_tensor/_redistribute.py", line 183, in redistribute_local_tensor
E new_local_tensor = partial_spec._reduce_value(
E ^^^^^^^^^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/distributed/_tensor/ops/math_ops.py", line 125, in _reduce_value
E reduced_tensor = super()._reduce_value(tensor, mesh, mesh_dim)
E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/distributed/_tensor/placement_types.py", line 418, in _reduce_value
E return funcol.all_reduce(
E ^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/distributed/_functional_collectives.py", line 248, in all_reduce
E tensor = torch.ops._c10d_functional.all_reduce(self, reduceOp.lower(), group_name)
E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E File "/usr/lib/python3.11/site-packages/torch/_ops.py", line 1061, in __call__
E return self_._op(*args, **(kwargs or {}))
E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E RuntimeError: No backend type associated with device type cpu
```
*Code*
```python
class TestClipGradNorm(DTensorTestBase):
    @with_comms
    def test_cpu_offload(self):
        class MLP(nn.Module):
            def __init__(self, hidden_dim: int, bias: bool = False):
                super().__init__()
                self.fc1 = nn.Linear(hidden_dim, hidden_dim, bias=bias)
                self.gelu = nn.GELU()
                self.fc2 = nn.Linear(hidden_dim, hidden_dim, bias=bias)

            def forward(self, x):
                x = self.fc1(x)
                x = self.gelu(x)
                x = self.fc2(x)
                return x

        model = MLP(hidden_dim=16)
        fully_shard(model, offload_policy=CPUOffloadPolicy())
        input = torch.randn((4, 16)).cuda()
        output = model(input)
        output.mean().backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.5, norm_type=2)
```
*Manual Solution*
- manually move every `sharded_param.grad` to `cuda` before `clip_grad_norm_`, then move them back to `cpu` (see the sketch below)
- (in practice, it is painful and error-prone to do so by hand)
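A minimal sketch of this manual workaround (my own wording, not from the original report; it assumes the sharded DTensor gradients can be moved across devices with `.to()` and reassigned to `.grad`):
```python
import torch

def clip_grad_norm_with_cpu_offload(model, max_norm, norm_type=2.0):
    """Sketch: temporarily move offloaded grads to GPU, clip, then move them back."""
    params = [p for p in model.parameters() if p.grad is not None]
    for p in params:
        p.grad = p.grad.to("cuda")   # give the DTensor all-reduce a CUDA backend
    total_norm = torch.nn.utils.clip_grad_norm_(params, max_norm, norm_type)
    for p in params:
        p.grad = p.grad.to("cpu")    # restore CPU-offloaded gradients
    return total_norm
```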
*Automatic Solution*
- modify `torch.nn.utils.clip_grad_norm_` to still compute norms on CPU gradients, but move the calculated CPU norm to `cuda` before the `allreduce` (i.e., `DTensor.redistribute`), then move it back to CPU
- e.g. (rough sketch):
```python
...
norms = []
for (device, _), ([device_grads], _) in grouped_grads.items():
    norms.extend(torch.linalg.vector_norm(g, norm_type) for g in device_grads if g.numel() > 0)
# move the per-grad norms to CUDA so the DTensor reduction has a backend
total_norm = torch.linalg.vector_norm(torch.stack([norm.to("cuda") for norm in norms]), norm_type)
clip_coef = max_norm / total_norm  # the DTensor allreduce (redistribute) now runs on CUDA
clip_coef_clamped = torch.clamp(clip_coef, max=1.0)
for (device, _), ([device_grads], _) in grouped_grads.items():
    for grad in device_grads:
        grad.mul_(clip_coef_clamped.to("cpu"))
...
```
What do you think? Thanks 🙏
### Versions
PyTorch version: 2.4.1+gitee1b680
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.31
Python version: 3.11.10 (main, Nov 21 2024, 15:54:09) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.15.120.bsk.2-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-40GB
GPU 1: NVIDIA A800-SXM4-40GB
GPU 2: NVIDIA A800-SXM4-40GB
GPU 3: NVIDIA A800-SXM4-40GB
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 120
On-line CPU(s) list: 0-119
Thread(s) per core: 2
Core(s) per socket: 30
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
Stepping: 6
CPU MHz: 2294.616
BogoMIPS: 4589.23
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.8 MiB
L1i cache: 1.9 MiB
L2 cache: 75 MiB
L3 cache: 108 MiB
NUMA node0 CPU(s): 0-59
NUMA node1 CPU(s): 60-119
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] torch==2.4.1+gitee1b680
[pip3] torchdistx==0.3.0.dev0+cu121
[pip3] torchvision==0.17.0+b2383d4
[pip3] triton==3.0.0
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,896,135,507
|
[WIP] Initial implementation of Grouped Gemm API
|
ngimel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cuda",
"ci-no-td",
"ciflow/rocm-mi300"
] | 15
|
COLLABORATOR
|
This PR provides an initial cutlass implementation of the grouped gemm api as described in this [document](https://docs.google.com/document/d/1985La6wUUVH1AGBkNhaGKUXzx-9ybtbUp567-vYVOM4/edit?tab=t.0#heading=h.g8lzbjnyzzx9). Any combination of 2d and 3d inputs is supported, with the 2d input being jagged and the offsets of the jagged input given by the device tensor `offs`. Only H100 is supported, and only fp8_e4m3 with bf16 output and rowwise scaling. All the dimensions of each individual gemm have to be a multiple of 16; that's a cutlass limitation.
I'll need to add those checks; for dynamic dimensions the checks will unfortunately have to be a device assert.
I had to copy-paste cutlass's `Sm90RowBroadcast` and `Sm90ColBroadcast` structs with minor changes to enable scales given as pointer arrays; ideally those should be part of cutlass itself.
I copied the schedules from the similar grouped gemm in FBGEMM, but there's a lot of room to improve perf, especially for `fast_accum=False`.
Next steps would be perf tuning and increasing coverage to B100; I don't know how the cutlass grouped gemm example handles blockwise scaling on B100.
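For readers unfamiliar with the jagged-2d layout described above, here is a hedged reference illustration (the tensor names and shapes are my own; the actual API is defined in the linked design doc):
```python
import torch

# Three GEMMs share one stacked 2d activation; `offs` marks where each group ends.
K, N = 64, 32
group_rows = [16, 48, 32]
a = torch.randn(sum(group_rows), K)        # 2d jagged operand (rows of all groups stacked)
b = torch.randn(len(group_rows), K, N)     # 3d operand: one K x N matrix per group
offs = torch.tensor(group_rows).cumsum(0)  # tensor([16, 64, 96]), given as a device tensor

# Reference computation of what a grouped GEMM produces for this layout.
starts = [0] + offs[:-1].tolist()
outs = [a[s:e] @ b[i] for i, (s, e) in enumerate(zip(starts, offs.tolist()))]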
cc @vkuzo @drisspg @lw
| true
|
2,896,117,881
|
refactor delayed compile to use code context
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148530
* #148509
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,896,093,958
|
Fix clang-tidy bugprone* warnings
|
cyyever
|
open
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"open source",
"release notes: quantization",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan
| true
|
2,896,083,375
|
macos15 M4 can not install torch-2.6.0-cp310-none-macosx_11_0_arm64.whl
|
jasperchen01
|
closed
|
[
"module: binaries",
"triaged",
"module: macos"
] | 2
|
NONE
|
pip install torch-2.6.0-cp310-none-macosx_11_0_arm64.whl.
ERROR: torch-2.5.1-cp310-none-macosx_11_0_arm64.whl is not a supported wheel on this platform.
I also tried torch-2.6.0-cp312-none-macosx_11_0_arm64.whl, torch-2.6.0-cp313-none-macosx_11_0_arm64.whl. They all have the same issue.
cc @seemethere @malfet @osalpekar @atalman @albanD
| true
|
2,896,081,565
|
[FlexAttention] Error using create_block_mask with mask head number greater than 1
|
ChenlongDeng
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2
|
NONE
|
### 🐛 Describe the bug
I have encountered an error when using the flex_attention function in combination with a block mask. Specifically, the error occurs when the mask created by `create_block_mask` is configured with a head number greater than 1. If the mask head number is set to 1, the code runs without any issues.
To reproduce this problem, I have provided the code snippet below. This code is self-contained and should directly reproduce the error I am experiencing.
```python
from torch.nn.attention.flex_attention import create_block_mask, flex_attention
import torch
flex_attention = torch.compile(flex_attention, dynamic=False)
q = torch.randn(1, 32, 8, 128, dtype=torch.bfloat16, device="cuda:0")
k = torch.randn(1, 32, 8, 128, dtype=torch.bfloat16, device="cuda:0")
v = torch.randn(1, 32, 8, 128, dtype=torch.bfloat16, device="cuda:0")
def easy_head_attention_mod(head_num):
    head_type = torch.tensor([False if i % head_num == 0 else True for i in range(head_num)], dtype=torch.bool, device=q.device)

    def mask_mod(b, h, q_idx, kv_idx):
        bi_mask = True & head_type[h]
        causal_mask = q_idx >= kv_idx
        return bi_mask & causal_mask

    return mask_mod
mask_mod = easy_head_attention_mod(32) # Error occurs when head_num is greater than 1, e.g., 32
# If head_num is set to 1 (e.g., mask_mod = easy_head_attention_mod(1)), the code runs without error
mask = create_block_mask(mask_mod, 1, 32, 8, 8, device=q.device, _compile=True)
# Use `enable_gqa=True` with corresponding inputs here would bring more bugs
attn_output = flex_attention(q, k, v, block_mask=mask)
print(attn_output.shape)
```
Upon running the code, the following Assertion Error is raised:
```shell
/tmp/torchinductor_root/hp/chpkqukqm77fa3dop4cafoobwxnw5allj4hh2ti37awkdk6gh34c.py:118: unknown: block: [0,3,0], thread: [62,0,0] Assertion `` failed.
/tmp/torchinductor_root/hp/chpkqukqm77fa3dop4cafoobwxnw5allj4hh2ti37awkdk6gh34c.py:118: unknown: block: [0,3,0], thread: [63,0,0] Assertion `` failed.
...
```
Thank you for your time and attention to this issue. I hope this report is helpful in identifying and resolving the problem.
### Versions
torch==2.7.0.dev20250302+cu118
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,896,066,943
|
Implement gradient for the `residuals` of `torch.linalg.lstsq`
|
Bichidian
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 9
|
CONTRIBUTOR
|
Fixes #147543.
I have written some tests in python using `gradcheck`. Please advise where I should put these tests.
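A minimal sketch of what such a `gradcheck` test might look like (my own illustration, not the PR's actual test; it assumes a tall, full-rank `A` and the `gelsd` driver so that the residuals are populated):
```python
import torch

A = torch.randn(5, 3, dtype=torch.double, requires_grad=True)
b = torch.randn(5, 2, dtype=torch.double, requires_grad=True)

def residuals_fn(A, b):
    # residuals are only computed for tall, full-rank least-squares problems
    return torch.linalg.lstsq(A, b, driver="gelsd").residuals

torch.autograd.gradcheck(residuals_fn, (A, b))
```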
| true
|
2,896,051,092
|
DISABLED test_sdpa_rewriter_11_cuda (__main__.SDPAPatternRewriterCudaTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sdpa_rewriter_11_cuda&suite=SDPAPatternRewriterCudaTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38207753776).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sdpa_rewriter_11_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 582, in _test_sdpa_rewriter_11
self._check_common(dot_prod_attention)
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 85, in _check_common
self.assertGreaterEqual(counters["inductor"]["fuse_attention"], 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1250, in assertGreaterEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 0 not greater than or equal to 1
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_fused_attention.py SDPAPatternRewriterCudaTests.test_sdpa_rewriter_11_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_fused_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,896,051,007
|
DISABLED test_graph_break_before___enter__ (__main__.ContextlibContextManagerTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_graph_break_before___enter__&suite=ContextlibContextManagerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38205104098).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_graph_break_before___enter__`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_ctx_manager.py", line 2166, in test_graph_break_before___enter__
torch.compile(fn, backend="eager", fullgraph=False)(x)
File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: InternalTorchDynamoError not raised
To execute this test, run the following from the base repo dir:
python test/dynamo/test_ctx_manager.py ContextlibContextManagerTests.test_graph_break_before___enter__
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_ctx_manager.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,896,050,969
|
DISABLED test_globals_change_in_other_file (__main__.ContextlibContextManagerTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_globals_change_in_other_file&suite=ContextlibContextManagerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38207230837).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_globals_change_in_other_file`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_ctx_manager.py", line 1897, in test_globals_change_in_other_file
res = fn(torch.ones(10))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 637, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1444, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 600, in __call__
return _compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1065, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 767, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 803, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 721, in transform
tracer.run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3315, in run
super().run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1988, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1050, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/lazy.py", line 201, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1067, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3536, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3715, in inline_call_
self.run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1988, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1050, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 544, in call_function
cm_obj.call_method(tx, "__init__", args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 861, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 935, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1067, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3536, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3715, in inline_call_
self.run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2086, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1050, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 1160, in call_function
tensor_variable = wrap_fx_proxy(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2297, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2363, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2461, in _wrap_fx_proxy
return handle_traced_output(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2663, in handle_traced_output
unimplemented(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 441, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor generator call_function <function ContextlibContextManagerTests.test_globals_change_in_other_file.<locals>.update_global_ctx at 0x7fe72f73b7f0>
from user code:
File "/var/lib/jenkins/pytorch/test/dynamo/test_ctx_manager.py", line 1889, in fn
with update_global_ctx():
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 281, in helper
return _GeneratorContextManager(func, args, kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 103, in __init__
self.gen = func(*args, **kwds)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_ctx_manager.py ContextlibContextManagerTests.test_globals_change_in_other_file
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_ctx_manager.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,959,706
|
[Intel GPU][pt2e] Enable quantized grouped convolution at XPU
|
ZhiweiYan-96
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
# Motivation&Details
This PR fixes a bug that previously blocked quantized grouped convolution. The bug was caused by the fact that grouped convolution requires setting the weight scale mask on both the group dimension and the output channel dimension. This PR fixes the wrong mask in the integration and adds grouped conv to the UT.
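A hedged illustration of the mask semantics involved (my own sketch; the exact integration code differs): oneDNN scale masks are per-dimension bitmasks over the weight layout, so for grouped weights shaped `(groups, oc/g, ic/g, kh, kw)` per-output-channel scales need bits for both dim 0 and dim 1.
```python
# For plain conv weights (oc, ic, kh, kw), per-output-channel scales vary along dim 0.
mask_plain_conv = 1 << 0                  # = 1
# For grouped conv weights (g, oc/g, ic/g, kh, kw), they vary along dims 0 and 1.
mask_grouped_conv = (1 << 0) | (1 << 1)   # = 3, matching "wei:3" in the verbose below
```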
# UT
` python test/inductor/test_mkldnn_pattern_matcher.py -k test_qconv2d_xpu`
# Runtime exemplification
```
onednn_verbose,v1,primitive,exec,gpu:0,convolution,jit:ir,forward_training,src:s8::blocked:acdb::f0 wei:s8::blocked:abcde::f0 bia:f32::blocked:a::f0 dst:f32::blocked:acdb::f0,attr-scratchpad:user attr-scales:src0:0:f32+dst:0:f32+wei:3:f32 attr-zero-points:src0:0:s32,alg:convolution_direct,g4mb1_ic128oc128_ih4oh2kh3sh1dh0ph0_iw4ow2kw3sw1dw0pw0,0.0529785
```
The verbose output shows that we successfully run into the quantized convolution, where the weight is in `abcde` format (grouped conv).
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148522
* #148423
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,957,057
|
[cutlass backend] Forward fix for less aligned gemm shapes
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148521
Differential Revision: [D70600093](https://our.internmc.facebook.com/intern/diff/D70600093/)
1. Check if config name filtering still works.
Tested; it works.
2. Do we get a C++ compile error?
Yes; potentially we need to filter them out manually.
Here we get this:
```
static_assert(threads_minor == 0 || (TileSizeK % threads_minor == 0));
```
We need to move some assertions to gemm_template.py
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,878,898
|
Add missing header for Windows dynamo builds
|
cyyever
|
closed
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 2
|
COLLABORATOR
|
Fixes #148317
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan
| true
|
2,895,857,449
|
reshape is decomposed to view with allow_copy=False, making it fail in some cases
|
laithsakka
|
open
|
[
"triaged",
"module: dynamic shapes",
"data dependent error"
] | 6
|
CONTRIBUTOR
|
I have a reshape, and it's being decomposed to
```
@register_decomposition(aten.view.default)
def view(a: TensorLikeType, *shape: ShapeType) -> TensorLikeType:
    return _reshape_view_helper(a, *shape, allow_copy=False)
```
but this call fails because we pass allow_copy=False; it would succeed if we passed allow_copy=True.
For reshape it should be True, since:
```
# torch.reshape doesn't support unpacked shapes
def reshape(a: TensorLikeType, *shape: ShapeType) -> TensorLikeType:
    return _reshape_view_helper(a, *shape, allow_copy=True)
```
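As a hedged illustration of why `allow_copy` matters (my own example, not necessarily the case hit in this report): some non-contiguous tensors cannot be reshaped as a view at all, so the helper must be allowed to copy.
```python
import torch

t = torch.arange(6).reshape(2, 3).t()  # non-contiguous (transposed) tensor
print(t.reshape(6))                    # works: silently falls back to a copy
try:
    t.view(6)                          # cannot be expressed as a view
except RuntimeError as e:
    print(e)                           # "view size is not compatible with input tensor's size and stride ..."
```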
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,895,850,306
|
Preview (Nightly) version cuda12.8 cannot find torchaudio file
|
xrfbb
|
open
|
[
"module: build",
"module: cuda",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
Preview (Nightly) version cuda12.8 cannot find torchaudio file
### Versions
Preview (Nightly) version cuda12.8 cannot find torchaudio file
cc @malfet @seemethere @ptrblck @msaroufim @eqy
| true
|
2,895,834,920
|
[PT2] Port use_triton_dot_compress to PT2 pre_grad passes
|
huxintong
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: add use_triton_dot_compress in pre_grad
Test Plan:
```
scripts/aetk/aetk -L
%run ~/fbsource/fbcode/caffe2/test/inductor/fb/test_customized_triton_kernel_passes.py
```
Reviewed By: frank-wei
Differential Revision: D68909838
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,833,919
|
[ca][aot] mark activations as maybe dynamic
|
xmfan
|
closed
|
[
"topic: not user facing",
"ciflow/inductor"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149367
* __->__ #148516
* #149642
* #149641
* #149229
CA will lift all the activations as graph inputs. Outside of CA, I don't think these marked activation tensors are ever visible as inputs to torch.compile
| true
|
2,895,815,372
|
DISABLED test_set_stance_eager_then_compile (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_set_stance_eager_then_compile&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38194939340).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 20 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_set_stance_eager_then_compile`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 1092, in test_set_stance_eager_then_compile
self.assertEqual(cnts.frame_count, 1)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 2.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_set_stance_eager_then_compile
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,815,322
|
DISABLED test_freevars_as_inputs_to_wrap_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_freevars_as_inputs_to_wrap_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38195794959).
Over the past 3 hours, it has been determined flaky in 18 workflow(s) with 36 failures and 18 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_freevars_as_inputs_to_wrap_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,776,883
|
Add sparsity
|
drisspg
|
closed
|
[
"Merged",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148513
| true
|
2,895,765,793
|
[MPS] Fix unary_kernel_strided logic
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Fixes bug introduced by https://github.com/pytorch/pytorch/pull/148350
Before this change
```
% python3 -c "import torch; x, y = torch.arange(128.0, device='mps').reshape(2, 8, 8).unbind(0); print(torch.sqrt(x[::2, ::2], out=y[::2, ::2]))"
tensor([[ 0.0000, 1.4142, 2.0000, 2.4495],
[ 80.0000, 82.0000, 84.0000, 86.0000],
[ 96.0000, 98.0000, 100.0000, 102.0000],
[112.0000, 114.0000, 116.0000, 118.0000]], device='mps:0')
```
After this change
```
% python3 -c "import torch; x, y = torch.arange(128.0, device='mps').reshape(2, 8, 8).unbind(0); print(torch.sqrt(x[::2, ::2], out=y[::2, ::2]))"
tensor([[0.0000, 1.4142, 2.0000, 2.4495],
[4.0000, 4.2426, 4.4721, 4.6904],
[5.6569, 5.8310, 6.0000, 6.1644],
[6.9282, 7.0711, 7.2111, 7.3485]], device='mps:0')
```
One cannot avoid copies merely because the input and output tensors have the same strides; one needs to make sure that they are dense in storage (a transposed tensor would be dense, but, say, selecting every other column would not be).
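A hedged sketch of what "dense in storage" means here (my own illustration, assuming non-overlapping views):
```python
import torch

base = torch.arange(64.0).reshape(8, 8)
transposed = base.t()      # permuted strides, but still covers 64 contiguous elements
strided = base[::2, ::2]   # skips rows/columns in storage -> not dense

def is_dense_in_storage(t: torch.Tensor) -> bool:
    # compare the span of storage the view touches with the number of elements it holds
    if t.numel() == 0:
        return True
    span = 1 + sum((size - 1) * stride for size, stride in zip(t.shape, t.stride()))
    return span == t.numel()

print(is_dense_in_storage(transposed))  # True
print(is_dense_in_storage(strided))     # False
```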
Added a regression test to prevent this from happening again.
Also, there is no need to check that sizes match; luckily that is checked by the structured op (and `out` for unary ops does not support broadcasting, I just checked).
Revived the needs_copy logic, though it will become irrelevant once https://github.com/pytorch/pytorch/pull/148468 lands.
| true
|
2,895,756,919
|
[MAIA] [Autocast] Enable autocast on MAIA device
|
wschin
|
closed
|
[
"triaged",
"open source",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Fixes #148510.
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| true
|
2,895,755,466
|
[MAIA][Autocast] torch.autocast doesn't work on MAIA device
|
wschin
|
closed
|
[
"triaged",
"module: amp (automated mixed precision)"
] | 2
|
COLLABORATOR
|
In our internal codebase, the following code fails because MAIA device does not have autocast support.
```py
emb = torch.rand(1, 6, 64)
with torch.autocast(device_type="maia"):
    cos = emb.cos()
```
We plan to fix this by adding an `AutocastMAIA` dispatch key and updating autocast.h/cpp to include fall-through and default kernel implementations.
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| true
|
2,895,746,463
|
Add aot_eager_then_compile stance
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148530
* __->__ #148509
Sometimes the `eager_then_compile` stance isn't enough, since some models are so close to the memory limit that running in eager will OOM because we don't get the memory reductions from activation checkpointing. This PR introduces `aot_eager_then_compile`, which avoids the expensive Inductor compile but still does aot_eager to get the benefits of memory reduction on the first invocation.
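A hedged usage sketch (my own, assuming the new stance plugs into the existing `torch.compiler.set_stance` API the same way `eager_then_compile` does):
```python
import torch

torch.compiler.set_stance("aot_eager_then_compile")

@torch.compile
def f(x):
    return x.sin() + x.cos()

x = torch.randn(8)
f(x)  # first call runs through aot_eager (cheaper, still gets memory reductions)
f(x)  # later calls use the fully compiled path
```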
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,732,915
|
Record how many parameters we're parsing within dynamo
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148508
This allows us to track how many parameters we have in compilations.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,729,524
|
[export] Fix AttrProxy slicing
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Fixes https://fb.workplace.com/groups/1028545332188949/permalink/1159599265750221/
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,895,717,514
|
Support basic TorchBind in aot_compile and aoti_compile_and_package
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 19
|
CONTRIBUTOR
|
Summary:
**Codegen**
- Skip some codegen parts for torchbind (such as arg declaration) because they are loaded in the proxy executor, so we do not need to declare torchbind args in the cpp code
- Added a helper method to get the schema of CallTorchBind HOP. The returned schema is only the schema of `obj.method()`.
**Serialization**
Add support for torchbind object in serialization
- For the CallTorchBind HOP, we need to handle it specially because of its schema. The output serialized args are in the format of `(obj, method, *args, **kwargs)`.
- it.TorchBindObject inputs are serialized to `as_custom_obj` Argument.
**Packaging**
Add the torchbind objects file and the `custom_objs_config.json` file to the generated files output of `aot_compile`.
The json file is stored in the `data/aotinductor/<model_name>` folder in the pt2 archive.
The torchbind objects are stored in the `data/constants/` folder in the pt2 archive.
The torchbind object filenames have the format `f"{CUSTOM_OBJ_FILENAME_PREFIX}{custom_obj_idx}"`, e.g. `custom_obj_0`.
CustomClassHolder objects implement their own pickle methods.
Note that this `custom_objs_config.json` file is different from the `model_constants_config.json` file produced in package_sigmoid(). The keys in `custom_objs_config` directly correspond to the arg name in extern nodes json.
The key in `model_constants_config.json` produced by `package_sigmoid` is the attribute name in the user mode code.
This is required for both internal and OSS torchbind support.
For OSS torchbind support, we also need to package torchbind_constants into the .pt2 output.
**Work Left**
We still need to add torchbind support in ProxyExecutor for inductor.aoti_load_package to work. See other diffs in the stack.
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r schema
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r aot_compile
```
Differential Revision: D69490718
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,696,311
|
Change constexpr annotation to specific initialization (test: triton_kernel_constants)
|
FindHao
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
MEMBER
|
This pull request includes changes to the `test/inductor/test_triton_kernels.py` file to update the usage of `tl.constexpr` annotations in accordance with recent changes in Triton.
https://github.com/triton-lang/triton/pull/5961 no longer allows the constexpr annotation `x: triton.language.constexpr = 42`. The suggested way is to instantiate variables as constexpr (`x = triton.language.constexpr(42)`).
This is part of fixing the CI errors in https://github.com/pytorch/pytorch/pull/147320 for the Triton pin update.
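An illustrative before/after sketch of the annotation change (my own example, not the PR's diff):
```python
import triton.language as tl

# Previously allowed, now rejected by Triton after triton-lang/triton#5961:
#     BLOCK_SIZE: tl.constexpr = 42
# Suggested replacement: instantiate the value as a constexpr object.
BLOCK_SIZE = tl.constexpr(42)
```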
Test Plan:
```
python test/inductor/test_triton_kernels.py -k triton_kernel_constants
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,680,731
|
[FSDP2] improve error msg for duplicate wraps
|
weifengpy
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
We should remind people to check for duplicate FSDP-wrapped modules, instead of showing an error around the ND device mesh:
```
class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))

model = ToyModel()
for module in model.modules():
    fully_shard(module)
fully_shard(model)
```
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,895,679,174
|
[triton] Warp specialization support in torchinductor
|
mandroid6
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 26
|
CONTRIBUTOR
|
Summary:
Currently only `num_warps` and `num_stages` are supported as kernel options for Inductor auto-tuning using `TritonTemplate`. In order to allow warp specialization, kernel options should also allow specifying `num_consumer_groups` and `num_buffers_warp_spec`.
Test Plan:
## Unit test
Added tests for `test_triton_template_warp_specialization` to verify that the generated kernel contains configs for `num_consumer_groups` and `num_buffers_warp_spec`.
## Functional Testing
Specific to flexattention.
```
import torch
from torch.nn.attention.flex_attention import flex_attention
from triton.testing import do_bench
make_tensor = lambda: torch.rand(8, 16, 8192, 128, device="cuda", dtype=torch.bfloat16)
q, k, v = make_tensor(), make_tensor(), make_tensor()
flex_compiled = torch.compile(flex_attention, fullgraph=True)
print(do_bench(lambda: flex_compiled(q, k, v, kernel_options={"num_warps": 4})))
```
triton do_bench results:
- default compile: 15.176783561706543
- with warp-spec: 9.452800750732422
## Extra notes
- generated triton kernel using `TORCH_LOGS=output_code`: P1740612877
- TTGIR for fused kernel: P1740614685
Differential Revision: D70212243
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,649,313
|
[Inductor-CPU] Disable auto-tuning for templated int8 WoQ GEMM for small M to fix perf regression
|
sanchitintel
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
COLLABORATOR
|
### Summary
Described in #148494 - this PR fixes a regression (compared to the default Inductor-CPU behavior of not using max-autotune) for templated int8 WoQ GEMM (with BF16 activation) for small M dimension by disabling auto-tuning for small `M`, so that the ATen `_weight_int8pack_mm` kernel would be used.
This matters for next-token generation in LLMs.
Turning off auto-tuning for small `M` is a workaround. Ideally, we should improve the auto-tuning infra so that the templated AVX512 GEMM for int8 WoQ is not chosen when `_weight_int8pack_mm` would be faster E2E.
### Details
During auto-tuning, the AVX512 GEMM micro-kernel is chosen for small `M`: it is faster during auto-tuning but performs worse E2E. This is expected, as during benchmarking it can exploit cache locality for its inputs while being called several times on the same inputs in a loop. The same behavior isn't observed for its ATen counterpart `_weight_int8pack_mm`, which performs worse during auto-tuning but better E2E; it, too, would have benefited from better cache locality for its inputs if it had been benchmarked for a longer period. Even so, the latency of the templated GEMM would still have been lower, even if we had benchmarked for more time.
|M | N | K | Templated GEMM latency during autotuning benchmarking | Templated GEMM latency E2E | `_weight_int8pack_mm` latency during autotuning benchmarking | `_weight_int8pack_mm` latency E2E | Ratio of E2E latency of templated GEMM over `_weight_int8pack_mm`|
|---|---|---|-------------------|----------------|----------------------|----------------|-----|
| 1|4096|4096|31.2 us |91.1 us |108.7 us | 76.07 us|1.19 |
|1|1024|4096| 16.1 us | 33.36 us | 52.9 us | 24.275 us |1.37 |
|1|14336|4096| 112.8 us | 274.16 us |335.3 us |233.197 us| 1.17|
|1|4096|14336|128.1 us | 280.76 us| 330 us | 237.797 us| 1.18|
|1|4096|128256|1.642 ms|2.16 ms | 2.118ms|2.034 ms| 1.06 |
### UTs
`python test/inductor/test_cpu_select_algorithm.py -v -k test_int8_woq_mm`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,648,956
|
[codemod] Remove unused-variable in caffe2/torch/csrc/distributed/c10d/cuda/AsyncMM.cu
|
r-barnes
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary:
LLVM-15 has a warning `-Wunused-variable` which we treat as an error because it's so often diagnostic of a code issue. Unused variables can compromise readability or, worse, performance.
This diff either (a) removes an unused variable and, possibly, its associated code or (b) qualifies the variable with `[[maybe_unused]]`.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Reviewed By: dtolnay
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,895,638,991
|
[Dynamo] Replace unimplemented with unimplemented_v2 for variables/distributed
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,617,690
|
Fix only logging ir_post_fusion with torch_compile_debug enabled
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148499
Because we were invoking the logs through `V.debug`, they were not emitted unless TORCH_COMPILE_DEBUG was set. This is because there is some magic in the debug [getattr](https://github.com/pytorch/pytorch/blob/d789c22712a1e7761fe77b19093f0a43caaaf0f3/torch/_inductor/debug.py#L468-L480).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,609,079
|
`distributed.checkpoint.async_save` leading to `TypedStorage is deprecated.`
|
jamesbraza
|
closed
|
[
"oncall: distributed checkpointing"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The below code using `torch.distributed.checkpoint.async_save` with `torch==2.5.1` emits a warning:
```python
from pathlib import Path
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch import nn
model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)
)
dist.init_process_group(
    backend="gloo", world_size=1, rank=0, init_method="tcp://localhost:10998"
)
future = dcp.async_save(
    {"model": model.state_dict()},
    checkpoint_id=Path("checkpoint_dir"),
    process_group=dist.new_group(backend="gloo"),
)
future.result()
```
```none
/path/to/.venv/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py:116: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
if tensor.storage().size() != tensor.numel():
```
Can we update `async_save` so its usage doesn't emit a `TypedStorage is deprecated` warning?
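A hedged illustration of the replacement the warning itself suggests (my own sketch, not the actual fix in `filesystem.py`; note that `untyped_storage().size()` is in bytes):
```python
import torch

tensor = torch.randn(4, 4)

# Deprecated pattern that triggers the warning:
#     tensor.storage().size() != tensor.numel()
# Equivalent check via the untyped storage (byte-sized, hence element_size()):
needs_narrowing = tensor.untyped_storage().size() // tensor.element_size() != tensor.numel()
print(needs_narrowing)
```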
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.6
Libc version: N/A
Python version: 3.12.8 (main, Jan 28 2025, 10:06:03) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-15.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] Could not collect
cc @LucasLLC @pradeepfn
| true
|
2,895,592,351
|
Implement fast access to individual elements of jagged nested tensors
|
fleonce
|
open
|
[
"triaged",
"open source",
"topic: performance",
"release notes: nested tensor"
] | 6
|
CONTRIBUTOR
|
I removed the dependency on `tensor.unbind()` discussed in #148379 and replaced it with basic indexing ops on the values tensor based on the inputs.
Feedback would be greatly appreciated; I am not sure I got the part with the lengths right. I wasn't able to find much documentation on jagged tensors, so I hope I understood `NestedTensor._lengths` correctly.
Fixes #148379
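For context, a hedged sketch of the offsets/lengths indexing idea for jagged layouts (my own illustration, not this PR's implementation):
```python
import torch

values = torch.arange(10.)             # flattened storage of all components
offsets = torch.tensor([0, 3, 7, 10])  # component i starts at offsets[i]
lengths = offsets.diff()               # when _lengths is None, lengths follow from offsets

def get_component(i: int) -> torch.Tensor:
    start = offsets[i]
    return values[start : start + lengths[i]]

print(get_component(1))  # tensor([3., 4., 5., 6.])
```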
| true
|
2,895,569,280
|
[ROCm] fix CK compile for gfx1200
|
alugorey
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
gfx1200 causes the CK-based GEMM to fail to compile because CK is choosing an incorrect FP8 interpretation. CK assumes FP8 interpretation is static and chosen prior to compilation. This PR is a work-around that makes the selection dynamic during hipclang compilation passes.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,895,550,299
|
UNSTABLE trunk / libtorch-linux-focal-cuda12.4-py3.10-gcc9-debug / build
|
malfet
|
closed
|
[
"module: ci",
"triaged",
"module: regression",
"unstable"
] | 12
|
CONTRIBUTOR
|
See https://hud.pytorch.org/hud/pytorch/pytorch/c677f3251f46b4bffdaa7758fb7102d665b6f11b/1?per_page=50&name_filter=%20libtorch-linux-focal-cuda12.4-py3.10-gcc9-debug%20%2F%20build but revert did not help
Currently failing with errors similar to:
```
/usr/bin/ld: /var/lib/jenkins/cpp-build/caffe2/build/lib/libtorch_cuda.so: undefined reference to `std::__throw_bad_array_new_length()'
```
cc @seemethere @pytorch/pytorch-dev-infra
| true
|
2,895,543,976
|
[Inductor-CPU] Templated int8 WoQ GEMMs (with BF16 activation) may cause regressions for next-token generation of LLMs
|
sanchitintel
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 9
|
COLLABORATOR
|
### 🐛 Describe the bug
Inductor-CPU templated int8 WoQ (with BF16 activation) GEMMs for next-token generation (with small `M` dimension) are faster than their ATen counterparts during auto-tuning, so they're chosen at compile time, but they might cause a regression when a model is run end-to-end. (A digression: during auto-tuning, templated GEMMs are only benchmarked against their ATen counterpart, while the templated GEMM that runs E2E also has some epilogue fusions).
The root-cause for this behavior is unknown at this point.
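For context, the two configurations being compared can be reproduced by toggling max-autotune at compile time; a minimal sketch where a plain BF16 `Linear` stands in for the WoQ int8 GEMM (the real path goes through `_weight_int8pack_mm` or the templated kernel, as described below):
```python
import torch

def make_model():
    # Stand-in for the quantized model; the point is only how the two
    # compile modes differ in kernel selection.
    return torch.nn.Linear(4096, 4096, bias=False).to(torch.bfloat16).eval()

x = torch.randn(1, 4096, dtype=torch.bfloat16)  # small M ~ next-token decode

baseline = torch.compile(make_model())                        # max-autotune off
autotuned = torch.compile(make_model(), mode="max-autotune")  # templated GEMMs eligible

with torch.no_grad():
    baseline(x)
    autotuned(x)
```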
### Solution to fix regression (compared to Inductor-CPU max-autotune disabled)
Currently, an AVX512 GEMM micro-kernel is being used for small `M` & an AMX ISA micro-kernel is being used for large `M` dimension.
We should disable the AVX512 GEMM micro-kernel when AMX ISA is available, so that:
1. For small M, `_weight_int8pack_mm` would be chosen during auto-tuning -> no regression for next-token latency E2E.
2. For large M, templated GEMM kernel with AMX micro-kernel would be chosen -> lower first token latency E2E
PR: #148502
### Solution to improve end-to-end templated int8 WoQ GEMM performance over Inductor-CPU for small M
?
### Versions
Main branch
cc @chauhang @penguinwu
| true
|
2,895,541,696
|
[cutlass backend][BE] Fix two small things in cutlass backend standalone debugger
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148493
Differential Revision: [D70583777](https://our.internmc.facebook.com/intern/diff/D70583777/)
Two really small things:
* The bits in BlockFillRandomUniform would round float to ints
* when bias exists, the order of args are C, A, B, D
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,537,919
|
[triton hash update] update the pinned triton hash
|
pytorchupdatebot
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/rocm"
] | 206
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned triton hash.
| true
|
2,895,509,352
|
[aot cache][ca] remove restriction on caching ca's aot inference graph
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 12
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148491
* #148381
But we still can't cache CA's AOT inference graph yet: the CA functional ops aren't serializable.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,496,094
|
chore: fix code descriptions in the test package
|
threewebcode
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Some parameter and function descriptions were incorrect; this PR corrects them.
Fixes #ISSUE_NUMBER
| true
|
2,895,462,715
|
Disable some SVE autovec
|
Nicoshev
|
closed
|
[
"module: cpu",
"module: third_party",
"fb-exported",
"module: arm",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: bug fixes"
] | 10
|
CONTRIBUTOR
|
Summary: autovec miscompiles on patterns of the type:
```cpp
for (const auto i : c10::irange())
```
Same issue as described in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=117001 and addressed by https://github.com/pytorch/pytorch/pull/137795 for gcc, but not clang
Test Plan:
buck2 build //caffe2/caffe2/fb/transforms:sigrid_interface
Differential Revision: D70422723
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,895,458,451
|
Suppress more warnings
|
tugsbayasgalan
|
open
|
[
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149288
* __->__ #148488
* #148485
| true
|
2,895,445,035
|
export lift_constants_pass creates ugly warning
|
tugsbayasgalan
|
open
|
[
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Outputs:
```
/data/users/tmanlaibaatar/pytorch/torch/_export/passes/lift_constants_pass.py:210: UserWarning: _param_constant777 created when tracing File "/home/tmanlaibaatar/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/transformers/models/blip/modeling_blip_text.py", line 877, in forward
outputs = self.bert(
File "/home/tmanlaibaatar/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/transformers/models/blip/modeling_blip_text.py", line 782, in forward
encoder_outputs = self.encoder(
File "/home/tmanlaibaatar/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/transformers/models/blip/modeling_blip_text.py", line 436, in forward
layer_outputs = layer_module(
File "/home/tmanlaibaatar/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/transformers/models/blip/modeling_blip_text.py", line 368, in forward
layer_output = apply_chunking_to_forward(
File "/home/tmanlaibaatar/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/transformers/models/blip/modeling_blip_text.py", line 300, in forward
hidden_states = self.dense(hidden_states)
File "/data/users/tmanlaibaatar/pytorch/torch/nn/modules/linear.py", line 125, in forward
return F.linear(input, self.weight, self.bias) is a parameter. Butit's not registered with register_parameter(). export will treat it as a constant tensor
warnings.warn(
```
I think we shouldn't emit this, since it is not very actionable for the user and it pollutes the screen.
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
2,895,401,933
|
ci: Add workflow dispatch for commit hash update
|
seemethere
|
closed
|
[
"Merged",
"topic: not user facing"
] | 4
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148486
* #148472
* #148466
Maybe this should also be split into its own workflow instead of piggybacking off of the nightly workflow?
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,895,391,658
|
Demote logger of runtime_asserts_frozen to be fired only on debug mode
|
tugsbayasgalan
|
open
|
[
"fb-exported",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149288
* #148488
* __->__ #148485
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,895,386,826
|
[BE][pytree] rename argument name in register function to match the type annotations: `*_fn -> *_func`
|
XuehaiPan
|
open
|
[
"open source",
"topic: not user facing",
"module: pytree",
"fx",
"ciflow/inductor",
"release notes: export"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148484
* #148474
This PR renames the arguments name in `register_pytree_node` from `*_fn -> *_func`. Either the new names or the old names can be passed. A `FutureWarning` will be emitted when the old argument names are passed.
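For illustration, a minimal registration sketch; the flatten/unflatten callables are passed positionally here, which is unaffected by the rename, and per this PR the keyword spellings become `flatten_func`/`unflatten_func` (the old `*_fn` names keep working but emit a `FutureWarning`):
```python
import torch.utils._pytree as pytree

class Pair:
    def __init__(self, x, y):
        self.x, self.y = x, y

# Positional form sidesteps the keyword rename entirely.
pytree.register_pytree_node(
    Pair,
    lambda p: ((p.x, p.y), None),            # flatten
    lambda children, _ctx: Pair(*children),  # unflatten
)

leaves, spec = pytree.tree_flatten(Pair(1, 2))
assert leaves == [1, 2]
```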
cc @zou3519 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,895,379,486
|
Remove warnings on non-buffer tensor constants
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148483
* #148364
Export already registers tensor constants directly in the graph and this is also true for Torchbind objects. This removes warning that pollutes the output.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D70577856](https://our.internmc.facebook.com/intern/diff/D70577856)
| true
|
2,895,365,921
|
Export shouldn't warn when registering constant tensor attribute on graph module.
|
tugsbayasgalan
|
open
|
[
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class DummyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.a = torch.ones(4, 4)
def forward(self, start):
return start + self.a
f = DummyModel()
ep = torch.export.export(f, (torch.ones(4, 4),)).module()
```
This emits:
```
/data/users/tmanlaibaatar/pytorch/torch/export/_unlift.py:81: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
getattr_node = gm.graph.get_attr(lifted_node)
/data/users/tmanlaibaatar/pytorch/torch/fx/graph.py:1794: UserWarning: Node a target a a of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
```
This is very annoying when you are working on large modules.
### Versions
Main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
2,895,357,504
|
[dynamo][guards] Fix mem leak caused by refcount increment
|
anijain2305
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148481
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,345,752
|
[dynamo][guards] Fix mem leak caused by refcount increment
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Should help [internalfb.com/sevmanager/view/491701](https://www.internalfb.com/sevmanager/view/491701)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,343,242
|
export is emitting too many not actionable warnings.
|
tugsbayasgalan
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"export-triaged",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
1. git clone https://github.com/zhxchen17/torchnative
2. python wip/flux_aoti.py
Will see:
```
# W0225 13:16:17.932000 76863 torch/fx/experimental/symbolic_shapes.py:6974] runtime_asserts_frozen but then got 24*s0 < 2147483648
# W0225 13:16:17.941000 76863 torch/fx/experimental/symbolic_shapes.py:6974] runtime_asserts_frozen but then got 9216*s0 < 2147483648
# W0225 13:16:18.042000 76863 torch/fx/experimental/symbolic_shapes.py:6974] runtime_asserts_frozen but then got 6144*s0 < 2147483648
# W0225 13:16:18.168000 76863 torch/fx/experimental/symbolic_shapes.py:6974] runtime_asserts_frozen but then got 64*s0 < 2147483648
# W0225 13:16:18.303000 76863 torch/fx/experimental/symbolic_shapes.py:6974] runtime_asserts_frozen but then got 224*s0 < 2147483648
# W0225 13:16:18.803000 76863 torch/fx/experimental/symbolic_shapes.py:6974] runtime_asserts_frozen but then got 12288*s0 < 2147483648
# W0225 13:16:25.112000 76863 torch/fx/experimental/symbolic_shapes.py:6974] runtime_asserts_frozen but then got 43008*s0 < 2147483648
# W0225 13:16:25.344000 76863 torch/fx/experimental/symbolic_shapes.py:6974] runtime_asserts_frozen but then got 30720*s0 < 2147483648
```
### Versions
Main
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
2,895,334,191
|
export dynamic shapes API throws weird error on upper bound.
|
tugsbayasgalan
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro instruction:
1. git clone https://github.com/zhxchen17/torchnative
2. Apply this patch
```
diff --git a/wip/flux_aoti.py b/wip/flux_aoti.py
index c48dc74..4218be3 100644
--- a/wip/flux_aoti.py
+++ b/wip/flux_aoti.py
@@ -103,10 +103,12 @@ with torch.inference_mode():
fmodel.cuda()
vals_fmodel = copy_tensors(fmodel_samples[0])
+ dim = torch.export.Dim("dim")
+
def create_dynamic_shape_v2(x):
col = {}
for ix, i in enumerate(x.shape):
- col[ix] = torch.export.Dim.AUTO
+ col[ix] = dim
return col
dynamic_shap_v2 = pytree.tree_map_only(
```
3. Run python wip/flux_aoti.py
This will give you error:
```
# torch._dynamo.exc.UserError: Constraints violated (height)! For more information, run with TORCH_LOGS="+dynamic".
# - Not all values of height = L['args'][1]['img'].size()[1] in the specified range satisfy the generated guard 2 <= L['args'][1]['img'].size()[1] and L['args'][1]['img'].size()[1] <= 9223372036854775807
# Suggested fixes:
# height = Dim('height', max=9223372036854775807)
```
From the error, it looks like we are running into a check that the dynamic dim is less than some max value, which should trivially hold, so the error is confusing.
### Versions
Main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
2,895,328,763
|
XPU not available until I sign into server locally
|
alexanderwebber
|
closed
|
[
"triaged",
"module: xpu"
] | 15
|
NONE
|
### 🐛 Describe the bug
If I connect to my desktop remotely through either ssh or VS Code, when I run:
```
import torch
if torch.xpu.is_available():
device = torch.device("xpu")
else:
device = torch.device("cpu")
print(f"Using device: {device}")
```
it prints cpu.
However, if I sign into my desktop locally, and then ssh or access via VS Code, it prints xpu.
I am guessing there is some initialization happening when I sign into my desktop locally that does not happen via SSH or VS Code.
Apologies if this is the wrong avenue to ask this question, but any tips on how to resolve this? I frequently access my desktop remotely and it would be very helpful to fix this issue as I need to reboot occasionally while out of town, and, as such, I cannot login locally.
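One way to narrow this down is to capture what the runtime sees in both kinds of sessions and diff the output; a minimal diagnostic sketch (the environment variables listed are examples and depend on the driver/oneAPI setup):
```python
import os
import torch

# Run once in a local session and once over SSH, then compare the output.
print("xpu available:", torch.xpu.is_available())
print("xpu device count:", torch.xpu.device_count())
for var in ("ONEAPI_ROOT", "LD_LIBRARY_PATH", "XDG_RUNTIME_DIR"):
    print(var, "=", os.environ.get(var))
# Access to /dev/dri typically requires membership in the render group.
print("groups:", os.popen("groups").read().strip())
```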
### Versions
Collecting environment information...
PyTorch version: 2.6.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900X 12-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 59%
CPU max MHz: 4951.0000
CPU min MHz: 550.0000
BogoMIPS: 7399.94
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] pytorch-triton-xpu==3.2.0
[pip3] torch==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[conda] Could not collect
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,895,315,935
|
Illegal memory access in scaled_dot_product_attention if only attn_mask requires grad
|
Aleko2286
|
open
|
[
"module: cuda",
"triaged",
"module: sdpa"
] | 1
|
NONE
|
### 🐛 Describe the bug
There is an illegal memory access in torch.nn.functional.scaled_dot_product_attention during the backward pass when using a float attention mask that requires grad while q, k and v do not require grad.
```python
import torch
q, k, v = (torch.randn((1, 1, 64, 16), device="cuda") for _ in range(3))
mask = torch.randn((1, 1, 64, 64), device="cuda", requires_grad=True)
o = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask)
o.sum().backward()
print(mask.grad)
```
It works fine on the CPU or if any of the other inputs also require grad.
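Until the fused kernel is fixed, one possible workaround (assuming the slower path is acceptable) is to force the math backend, which computes attention with plain ops and supports `attn_mask` gradients; a minimal sketch:
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q, k, v = (torch.randn((1, 1, 64, 16), device="cuda") for _ in range(3))
mask = torch.randn((1, 1, 64, 64), device="cuda", requires_grad=True)

# Restrict SDPA to the math backend, sidestepping the fused kernel that
# crashes when only attn_mask requires grad.
with sdpa_kernel(SDPBackend.MATH):
    o = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask)
o.sum().backward()
print(mask.grad.shape)
```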
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.13.4-1-default-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 570.124.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5600X 6-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 91%
CPU max MHz: 4651.0000
CPU min MHz: 550.0000
BogoMIPS: 7399.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @ptrblck @msaroufim @eqy
| true
|
2,895,299,093
|
Dynamo replaces exception by hard error in `run_node`
|
guilhermeleobas
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
While working on PR [#146500](https://github.com/pytorch/pytorch/pull/146500), I noticed that some tests assert that a PyTorch function raises an exception for certain inputs. One example is `TestTorchDeviceTypeCPU.test_broadcast_fn_ge_cpu`.
Below is a minimal reproducer of what most of the failing tests attempt to do:
```python
import torch
@torch.compile(backend='eager')
def fn(t):
t0 = torch.randn(2)
try:
t.expand_as(t0)
except RuntimeError:
return t.sin()
return t.cos()
t = torch.randn(2, 3)
y = fn(t)
print(y)
```
The call `t.expand_as(t0)` raises `RuntimeError('expand: the requested shape has too few dimensions!')`. However, the user code never gets a chance to handle this exception because it is replaced by a hard error ([`TorchRuntimeError`](https://github.com/pytorch/pytorch/blob/7fcbaff206d1626353b414f433110de2dc9d3f48/torch/_dynamo/utils.py#L3217)).
In theory, we could replace this hard error with something Dynamo can process, but doing so would incorporate the expand_as call into the computation graph:
```diff
diff --git a/torch/_dynamo/utils.py b/torch/_dynamo/utils.py
index 2f7f5b14d0b..865539fdcda 100644
--- a/torch/_dynamo/utils.py
+++ b/torch/_dynamo/utils.py
@@ -3214,7 +3214,8 @@ def get_fake_value(node, tx, allow_non_graph_fake=False):
hints=[],
)
- raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
+ from .exc import raise_observed_exception
+ raise_observed_exception(type(e), tx)
if not allow_non_graph_fake:
_ = pytree.tree_map_only(
```
```python
class GraphModule(torch.nn.Module):
def forward(self, L_t_: "f32[2, 3][3, 1]cpu"):
l_t_ = L_t_
# File: /home/guilhermeleobas/git/pytorch/test.py:6 in fn, code: t0 = torch.randn(2)
t0: "f32[2][1]cpu" = torch.randn(2)
# File: /home/guilhermeleobas/git/pytorch/test.py:8 in fn, code: t.expand_as(t0)
expand_as = l_t_.expand_as(t0); t0 = expand_as = None
# File: /home/guilhermeleobas/git/pytorch/test.py:10 in fn, code: return t.sin()
sin: "f32[2, 3][3, 1]cpu" = l_t_.sin(); l_t_ = None
return (sin,)
```
If we include this in the graph, the same error would occur at runtime, which is not ideal.
Could Dynamo handle this kind of pattern in some way, or is this something that cannot be supported?
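As a point of comparison, moving the try/except into a region Dynamo does not trace restores eager semantics for the reproducer; a minimal rewrite of the repro (a user-side workaround, not a proposal for how Dynamo should handle it):
```python
import torch

@torch.compiler.disable
def shapes_broadcastable(t, t0):
    # Runs eagerly, so the RuntimeError can be caught as in plain Python.
    try:
        t.expand_as(t0)
        return True
    except RuntimeError:
        return False

@torch.compile(backend="eager")
def fn(t):
    t0 = torch.randn(2)
    if not shapes_broadcastable(t, t0):
        return t.sin()
    return t.cos()

print(fn(torch.randn(2, 3)))
```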
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519
### Versions
main branch
| true
|
2,895,292,156
|
[BE][pytree] rename `NodeDef` member to match the type annotations: `*_fn -> *_func`
|
XuehaiPan
|
open
|
[
"open source",
"topic: not user facing",
"module: pytree",
"release notes: export"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148484
* __->__ #148474
This PR renames the member in `NodeDef` from `*_fn -> *_func`. The old names are aliased to the new names and will emit a `FutureWarning` when accessed.
cc @zou3519
| true
|
2,895,281,571
|
[dynamo][guards] Fix mem leak caused by extra refcount increment
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Should help https://www.internalfb.com/sevmanager/view/491701
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,243,089
|
ci: Add triton to update hash workflow
|
seemethere
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148486
* __->__ #148472
* #148466
Adds triton to our auto-update workflows so that PRs can be
automatically made and the triton team can follow up to fix any issues
that may arise.
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,895,240,913
|
[MPS][BE] Fix `c10::metal::sinc` implementation
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148468
* __->__ #148471
Restrict scalar implementation to `is_scalar_floating_point_v` types, but perform all internal computations in full 32-bit floats. Make complex implementation a template for `is_complex_v` types
This makes the eager kernel implementation for both real and complex types a trivial call to the template.
| true
|
2,895,231,394
|
[dynamo] Properly account for non-list instances in list comparison
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148470
As title; this patch also removes an unused `list_compare` method.
Fixes #148179.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,217,330
|
Upgrade github ubuntu-20.04 runners to ubuntu-24.04
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
The github provided ubuntu-20.04 gha runners are being deprecated (https://togithub.com/actions/runner-images/issues/11101) so upgrade workflows using them to the latest runner 24.04
They are currently doing a brownout, resulting in failures like: https://github.com/pytorch/pytorch/actions/runs/13660782115
```
[do_update_viablestrict](https://github.com/pytorch/pytorch/actions/runs/13660782115/job/38192777885)
This is a scheduled Ubuntu 20.04 brownout. Ubuntu 20.04 LTS runner will be removed on 2025-04-01. For more details, see https://github.com/actions/runner-images/issues/11101
```
Should we be using ubuntu-latest instead?
I attempted to upgrade actionlint to 1.7.7, but locally in test-infra it seems to add a lot of new checks, and on test-infra's CI I seem to have uploaded the wrong executable or something, so it failed. I'll try again later.
| true
|
2,895,162,930
|
[MPS] Introduce strides unary op
|
malfet
|
closed
|
[
"Merged",
"topic: performance",
"release notes: mps",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148468
By adding following template
```metal
template <typename T, typename F>
kernel void unary_strided(
device result_of<F, T>* output [[buffer(0)]],
constant T* input [[buffer(1)]],
constant long* sizes [[buffer(2)]],
constant long* input_strides [[buffer(3)]],
constant long* output_strides [[buffer(4)]],
constant uint& ndim,
uint index [[thread_position_in_grid]]) {
F f;
int pos[max_ndim];
pos_from_thread_index(int(index), pos, sizes, ndim);
const auto input_offs = offset_from_coord(pos, input_strides, ndim);
const auto output_offs = offset_from_coord(pos, output_strides, ndim);
output[output_offs] = f(input[input_offs]);
}
```
and instantiating it for all existing unary shaders, which eliminates the need for any intermediate copies.
No extra testing is needed, as those cases are already covered by `test_output_grad_match_corrcoef_cpu_float32` as well as `test_unary_ops_storage_offset_strided`.
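For reference, the strided path is the one taken when a unary op runs on a non-contiguous MPS tensor; a minimal sketch of the kind of case those tests cover (assuming an Apple-silicon machine with MPS available):
```python
import torch

# Transposed view with a storage offset: exercises the strided kernel rather
# than the dense one; results should still match CPU.
x = torch.randn(33, 17, device="mps")[1:].t()
assert not x.is_contiguous()
torch.testing.assert_close(x.exp().cpu(), x.cpu().exp())
```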
| true
|
2,895,152,726
|
[PGNCCL] Launch kernel on current stream & remove `record_stream` entirely
|
kwen2501
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
This PR has multiple changes to `ProcessGroupNCCL` (which unfortunately have to be atomic):
1. When async_op=False, we directly launch the collective on "current" stream instead of a trampoline stream and join back.
- Resolves #147729
- Resolves #146881
- Also saves an event sync and one pybind during the unnecessary `work.wait()` called by distributed_c10d.py.
2. Entirely remove `record_stream` and use CPU-side stashing for managing tensor lifetime against recycling.
- Resolves #147168
3. Remove tensor life management when async_op=False; only use it when async_op=True.
4. To guard against user not calling `work.wait()`, we ask watchdog to unstash tensors after detecting completion of collectives, to prevent us from holding reference to tensors forever. This is a safety net, rather than a service guarantee, see discussion [here](https://github.com/pytorch/pytorch/issues/147168#issuecomment-2660142460).
5. Profiles in async_op=False mode will look different -- collective kernels now show up on the same line as compute kernels.
Joint work with @cenzhaometa who wants to remove the event sync overhead.
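For reference, the user-visible API is unchanged; a minimal sketch of the two modes described above (assuming an already-initialized NCCL process group and a current CUDA device):
```python
import torch
import torch.distributed as dist

x = torch.ones(4, device="cuda")

# async_op=False: launched directly on the current stream; no work.wait().
dist.all_reduce(x)

# async_op=True: the tensor is stashed until wait() (or, as a safety net,
# until the watchdog observes completion of the collective).
work = dist.all_reduce(x, async_op=True)
work.wait()
```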
Cc: @ngimel @awgu @Aidyn-A @skyw @wconstab @leonardo0lyj
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,895,145,915
|
ci: Consolidate commit hash updates into a matrix
|
seemethere
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148486
* #148472
* __->__ #148466
Consolidates all of our commit hash update jobs into a single matrix to
make it easier to add more jobs later on.
Side note: How do I even test if this works?
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,895,133,064
|
[aarch64] add libcufile for cu126 and cu128
|
tinglvv
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
seeing ` File "/usr/local/lib/python3.12/site-packages/torch/__init__.py", line 411, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: libcufile.so.0: cannot open shared object file: No such file or directory` with arm cu128 nightly.
Related to https://github.com/pytorch/pytorch/pull/148137.
We need to copy the dependency for the ARM build as well.
cc @atalman @malfet @ptrblck @nWEIdia
| true
|
2,895,130,865
|
DISABLED test_capture_untracked_nonlocal_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 6
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_capture_untracked_nonlocal_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38174683777).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_capture_untracked_nonlocal_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,130,862
|
DISABLED test_set_stance_eager_then_compile_with_graph_break (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: asan, linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_set_stance_eager_then_compile_with_graph_break&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38173723107).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_set_stance_eager_then_compile_with_graph_break`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 1110, in test_set_stance_eager_then_compile_with_graph_break
self.assertEqual(cnts.frame_count, 2)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 2 but got 3.
Absolute difference: 1
Relative difference: 0.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/dynamo/test_decorators.py DecoratorTests.test_set_stance_eager_then_compile_with_graph_break
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,895,090,945
|
[c10d][PGNCCL] Fix capturability of isend and irecv
|
Aidyn-A
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
COLLABORATOR
|
This PR fixes an issue of inability to capture `isend`/`irecv` ops in `async` mode.
<details>
<summary>The repro code</summary>
```Python
import os
import torch
import torch.distributed as dist
USE_ASYNC = True
def test_func(x, rank):
if rank == 0:
x += 1
# Send the tensor to process 1
if USE_ASYNC:
a = dist.isend(tensor=x, dst=1)
else:
dist.send(tensor=x, dst=1)
else:
# Receive tensor from process 0
if USE_ASYNC:
a = dist.irecv(tensor=x, src=0)
else:
dist.recv(tensor=x, src=0)
if USE_ASYNC:
a.wait()
return x + 2
def run(rank):
torch.cuda.set_device(rank)
x = torch.ones(1, device='cuda')
with torch.cuda.stream(torch.cuda.Stream()):
for i in range(11):
x.copy_(torch.ones(1, device='cuda'))
y = test_func(x, rank)
print(f"Rank{rank} has data {y} in warmup")
torch.cuda.synchronize()
graph = torch.cuda.CUDAGraph()
x.copy_(torch.ones(1, device='cuda'))
with torch.cuda.graph(graph):
y = test_func(x, rank)
for i in range(1):
x.copy_(torch.ones(1, device='cuda'))
graph.replay()
print(f"Rank{rank} has data {y} after graph replay")
def main():
rank = int(os.environ['RANK'])
local_rank = int(os.environ['LOCAL_RANK'])
world_size = int(os.environ['WORLD_SIZE'])
dist.init_process_group('nccl', rank=rank, world_size=world_size)
run(local_rank)
if __name__ == "__main__":
main()
```
</details>
Fails with an error stating that work handle is of a NoneType:
```
[rank1]: Traceback (most recent call last):
[rank1]: File "/workspace/repro.py", line 54, in <module>
[rank1]: main()
[rank1]: File "/workspace/repro.py", line 51, in main
[rank1]: run(local_rank)
[rank1]: File "/workspace/repro.py", line 38, in run
[rank1]: y = test_func(x, rank)
[rank1]: ^^^^^^^^^^^^^^^^^^
[rank1]: File "/workspace/repro.py", line 22, in test_func
[rank1]: a.wait()
[rank1]: ^^^^^^
[rank1]: AttributeError: 'NoneType' object has no attribute 'wait'
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,895,083,602
|
meta registration for torch._scaled_mm with mxfp8
|
vkuzo
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148461
Summary:
Adds the meta registration logic for torch.compile to work with
`torch._scaled_mm` with mxfp8. Thanks to @eellison for the pointer to make inductor work with this.
Test Plan:
```
pytest test/test_matmul_cuda.py -k test_blockwise_mxfp8_compile -s
```
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,895,076,468
|
[dynamo] Memory leak
|
anijain2305
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Might be related to https://www.internalfb.com/sevmanager/view/491701
```
import torch
import logging
@torch._dynamo.disable
def break_gn(x):
return torch.sin(x)
def gn(x0, x):
return x0 * break_gn(x)
class MyMod(torch.nn.Module):
def __init__(self):
super().__init__()
@torch._dynamo.disable(recursive=False)
def forward(self, input):
input = torch.sin(input)
x = input
x = gn(input, input)
x = gn(input, x)
x = gn(input, x)
return x
torch.cuda.memory._record_memory_history(stacks="python")
mod = MyMod().cuda()
fn = torch.compile(mod, backend="eager")
x = torch.randn(10, 10).cuda()
for _ in range(400):
fn(x)
torch.cuda.memory._dump_snapshot("my_snapshot.pickle")
```
### Error logs
_No response_
### Versions
NA
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,895,053,194
|
Remove `torch.testing` from `MOD_SKIPLIST`
|
guilhermeleobas
|
open
|
[
"open source",
"Stale",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148459
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,894,963,328
|
[PP] RFC for fixing microbatch splitting for dim != 0
|
H-Huang
|
closed
|
[
"oncall: distributed",
"Stale",
"release notes: distributed (pipeline)"
] | 2
|
MEMBER
|
There are two issues with the microbatch splitting in `PipelineSchedule.step()` that only arise when we don't use the default (split on dim=0):
1) The check for a valid tensor stride will fail. We use `tensor_split()`, which creates a view of the original tensor and does not update the stride. We could make each of these microbatches `contiguous()`, which would update its stride at the cost of copying the input tensor, but I opted to just remove the check.
2) We don't have a way of splitting the `target`, the splitting for it always defaults to dim=0.
I added two of the easiest solutions, but I'm open to discussion. I'm not sure it is worth adding a new argument to `Schedule` to control how the target should be split.
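For illustration, the stride behavior in question can be seen with `tensor_split` alone, independent of the schedule code; a minimal sketch of splitting on dim=1:
```python
import torch

x = torch.randn(8, 6)
chunks = torch.tensor_split(x, 3, dim=1)

# The views keep the parent's row stride, so they are not contiguous...
print(chunks[0].stride(), chunks[0].is_contiguous())  # (6, 1) False
# ...while .contiguous() fixes the stride at the cost of a copy.
print(chunks[0].contiguous().stride())                # (2, 1)
```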
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
cc @lessw2020
| true
|
2,894,960,011
|
backport torch.library.custom_op (and improvements) to older versions of PyTorch
|
zou3519
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
This is potentially worth it for APIs that are heavily used by library authors.
cc @chauhang @penguinwu @bdhirsh
| true
|
2,894,900,448
|
[MPS] natural log metal kernel
|
Isalia20
|
closed
|
[
"open source",
"topic: performance",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
Issue https://github.com/pytorch/pytorch/issues/148219 highlighted the high dispatch times of ops that run through MPSGraph on smaller tensors. This PR rewrites log with a Metal kernel to mitigate that issue.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,894,848,700
|
Update docstring to match code.
|
jjh42
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Very tiny fix to a docstring: passing grid_size=None results in an exception.
| true
|
2,894,826,371
|
[compiled_autograd] workaround windows compilation issue
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compiled autograd"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148454
torch.compile doesn't work on windows so we can ifdef-away the problem.
I do not know what the root cause actually is. Most notably, the pytorch
windows build is fine, but some third-party projects that use pytorch headers
on windows (e.g. torchaudio) have issues.
Test Plan:
- wait for CI
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan
| true
|
2,894,793,743
|
[ONNX] Use onnxscript apis for 2.7
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
Use onnxscript apis for 2.7.
Remove reference to `torchlib_opset()` and `torchlib_opset_version()` which were removed in the onnxscript 2.7 apis. These apis were removed because torchlib in onnxscript will always stay on opset 18. Future opset version bumps will happen in pytorch core after the migration of torchlib.
| true
|
2,894,779,750
|
Enable more nightly tests on s390x
|
AlekseiNikiforovIBM
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/s390"
] | 8
|
COLLABORATOR
|
Also enable some tests which probably were accidentally disabled.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,894,758,669
|
BC-linter should ignore testing/linter/adapters/
|
rec
|
open
|
[
"module: lint",
"triaged"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
BC-linter triggers on API changes to files in `testing/linter/adapters/`, but this code isn't used externally to PyTorch, or even anywhere else inside the project. (Noted by @amjames.)
### Versions
PyTorch version: 2.7.0a0+git5e189c7
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (conda-forge gcc 12.3.0-7) 12.3.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.39
Python version: 3.9.20 | packaged by conda-forge | (main, Sep 30 2024, 17:49:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2060
GPU 1: NVIDIA GeForce RTX 2060
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3970X 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 55%
CPU max MHz: 4549.1211
CPU min MHz: 2200.0000
BogoMIPS: 7400.32
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0a0+git5e189c7
[conda] cuda-cudart 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart-static 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-static_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cupti 12.4.127 he02047a_2 conda-forge
[conda] cuda-cupti-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-libraries-dev 12.4.1 ha770c72_1 conda-forge
[conda] cuda-nvrtc 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvrtc-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx-dev 12.4.127 ha770c72_2 conda-forge
[conda] cuda-opencl 12.4.127 he02047a_1 conda-forge
[conda] cuda-opencl-dev 12.4.127 he02047a_1 conda-forge
[conda] cudnn 9.3.0.75 h93bb076_0 conda-forge
[conda] libcublas 12.4.5.8 he02047a_2 conda-forge
[conda] libcublas-dev 12.4.5.8 he02047a_2 conda-forge
[conda] libcufft 11.2.1.3 he02047a_2 conda-forge
[conda] libcufft-dev 11.2.1.3 he02047a_2 conda-forge
[conda] libcurand 10.3.5.147 he02047a_2 conda-forge
[conda] libcurand-dev 10.3.5.147 he02047a_2 conda-forge
[conda] libcusolver 11.6.1.9 he02047a_2 conda-forge
[conda] libcusolver-dev 11.6.1.9 he02047a_2 conda-forge
[conda] libcusparse 12.3.1.170 he02047a_2 conda-forge
[conda] libcusparse-dev 12.3.1.170 he02047a_2 conda-forge
[conda] libmagma 2.8.0 h0af6554_0 conda-forge
[conda] libmagma_sparse 2.8.0 h0af6554_0 conda-forge
[conda] libnvjitlink 12.4.127 he02047a_2 conda-forge
[conda] libnvjitlink-dev 12.4.127 he02047a_2 conda-forge
[conda] magma 2.8.0 h51420fd_0 conda-forge
[conda] mkl 2024.2.2 ha957f24_15 conda-forge
[conda] mkl-include 2024.2.2 ha957f24_15 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 py39h74842e3_0 conda-forge
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0a0+git5e189c7 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
| true
|
2,894,726,871
|
Add a couple config options to compiler bisector
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148450
These are commonly source of bugs/divergence (through bad interactions etc)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,894,706,134
|
[MPS][BE] Towards strided unary ops support
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148468
* #148471
* __->__ #148449
Add generic functors kernels and rewrite all existing implementations into functors
| true
|
2,894,705,986
|
[MPS] Add some useful utils
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148468
* #148449
* __->__ #148448
* #148399
* #148398
Like `is_complex_v`, `is_scalar_integral_v`, `result_of`, etc.
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,894,627,498
|
Union type raise error when running python with argument "-O" for torch script.
|
hzhangxyz
|
open
|
[
"oncall: jit"
] | 1
|
NONE
|
### 🐛 Describe the bug
As discussed in #114755, TorchScript has added support for the `X | Y` union syntax introduced in Python 3.10. However, I find that when "-O" is added to the Python command line, it sometimes fails, for example:
```python
import torch


class B(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(
        self,
        x: torch.Tensor,
        cache: list[tuple[torch.Tensor, torch.Tensor]] | None,
    ) -> tuple[torch.Tensor, list[tuple[torch.Tensor, torch.Tensor]]]:
        return x, []


class C(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.b: torch.nn.Module = B()

    @torch.jit.export
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        result, _ = self.b(x, None)
        return result


c1 = C()
c2 = torch.jit.script(c1)
```
Save the above code to a file named `test.py`. Running `python test.py` works well, but `python -O test.py` fails with the error message:
```
Traceback (most recent call last):
File "/home/hzhangxyz/Cloud/Desktop/qmb/test.py", line 30, in <module>
c2 = torch.jit.script(c1)
File "/home/hzhangxyz/Cloud/Desktop/qmb/env/lib/python3.13/site-packages/torch/jit/_script.py", line 1429, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/home/hzhangxyz/Cloud/Desktop/qmb/env/lib/python3.13/site-packages/torch/jit/_script.py", line 1147, in _script_impl
return torch.jit._recursive.create_script_module(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
obj, torch.jit._recursive.infer_methods_to_compile
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/hzhangxyz/Cloud/Desktop/qmb/env/lib/python3.13/site-packages/torch/jit/_recursive.py", line 557, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/hzhangxyz/Cloud/Desktop/qmb/env/lib/python3.13/site-packages/torch/jit/_recursive.py", line 634, in create_script_module_impl
create_methods_and_properties_from_stubs(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
concrete_type, method_stubs, property_stubs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/hzhangxyz/Cloud/Desktop/qmb/env/lib/python3.13/site-packages/torch/jit/_recursive.py", line 466, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
property_defs, property_rcbs, method_defs, method_rcbs, method_defaults
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
RuntimeError:
forward(__torch__.B self, Tensor x, (Tensor, Tensor)[] cache) -> ((Tensor, (Tensor, Tensor)[])):
Expected a value of type 'Tuple[Tensor, Tensor]' for argument '<varargs>' but instead found type 'NoneType'.
:
File "/home/hzhangxyz/Cloud/Desktop/qmb/test.py", line 25
@torch.jit.export
def forward(self, x: torch.Tensor) -> torch.Tensor:
result, _ = self.b(x, None)
~~~~~~ <--- HERE
return result
```
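As a possible workaround (untested against `-O`, so only an assumption), the annotation can be spelled with the classic `typing` aliases instead of the PEP 604 `|` syntax, which TorchScript has supported for longer:
```python
from typing import List, Optional, Tuple

import torch


class B(torch.nn.Module):
    def forward(
        self,
        x: torch.Tensor,
        cache: Optional[List[Tuple[torch.Tensor, torch.Tensor]]],
    ) -> Tuple[torch.Tensor, List[Tuple[torch.Tensor, torch.Tensor]]]:
        return x, []
```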
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20250207
Clang version: 19.1.7
CMake version: version 3.31.6
Libc version: glibc-2.41
Python version: 3.13.2 (main, Feb 5 2025, 08:05:21) [GCC 14.2.1 20250128] (64-bit runtime)
Python platform: Linux-6.6.59-1-lts-x86_64-with-glibc2.41
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 89%
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,894,587,598
|
[inductor][triton] Block ptr analysis fix assert on matched index expression
|
kundaMwiza
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 10
|
CONTRIBUTOR
|
If dynamic shapes are enabled, block analysis may create new precomputed-size replacements from the index, which can lead to an assertion failure when the matched index is compared with the original index. For example, the assertion below fails even though the expressions are equivalent (ps2 = 3 * ps0). This can be resolved either by updating the original index with the replacements, or by simply removing the replacements when the expressions are tested for equality; the latter option is implemented in this PR.
```
torch._inductor.exc.InductorError: AssertionError:
E Invalid match!
E Index: 3*ps0*((yindex//3)) + (ModularIndexing(yindex, 1, 3))
E Matched expression: ps2*((yindex//3)) + (ModularIndexing(yindex, 1, 3))
E
```
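For intuition, a minimal sympy sketch of the equivalence (this approximates Inductor's `ModularIndexing(yindex, 1, 3)` with `sympy.Mod(yindex, 3)` and is only an illustration, not the actual block-analysis code):
```python
import sympy

ps0, ps2, yindex = sympy.symbols("ps0 ps2 yindex", integer=True, positive=True)

# Original index vs. the expression matched by block analysis, where the
# precomputed size ps2 stands for 3 * ps0.
original = 3 * ps0 * sympy.floor(yindex / 3) + sympy.Mod(yindex, 3)
matched = ps2 * sympy.floor(yindex / 3) + sympy.Mod(yindex, 3)

print(original == matched)                     # False: structurally different
print(matched.subs(ps2, 3 * ps0) == original)  # True: equal once the replacement is applied
```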
This PR fixes the test below when `config.triton.use_block_ptr=True`:
```
python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesCpuTests.test_conv3d_channels_last_dynamic_shapes_cpu
```
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,894,520,151
|
Fix test failures on non-x86 Linux
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
The cpp contexts are only supported on x86 Linux.
The tests requiring them are skipped on non-Linux platforms, but not when the architecture is not x86.
In most places the check is for ARM64, which is not sufficient; a check for x86 is required instead.
Fix the test decorators and factor out a common one in test_cuda.
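A minimal sketch of the kind of guard being factored out (the helper and variable names here are assumptions, not the actual decorator in test_cuda):
```python
import platform
import sys
import unittest

# cpp contexts only work on x86 Linux, so gate on both OS and architecture.
IS_LINUX_X86 = sys.platform.startswith("linux") and platform.machine() in ("x86_64", "i686")


def requires_cpp_context(fn):
    return unittest.skipUnless(IS_LINUX_X86, "cpp contexts require x86 Linux")(fn)
```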
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,894,195,146
|
Update s390x docker image
|
AlekseiNikiforovIBM
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
New releases of ml_dtypes build successfully on s390x, so skip building the patched old release.
Unpin the grpcio version.
| true
|
2,894,156,074
|
DISABLED test_user_defined_binop (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: rocm, asan, linux, mac, macos, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_user_defined_binop&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38150168296).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_user_defined_binop`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,894,155,583
|
DISABLED test_capture_untracked_global_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 7
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_capture_untracked_global_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38153726224).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_capture_untracked_global_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,894,141,886
|
Let `CUDAExtension` find stub libs
|
oraluben
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes: https://github.com/sgl-project/sglang/issues/4060
CUDA runtime sometimes provides stub libs rather than "real" libs (e.g. https://stackoverflow.com/questions/76988911/what-should-i-link-against-the-actual-cuda-driver-library-or-the-driver-library).
Currently `CUDAExtension` does not search for them, so it may fail in some cases, e.g. https://github.com/sgl-project/sglang/issues/4060
This PR fixes it.
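For context, a minimal sketch of the manual workaround that downstream projects use today (the extension name and source path are hypothetical):
```python
import os

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

cuda_home = os.environ.get("CUDA_HOME", "/usr/local/cuda")

setup(
    name="my_ext",  # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="my_ext._C",
            sources=["csrc/my_ext.cu"],  # hypothetical source file
            libraries=["cuda"],  # link against the driver API
            # Manual workaround: point the linker at the stub libcuda.so shipped
            # with the toolkit, for build machines without the real driver library.
            library_dirs=[os.path.join(cuda_home, "lib64", "stubs")],
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```
Having `CUDAExtension` search the stubs directory itself removes the need for this boilerplate in downstream projects.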
| true
|
2,894,095,789
|
torch.distributed hangs between 2 Mac Devices
|
weimiao1324
|
open
|
[
"oncall: distributed",
"triaged",
"module: macos"
] | 2
|
NONE
|
I want to use torch.distributed across 2 Mac devices, but it hangs after starting with the torchrun command.
Here is the test code:
```python
import torch
import torch.distributed as dist
import os
import datetime


def main():
    timeout = datetime.timedelta(seconds=10)
    print("torch.distributed.is_available()", torch.distributed.is_available())
    print("os.environ['MASTER_ADDR']", os.environ['MASTER_ADDR'])
    print("os.environ['MASTER_PORT']", os.environ['MASTER_PORT'])
    print("os.environ['LOCAL_RANK']", os.environ['LOCAL_RANK'])
    print("os.environ['WORLD_SIZE']", os.environ['WORLD_SIZE'])
    print("os.environ['RANK']", os.environ['RANK'])
    dist.init_process_group(
        backend="gloo",
        init_method='env://',
        timeout=timeout
    )
    dist.barrier()
    print(f"[Rank {dist.get_rank()}] Process group initialized")
    tensor = torch.tensor([1.0, 2.0, 3.0]).to('cpu')
    if dist.get_rank() == 0:
        try:
            dist.send(tensor, dst=1)
            print(f"Process 0 sent tensor: {tensor}")
        except Exception as e:
            print(f"Error sending tensor from process 0: {e}")
    else:
        received_tensor = torch.zeros(3).to('cpu')
        try:
            dist.recv(received_tensor, src=0)
            print(f"Process 1 received tensor: {received_tensor}")
        except Exception as e:
            print(f"Error receiving tensor at process 1: {e}")
    dist.barrier()
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```
I also checked the network connectivity between the machines and there are no firewall issues.
### PyTorch Version
I built PyTorch from source.
Here is the build command:
`USE_DISTRIBUTED=1 USE_OPENMP=1 MACOSX_DEPLOYMENT_TARGET=11.0 WERROR=1 BUILD_TEST=OFF USE_PYTORCH_METAL=1 BUILD_CAFFE2_OPS=0 USE_CUDA=0 USE_MKLDNN=OFF USE_QNNPACK=OFF python setup.py bdist_wheel`
The distributed commands are:
```
GLOO_SOCKET_IFNAME=en7 GLOG_logtostderr=1 GLOG_v=3 TORCH_CPP_LOG_LEVEL=INFO TORCH_DISTRIBUTED_DEBUG=DETAIL GLOO_LOG_LEVEL=DEBUG torchrun --nnodes=2 --nproc_per_node=1 --node_rank=0 --master_addr=192.168.101.14 --master_port=29500 test_distributed.py
GLOO_SOCKET_IFNAME=en6 GLOG_logtostderr=1 GLOG_v=3 TORCH_CPP_LOG_LEVEL=INFO TORCH_DISTRIBUTED_DEBUG=DETAIL GLOO_LOG_LEVEL=DEBUG torchrun --nnodes=2 --nproc_per_node=1 --node_rank=1 --master_addr=192.168.101.14 --master_port=29500 test_distributed.py
```
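To isolate whether the hang is in the rendezvous itself or in gloo's connection setup, a minimal store-level round-trip can be run on both machines with the same environment variables (a debugging sketch; the `+ 1` port offset is an arbitrary choice to avoid torchrun's own store):
```python
import datetime
import os

import torch.distributed as dist

rank = int(os.environ["RANK"])
store = dist.TCPStore(
    host_name=os.environ["MASTER_ADDR"],
    port=int(os.environ["MASTER_PORT"]) + 1,  # avoid clashing with torchrun's store
    world_size=2,
    is_master=(rank == 0),
    timeout=datetime.timedelta(seconds=30),
)
store.set(f"hello_{rank}", "ok")
print(store.get(f"hello_{1 - rank}"))  # blocks until the other rank has written its key
```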
Console logs from the two devices:
Host device log:
```
[I304 11:29:02.304194000 debug.cpp:50] [c10d] The debug level is set to DETAIL.
W0304 11:29:02.694000 54902 site-packages/torch/distributed/elastic/multiprocessing/redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
[I304 11:29:02.768617000 TCPStore.cpp:274] [c10d - debug] The server has started on port = 29500.
[I304 11:29:02.768622000 TCPStoreLibUvBackend.cpp:1178] [c10d - debug] Uv main loop running
[I304 11:29:02.768646000 socket.cpp:779] [c10d - debug] The client socket will attempt to connect to an IPv6 address of (192.168.101.14, 29500).
[I304 11:29:02.768876000 socket.cpp:850] [c10d - trace] The client socket is attempting to connect to [::ffff:192.168.101.14]:29500.
[I304 11:29:02.769352000 socket.cpp:946] [c10d] The client socket has connected to [::ffff:192.168.101.14]:29500 on SocketImpl(fd=13, addr=[::ffff:192.168.101.14]:54688, remote=[::ffff:192.168.101.14]:29500).
[I304 11:29:02.769404000 TCPStore.cpp:319] [c10d - debug] TCP client connected to host 192.168.101.14:29500
[I304 11:29:02.769634000 TCPStoreLibUvBackend.cpp:797] [c10d - trace] validate magic:1015412686 address:[::ffff:192.168.101.14]:54688
[I304 11:29:02.769644000 TCPStoreLibUvBackend.cpp:810] [c10d - trace] ping nonce:54902 address:[::ffff:192.168.101.14]:54688
[I304 11:29:02.769825000 TCPStoreLibUvBackend.cpp:879] [c10d - trace] add key:init/ val:1 address:[::ffff:192.168.101.14]:54688
[I304 11:29:02.769954000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.14]:54688
[I304 11:29:02.770194000 TCPStoreLibUvBackend.cpp:861] [c10d - trace] get key:init/ address:[::ffff:192.168.101.14]:54688
[I304 11:29:02.780911000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.14]:54688
[I304 11:29:02.780983000 TCPStoreLibUvBackend.cpp:861] [c10d - trace] get key:init/ address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.779435000 TCPStoreLibUvBackend.cpp:797] [c10d - trace] validate magic:1015412686 address:[::ffff:192.168.101.75]:58883
[I304 11:29:03.779547000 TCPStoreLibUvBackend.cpp:810] [c10d - trace] ping nonce:47408 address:[::ffff:192.168.101.75]:58883
[I304 11:29:03.780823000 TCPStoreLibUvBackend.cpp:879] [c10d - trace] add key:init/ val:1 address:[::ffff:192.168.101.75]:58883
[I304 11:29:03.785959000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.786095000 TCPStoreLibUvBackend.cpp:861] [c10d - trace] get key:init/ address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.794502000 TCPStoreLibUvBackend.cpp:827] [c10d - trace] set key:/none/torchelastic/role_info/0 address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.794520000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:2 address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.821271000 TCPStoreLibUvBackend.cpp:827] [c10d - trace] set key:/none/torchelastic/role_info/1 address:[::ffff:192.168.101.75]:58883
[I304 11:29:03.821583000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.75]:58883
[I304 11:29:03.821810000 TCPStoreLibUvBackend.cpp:1008] [c10d - trace] multi_get key_count:2 address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.822306000 TCPStoreLibUvBackend.cpp:1039] [c10d - trace] multi_set key_count:2 address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.822447000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.822616000 TCPStoreLibUvBackend.cpp:861] [c10d - trace] get key:/none/torchelastic/assigned_ranks/0 address:[::ffff:192.168.101.14]:54688
[I304 11:29:03.823602000 TCPStoreLibUvBackend.cpp:861] [c10d - trace] get key:/none/torchelastic/assigned_ranks/1 address:[::ffff:192.168.101.75]:58883
[I304 11:29:04.085345000 debug.cpp:50] [c10d] The debug level is set to DETAIL.
[I304 11:29:04.455333000 TCPStoreLibUvBackend.cpp:797] [c10d - trace] validate magic:1015412686 address:[::ffff:192.168.101.75]:58884
[I304 11:29:04.455356000 TCPStoreLibUvBackend.cpp:810] [c10d - trace] ping nonce:47409 address:[::ffff:192.168.101.75]:58884
[I304 11:29:04.456456000 TCPStoreLibUvBackend.cpp:879] [c10d - trace] add key:init/ val:1 address:[::ffff:192.168.101.75]:58884
torch.distributed.is_available() True
os.environ['MASTER_ADDR'] 192.168.101.14
os.environ['MASTER_PORT'] 29500
os.environ['LOCAL_RANK'] 0
os.environ['WORLD_SIZE'] 2
os.environ['RANK'] 0
[I304 11:29:04.457472000 socket.cpp:779] [c10d - debug] The client socket will attempt to connect to an IPv6 address of (192.168.101.14, 29500).
[I304 11:29:04.457727000 socket.cpp:850] [c10d - trace] The client socket is attempting to connect to [::ffff:192.168.101.14]:29500.
[I304 11:29:04.458331000 socket.cpp:946] [c10d] The client socket has connected to [::ffff:192.168.101.14]:29500 on SocketImpl(fd=3, addr=[::ffff:192.168.101.14]:54690, remote=[::ffff:192.168.101.14]:29500).
[I304 11:29:04.458390000 TCPStore.cpp:319] [c10d - debug] TCP client connected to host 192.168.101.14:29500
[I304 11:29:04.458782000 TCPStoreLibUvBackend.cpp:797] [c10d - trace] validate magic:1015412686 address:[::ffff:192.168.101.14]:54690
[I304 11:29:04.458796000 TCPStoreLibUvBackend.cpp:810] [c10d - trace] ping nonce:54904 address:[::ffff:192.168.101.14]:54690
[I304 11:29:04.459197000 TCPStoreLibUvBackend.cpp:879] [c10d - trace] add key:init/ val:1 address:[::ffff:192.168.101.14]:54690
[I304 11:29:04.459567000 TCPStoreLibUvBackend.cpp:827] [c10d - trace] set key:/default_pg/0//cpu//0/rank_1 address:[::ffff:192.168.101.75]:58884
[I304 11:29:04.459585000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.75]:58884
[I304 11:29:04.459751000 TCPStoreLibUvBackend.cpp:827] [c10d - trace] set key:/default_pg/0//cpu//0/rank_0 address:[::ffff:192.168.101.14]:54690
[I304 11:29:04.459765000 TCPStoreLibUvBackend.cpp:827] [c10d - trace] set key:/default_pg/0//cpu//0/0 address:[::ffff:192.168.101.14]:54690
[I304 11:29:04.459773000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.14]:54690
[I304 11:29:04.460760000 TCPStoreLibUvBackend.cpp:861] [c10d - trace] get key:/default_pg/0//cpu//0/rank_0 address:[::ffff:192.168.101.75]:58884
[I304 11:29:04.461870000 TCPStoreLibUvBackend.cpp:827] [c10d - trace] set key:/default_pg/0//cpu//0/1 address:[::ffff:192.168.101.75]:58884
[I304 11:29:04.461924000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.75]:58884
[I304 11:29:04.462091000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.14]:54690
[I304 11:29:04.462258000 TCPStoreLibUvBackend.cpp:861] [c10d - trace] get key:/default_pg/0//cpu//0/1 address:[::ffff:192.168.101.14]:54690
[I304 11:29:04.462724000 TCPStoreLibUvBackend.cpp:941] [c10d - trace] wait key_count:1 address:[::ffff:192.168.101.75]:58884
[I304 11:29:04.463359000 TCPStoreLibUvBackend.cpp:861] [c10d - trace] get key:/default_pg/0//cpu//0/0 address:[::ffff:192.168.101.75]:58884
```
Other device log:
```
[I304 11:29:03.708396000 debug.cpp:50] [c10d] The debug level is set to DETAIL.
W0304 11:29:03.625000 47408 site-packages/torch/distributed/elastic/multiprocessing/redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
[I304 11:29:03.198850000 socket.cpp:779] [c10d - debug] The client socket will attempt to connect to an IPv6 address of (192.168.101.14, 29500).
[I304 11:29:03.199099000 socket.cpp:850] [c10d - trace] The client socket is attempting to connect to [::ffff:192.168.101.14]:29500.
[I304 11:29:03.201066000 socket.cpp:946] [c10d] The client socket has connected to [::ffff:192.168.101.14]:29500 on SocketImpl(fd=3, addr=[::ffff:192.168.101.75]:58883, remote=[::ffff:192.168.101.14]:29500).
[I304 11:29:03.201204000 TCPStore.cpp:319] [c10d - debug] TCP client connected to host 192.168.101.14:29500
[I304 11:29:03.512257000 debug.cpp:50] [c10d] The debug level is set to DETAIL.
torch.distributed.is_available() True
os.environ['MASTER_ADDR'] 192.168.101.14
os.environ['MASTER_PORT'] 29500
os.environ['LOCAL_RANK'] 0
os.environ['WORLD_SIZE'] 2
os.environ['RANK'] 1
[I304 11:29:04.875068000 socket.cpp:779] [c10d - debug] The client socket will attempt to connect to an IPv6 address of (192.168.101.14, 29500).
[I304 11:29:04.875322000 socket.cpp:850] [c10d - trace] The client socket is attempting to connect to [::ffff:192.168.101.14]:29500.
[I304 11:29:04.877228000 socket.cpp:946] [c10d] The client socket has connected to [::ffff:192.168.101.14]:29500 on SocketImpl(fd=3, addr=[::ffff:192.168.101.75]:58884, remote=[::ffff:192.168.101.14]:29500).
[I304 11:29:04.877358000 TCPStore.cpp:319] [c10d - debug] TCP client connected to host 192.168.101.14:29500
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @malfet @albanD
| true
|
2,893,996,047
|
[cudagraph_trees] RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run
|
FY-Summer
|
closed
|
[
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
### 🐛 Describe the bug
Hello, I am trying to apply torch.compile(mode='reduce-overhead') to the topk_softmax_with_capacity function, which comes from Megatron-LM (running Mixtral 8x7B training on a single machine with eight GPUs; https://github.com/NVIDIA/Megatron-LM/blob/core_r0.10.0/megatron/core/transformer/moe/moe_utils.py#L231),
and I encountered the following error:
```
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 360, in deferred_cudagraphify
[rank3] return fn(inputs)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 944, in run
[rank3] return model(new_inputs)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1842, in run
[rank3] out = self._run(new_inputs, function_id)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1973, in _run
[rank3] return self.record_function(new_inputs, function_id)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2004, in record_function
[rank3] node = CUDAGraphNode(
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 815, in __init__
[rank3] self.static_input_data_ptrs: InputList[Optional[int]] = [
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 817, in <listcomp>
[rank3] inputs[i].data_ptr()
[rank3] RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File "/megatron-lm/megatron/core/transformer/moe/moe_utils.py", line 386, in topk_softmax_with_capacity
[rank3] scores, top_indices = torch.topk(logits, k=topk, dim=1). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
I tried adding `torch.compiler.cudagraph_mark_step_begin()` at the place where topk_softmax_with_capacity is called, but I still encountered the following error:
```
[rank3] Traceback (most recent call last):
[rank3] File "/pretrain_gpt.py", line 303, in <module>
[rank3] pretrain(
[rank3] File "/megatron-lm/megatron/training/training.py", line 386, in pretrain
[rank3] iteration, num_floating_point_operations_so_far = train(
[rank3] File "/megatron-lm/megatron/training/training.py", line 1505, in train
[rank3] train_step(forward_step_func,
[rank3] File "/megatron-lm/megatron/training/training.py", line 766, in train_step
[rank3] losses_reduced = forward_backward_func(
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 467, in forward_backward_no_pipelining
[rank3] backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config)
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 366, in backward_step
[rank3] custom_backward(output_tensor[0], output_tensor_grad[0])
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 150, in custom_backward
[rank3] Variable._execution_engine.run_backward(
[rank3] RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File "/megatron-lm/megatron/core/transformer/moe/moe_utils.py", line 402, in topk_softmax_with_capacity
[rank3] tokens_per_expert = topk_map.sum(dim=0). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
The general forward and backward execution flow is shown in the attached figure (omitted).
### Error logs
**without torch.compiler.cudagraph_mark_step_begin():**
```
[rank3]I0304 17:18:43.180000 140626396522304 torch/_inductor/cudagraph_trees.py:363] [__cudagraphs] recording cudagraph tree for graph without symints
[rank3]V0304 17:18:43.452000 140626396522304 torch/_inductor/cudagraph_trees.py:402] [__cudagraphs] cudagraphify=CompilationMode.FORWARD
[rank3]V0304 17:18:43.452000 140626396522304 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.NONE
[rank3]V0304 17:18:43.452000 140626396522304 torch/_inductor/cudagraph_trees.py:2035] [__cudagraphs] Running warmup of function 0
[rank3]V0304 17:18:43.453000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=23
[rank3]V0304 17:18:43.497000 140626396522304 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:44.751000 140626396522304 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:44.751000 140626396522304 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=23
[rank3]V0304 17:18:44.751000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=27
[rank3]V0304 17:18:44.751000 140626396522304 torch/_inductor/cudagraph_trees.py:2133] [__cudagraphs] can_start_new_generation 2
[rank3]V0304 17:18:44.752000 140626396522304 torch/_inductor/cudagraph_trees.py:2136] [__cudagraphs] can_start_new_generation running_forwards_with_pending_backwards=True
[rank3]V0304 17:18:44.752000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=27
[rank3]V0304 17:18:44.752000 140626396522304 torch/_inductor/cudagraph_trees.py:2037] [__cudagraphs] Running eager of function 0 because ancestor needed to warm up
[rank3]V0304 17:18:44.752000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=27
[rank3]V0304 17:18:44.753000 140626396522304 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:44.762000 140626396522304 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:44.762000 140626396522304 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=27
[rank3]V0304 17:18:44.762000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=31
[rank3]V0304 17:18:44.762000 140626396522304 torch/_inductor/cudagraph_trees.py:2133] [__cudagraphs] can_start_new_generation 2
[rank3]V0304 17:18:44.762000 140626396522304 torch/_inductor/cudagraph_trees.py:2136] [__cudagraphs] can_start_new_generation running_forwards_with_pending_backwards=True
[rank3]V0304 17:18:44.762000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=31
[rank3]V0304 17:18:44.763000 140626396522304 torch/_inductor/cudagraph_trees.py:2037] [__cudagraphs] Running eager of function 0 because ancestor needed to warm up
[rank3]V0304 17:18:44.763000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=31
[rank3]V0304 17:18:44.764000 140626396522304 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:44.773000 140626396522304 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:44.773000 140626396522304 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=31
[rank3]V0304 17:18:44.773000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=35
[rank3]V0304 17:18:44.774000 140626396522304 torch/_inductor/cudagraph_trees.py:2133] [__cudagraphs] can_start_new_generation 2
[rank3]V0304 17:18:44.774000 140626396522304 torch/_inductor/cudagraph_trees.py:2136] [__cudagraphs] can_start_new_generation running_forwards_with_pending_backwards=True
[rank3]V0304 17:18:44.774000 140626396522304 torch/_inductor/cudagraph_trees.py:2037] [__cudagraphs] Running eager of function 0 because ancestor needed to warm up
[rank3]V0304 17:18:44.774000 140626396522304 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=35
[rank3]V0304 17:18:44.774000 140626396522304 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]I0304 17:18:46.137000 140585784559168 torch/_inductor/cudagraph_trees.py:363] [__cudagraphs] recording cudagraph tree for graph without symints
[rank3]V0304 17:18:46.137000 140585784559168 torch/_inductor/cudagraph_trees.py:402] [__cudagraphs] cudagraphify=CompilationMode.BACKWARD
[rank3]V0304 17:18:46.137000 140585784559168 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.BACKWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:46.137000 140585784559168 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=35
[rank3]V0304 17:18:46.138000 140585784559168 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=39
[rank3]V0304 17:18:46.138000 140585784559168 torch/_inductor/cudagraph_trees.py:2133] [__cudagraphs] can_start_new_generation 2
[rank3]V0304 17:18:46.138000 140585784559168 torch/_inductor/cudagraph_trees.py:2136] [__cudagraphs] can_start_new_generation running_forwards_with_pending_backwards=True
[rank3]V0304 17:18:46.138000 140585784559168 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=39
[rank3]V0304 17:18:46.138000 140585784559168 torch/_inductor/cudagraph_trees.py:2035] [__cudagraphs] Running warmup of function 1
[rank3]V0304 17:18:46.138000 140585784559168 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=39
[rank3]V0304 17:18:46.139000 140585784559168 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.BACKWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:46.146000 140585784559168 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.BACKWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:18:46.146000 140585784559168 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=39
[rank3]V0304 17:18:46.146000 140585784559168 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=0 generation=40
[rank3]V0304 17:18:46.146000 140585784559168 torch/_inductor/cudagraph_trees.py:2133] [__cudagraphs] can_start_new_generation 2
[rank3]V0304 17:18:46.146000 140585784559168 torch/_inductor/cudagraph_trees.py:2136] [__cudagraphs] can_start_new_generation running_forwards_with_pending_backwards=False
[rank3]V0304 17:18:46.146000 140585784559168 torch/_inductor/cudagraph_trees.py:2229] [__cudagraphs] dealloc_current_path_weakrefs
[rank3]V0304 17:18:46.147000 140585784559168 torch/_inductor/cudagraph_trees.py:1998] [__cudagraphs] Recording function 1 of graph recording id 0
[rank3] Traceback (most recent call last):
[rank3] File "/megatron-lm/pretrain_gpt.py", line 303, in <module>
[rank3] pretrain(
[rank3] File "/megatron-lm/megatron/training/training.py", line 386, in pretrain
[rank3] iteration, num_floating_point_operations_so_far = train(
[rank3] File "/megatron-lm/megatron/training/training.py", line 1505, in train
[rank3] train_step(forward_step_func,
[rank3] File "/megatron-lm/megatron/training/training.py", line 766, in train_step
[rank3] losses_reduced = forward_backward_func(
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 467, in forward_backward_no_pipelining
[rank3] backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config)
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 366, in backward_step
[rank3] custom_backward(output_tensor[0], output_tensor_grad[0])
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 150, in custom_backward
[rank3] Variable._execution_engine.run_backward(
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/autograd/function.py", line 306, in apply
[rank3] return user_fn(self, *args)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1861, in backward
[rank3] out = call_compiled_backward()
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1809, in call_compiled_backward
[rank3] out = call_func_at_runtime_with_args(
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 120, in call_func_at_runtime_with_args
[rank3] out = normalize_as_list(f(args))
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 600, in _fn
[rank3] return fn(*args, **kwargs)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1131, in __call__
[rank3] return self.current_callable(inputs)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 993, in run
[rank3] return compiled_fn(new_inputs)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 360, in deferred_cudagraphify
[rank3] return fn(inputs)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 944, in run
[rank3] return model(new_inputs)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1842, in run
[rank3] out = self._run(new_inputs, function_id)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1973, in _run
[rank3] return self.record_function(new_inputs, function_id)
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2004, in record_function
[rank3] node = CUDAGraphNode(
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 815, in __init__
[rank3] self.static_input_data_ptrs: InputList[Optional[int]] = [
[rank3] File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 817, in <listcomp>
[rank3] inputs[i].data_ptr()
[rank3] RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File "/megatron-lm/megatron/core/transformer/moe/moe_utils.py", line 386, in topk_softmax_with_capacity
[rank3] scores, top_indices = torch.topk(logits, k=topk, dim=1). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
**with torch.compiler.cudagraph_mark_step_begin():**
```
[rank3]I0304 17:57:42.491000 140616558118720 torch/_inductor/cudagraph_trees.py:363] [__cudagraphs] recording cudagraph tree for graph without symints
[rank3]V0304 17:57:42.670000 140616558118720 torch/_inductor/cudagraph_trees.py:402] [__cudagraphs] cudagraphify=CompilationMode.FORWARD
[rank3]V0304 17:57:42.670000 140616558118720 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.NONE
[rank3]V0304 17:57:42.670000 140616558118720 torch/_inductor/cudagraph_trees.py:2035] [__cudagraphs] Running warmup of function 0
[rank3]V0304 17:57:42.670000 140616558118720 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-1 generation=23
[rank3]V0304 17:57:42.707000 140616558118720 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:57:43.881000 140616558118720 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:57:43.881000 140616558118720 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=-1
[rank3]V0304 17:57:43.881000 140616558118720 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-2 generation=27
[rank3]V0304 17:57:43.881000 140616558118720 torch/_inductor/cudagraph_trees.py:2133] [__cudagraphs] can_start_new_generation 2
[rank3]V0304 17:57:43.881000 140616558118720 torch/_inductor/cudagraph_trees.py:2229] [__cudagraphs] dealloc_current_path_weakrefs
[rank3]V0304 17:57:43.881000 140616558118720 torch/_inductor/cudagraph_trees.py:1998] [__cudagraphs] Recording function 0 of graph recording id 0
[rank3]V0304 17:57:44.049000 140616558118720 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-2 generation=27
[rank3]V0304 17:57:44.049000 140616558118720 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.RECORDING
[rank3]V0304 17:57:44.058000 140616558118720 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.RECORDING
[rank3]V0304 17:57:44.059000 140616558118720 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=-2
[rank3]V0304 17:57:44.059000 140616558118720 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-3 generation=31
[rank3]V0304 17:57:44.059000 140616558118720 torch/_inductor/cudagraph_trees.py:2133] [__cudagraphs] can_start_new_generation 2
[rank3]V0304 17:57:44.059000 140616558118720 torch/_inductor/cudagraph_trees.py:2229] [__cudagraphs] dealloc_current_path_weakrefs
[rank3]V0304 17:57:44.059000 140616558118720 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-3 generation=31
[rank3]V0304 17:57:44.059000 140616558118720 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.EXECUTION
[rank3]V0304 17:57:44.068000 140616558118720 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.EXECUTION
[rank3]V0304 17:57:44.068000 140616558118720 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=-3
[rank3]V0304 17:57:44.068000 140616558118720 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-4 generation=35
[rank3]V0304 17:57:44.068000 140616558118720 torch/_inductor/cudagraph_trees.py:2133] [__cudagraphs] can_start_new_generation 2
[rank3]V0304 17:57:44.068000 140616558118720 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.FORWARD path_state=ExecutionState.NONE
[rank3]V0304 17:57:44.068000 140616558118720 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-4 generation=35
[rank3]V0304 17:57:44.069000 140616558118720 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.EXECUTION
[rank3]V0304 17:57:44.069000 140616558118720 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.FORWARD path_state=ExecutionState.EXECUTION
[rank3]I0304 17:57:45.336000 140575946823232 torch/_inductor/cudagraph_trees.py:363] [__cudagraphs] recording cudagraph tree for graph without symints
[rank3]V0304 17:57:45.336000 140575946823232 torch/_inductor/cudagraph_trees.py:402] [__cudagraphs] cudagraphify=CompilationMode.BACKWARD
[rank3]V0304 17:57:45.337000 140575946823232 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.BACKWARD path_state=ExecutionState.EXECUTION
[rank3]V0304 17:57:45.337000 140575946823232 torch/_inductor/cudagraph_trees.py:2266] [__cudagraphs] Checkpointing cuda caching allocator state. Number of checkpoints 1
[rank3]V0304 17:57:45.337000 140575946823232 torch/_inductor/cudagraph_trees.py:2035] [__cudagraphs] Running warmup of function 1
[rank3]V0304 17:57:45.337000 140575946823232 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-4 generation=39
[rank3]V0304 17:57:45.337000 140575946823232 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.BACKWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:57:45.344000 140575946823232 torch/_inductor/cudagraph_trees.py:1840] [__cudagraphs] CUDAGraphTreeManager run mode=CompilationMode.BACKWARD path_state=ExecutionState.WARMUP
[rank3]V0304 17:57:45.344000 140575946823232 torch/_inductor/cudagraph_trees.py:2130] [__cudagraphs] can_start_new_generation 1 current_gen=-4
[rank3]V0304 17:57:45.344000 140575946823232 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-4 generation=40
[rank3]V0304 17:57:45.344000 140575946823232 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-4 generation=40
[rank3]V0304 17:57:45.344000 140575946823232 torch/_inductor/cudagraph_trees.py:2037] [__cudagraphs] Running eager of function 1 because ancestor needed to warm up
[rank3]V0304 17:57:45.344000 140575946823232 torch/_inductor/cudagraph_trees.py:2119] [__cudagraphs] get_curr_generation mark_step_counter=-4 generation=40
[rank3]V0304 17:57:45.356000 140575946823232 torch/_inductor/cudagraph_trees.py:1850] [__cudagraphs] CUDAGraphTreeManager end run mode=CompilationMode.BACKWARD path_state=ExecutionState.WARMUP
[rank3] Traceback (most recent call last):
[rank3] File "/pretrain_gpt.py", line 303, in <module>
[rank3] pretrain(
[rank3] File "/megatron-lm/megatron/training/training.py", line 386, in pretrain
[rank3] iteration, num_floating_point_operations_so_far = train(
[rank3] File "/megatron-lm/megatron/training/training.py", line 1505, in train
[rank3] train_step(forward_step_func,
[rank3] File "/megatron-lm/megatron/training/training.py", line 766, in train_step
[rank3] losses_reduced = forward_backward_func(
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 467, in forward_backward_no_pipelining
[rank3] backward_step(input_tensor, output_tensor, output_tensor_grad, model_type, config)
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 366, in backward_step
[rank3] custom_backward(output_tensor[0], output_tensor_grad[0])
[rank3] File "/megatron-lm/megatron/core/pipeline_parallel/schedules.py", line 150, in custom_backward
[rank3] Variable._execution_engine.run_backward(
[rank3] RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File "/megatron-lm/megatron/core/transformer/moe/moe_utils.py", line 402, in topk_softmax_with_capacity
[rank3] tokens_per_expert = topk_map.sum(dim=0). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
### Versions
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.1.25012
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 15.0.0
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 6 2025, 16:10:57) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.9.1.el8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
HIP runtime version: 6.1.25012
MIOpen runtime version: 2.16.0
Is XNNPACK available: True
cc @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,893,911,264
|
'CUDA error: an illegal memory access was encountered' when using forced_align on cuda device > 0
|
FredHaa
|
closed
|
[
"module: cuda"
] | 3
|
NONE
|
### 🐛 Describe the bug
The forced_align op fails when using a GPU other than cuda:0
This reproduces the error:
```python
import torch
import torchaudio
import torchaudio.functional as F

bundle = torchaudio.pipelines.MMS_FA
SPEECH_FILE = torchaudio.utils.download_asset("tutorial-assets/Lab41-SRI-VOiCES-src-sp0307-ch127535-sg0042.wav")
waveform, _ = torchaudio.load(SPEECH_FILE)

TRANSCRIPT = "i had that curiosity beside me at this moment".split()
LABELS = bundle.get_labels(star=None)
DICTIONARY = bundle.get_dict(star=None)
tokenized_transcript = [DICTIONARY[c] for word in TRANSCRIPT for c in word]


def align(emission, tokens, device):
    emission = emission.to(device)
    targets = torch.tensor([tokens], dtype=torch.int32, device=device)
    alignments, scores = F.forced_align(emission, targets, blank=0)
    alignments, scores = alignments[0], scores[0]  # remove batch dimension for simplicity
    scores = scores.exp()  # convert back to probability
    return alignments, scores


def unflatten(list_, lengths):
    assert len(list_) == sum(lengths)
    i = 0
    ret = []
    for l in lengths:
        ret.append(list_[i : i + l])
        i += l
    return ret


for device in ["cpu", "cuda:0", "cuda:1"]:
    print(f'Running on: {device}')
    model = bundle.get_model(with_star=False).to(device)
    with torch.inference_mode():
        emission, _ = model(waveform.to(device))
    aligned_tokens, alignment_scores = align(emission, tokenized_transcript, device=device)
    token_spans = F.merge_tokens(aligned_tokens, alignment_scores)
    word_spans = unflatten(token_spans, [len(word) for word in TRANSCRIPT])
    print(word_spans)
    print()
```
When running that file the output is:
```
frederik@dev-gpu-5 ~/p/b/l/text_alignment> CUDA_LAUNCH_BLOCKING=1 uv run scripts/test_alignment.py 1 feat/scripts-refine-dataset!?
Running on: cpu
[[TokenSpan(token=2, start=32, end=33, score=0.9994410872459412)], [TokenSpan(token=15, start=35, end=37, score=0.9638277292251587), TokenSpan(token=1, start=37, end=38, score=0.9997448325157166), TokenSpan(token=13, start=41, end=42, score=0.9991759657859802)], [TokenSpan(token=7, start=44, end=45, score=0.9984301924705505), TokenSpan(token=15, start=45, end=46, score=0.9998005032539368), TokenSpan(token=1, start=47, end=48, score=0.9992087483406067), TokenSpan(token=7, start=50, end=51, score=0.9994457364082336)], [TokenSpan(token=20, start=54, end=55, score=0.9999110698699951), TokenSpan(token=6, start=58, end=60, score=0.9818181395530701), TokenSpan(token=9, start=63, end=64, score=0.9998868703842163), TokenSpan(token=2, start=65, end=66, score=0.999768078327179), TokenSpan(token=5, start=72, end=73, score=0.9999557733535767), TokenSpan(token=8, start=79, end=80, score=0.9990529417991638), TokenSpan(token=2, start=83, end=84, score=0.9997182488441467), TokenSpan(token=7, start=85, end=86, score=0.9998111128807068), TokenSpan(token=16, start=88, end=89, score=0.9998619556427002)], [TokenSpan(token=17, start=93, end=94, score=0.9998992681503296), TokenSpan(token=3, start=95, end=96, score=0.9999145269393921), TokenSpan(token=8, start=101, end=102, score=0.9998581409454346), TokenSpan(token=2, start=110, end=111, score=0.9999159574508667), TokenSpan(token=13, start=113, end=114, score=0.9992969036102295), TokenSpan(token=3, start=114, end=115, score=0.8495671153068542)], [TokenSpan(token=10, start=116, end=117, score=0.9994267225265503), TokenSpan(token=3, start=119, end=120, score=0.999803364276886)], [TokenSpan(token=1, start=124, end=125, score=0.9973921775817871), TokenSpan(token=7, start=127, end=128, score=0.9990203380584717)], [TokenSpan(token=7, start=129, end=130, score=0.999548614025116), TokenSpan(token=15, start=130, end=131, score=0.9996023774147034), TokenSpan(token=2, start=132, end=133, score=0.9998055100440979), TokenSpan(token=8, start=136, end=137, score=0.9998652935028076)], [TokenSpan(token=10, start=141, end=142, score=0.9998605251312256), TokenSpan(token=5, start=144, end=145, score=0.9999039173126221), TokenSpan(token=10, start=148, end=149, score=0.9999473094940186), TokenSpan(token=3, start=151, end=152, score=0.9996374845504761), TokenSpan(token=4, start=153, end=154, score=0.9998714923858643), TokenSpan(token=7, start=155, end=156, score=0.9997850060462952)]]
Running on: cuda:0
[[TokenSpan(token=2, start=32, end=33, score=0.9994412064552307)], [TokenSpan(token=15, start=35, end=37, score=0.9638314247131348), TokenSpan(token=1, start=37, end=38, score=0.9997448325157166), TokenSpan(token=13, start=41, end=42, score=0.9991754293441772)], [TokenSpan(token=7, start=44, end=45, score=0.9984301328659058), TokenSpan(token=15, start=45, end=46, score=0.9998005628585815), TokenSpan(token=1, start=47, end=48, score=0.9992076754570007), TokenSpan(token=7, start=50, end=51, score=0.9994456768035889)], [TokenSpan(token=20, start=54, end=55, score=0.9999111294746399), TokenSpan(token=6, start=58, end=60, score=0.9818133115768433), TokenSpan(token=9, start=63, end=64, score=0.9998869299888611), TokenSpan(token=2, start=65, end=66, score=0.9997681379318237), TokenSpan(token=5, start=72, end=73, score=0.9999558329582214), TokenSpan(token=8, start=79, end=80, score=0.9990524649620056), TokenSpan(token=2, start=83, end=84, score=0.9997183084487915), TokenSpan(token=7, start=85, end=86, score=0.999811053276062), TokenSpan(token=16, start=88, end=89, score=0.9998618960380554)], [TokenSpan(token=17, start=93, end=94, score=0.99989914894104), TokenSpan(token=3, start=95, end=96, score=0.9999145269393921), TokenSpan(token=8, start=101, end=102, score=0.9998581409454346), TokenSpan(token=2, start=110, end=111, score=0.9999158382415771), TokenSpan(token=13, start=113, end=114, score=0.9992966651916504), TokenSpan(token=3, start=114, end=115, score=0.849567174911499)], [TokenSpan(token=10, start=116, end=117, score=0.9994264841079712), TokenSpan(token=3, start=119, end=120, score=0.9998034238815308)], [TokenSpan(token=1, start=124, end=125, score=0.9973923563957214), TokenSpan(token=7, start=127, end=128, score=0.9990203380584717)], [TokenSpan(token=7, start=129, end=130, score=0.9995485544204712), TokenSpan(token=15, start=130, end=131, score=0.9996023178100586), TokenSpan(token=2, start=132, end=133, score=0.9998055696487427), TokenSpan(token=8, start=136, end=137, score=0.9998653531074524)], [TokenSpan(token=10, start=141, end=142, score=0.999860405921936), TokenSpan(token=5, start=144, end=145, score=0.9999040365219116), TokenSpan(token=10, start=148, end=149, score=0.9999473094940186), TokenSpan(token=3, start=151, end=152, score=0.9996375441551208), TokenSpan(token=4, start=153, end=154, score=0.9998713731765747), TokenSpan(token=7, start=155, end=156, score=0.9997848868370056)]]
Running on: cuda:1
Traceback (most recent call last):
File "/home/frederik/production/bifrost/libs/text_alignment/scripts/test_alignment.py", line 39, in <module>
aligned_tokens, alignment_scores = align(emission, tokenized_transcript, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frederik/production/bifrost/libs/text_alignment/scripts/test_alignment.py", line 17, in align
alignments, scores = F.forced_align(emission, targets, blank=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frederik/production/bifrost/libs/text_alignment/.venv/lib/python3.12/site-packages/torchaudio/functional/_alignment.py", line 72, in forced_align
paths, scores = torch.ops.torchaudio.forced_align(log_probs, targets, input_lengths, target_lengths, blank)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frederik/production/bifrost/libs/text_alignment/.venv/lib/python3.12/site-packages/torch/_ops.py", line 1123, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: an illegal memory access was encountered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
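A possible workaround sketch (an assumption, not verified on this setup): make the target GPU the current CUDA device before launching the op, in case the kernel implicitly assumes the default device:
```python
import torch
import torchaudio.functional as F


def align_on(device, emission, targets):
    # Run forced_align with `device` selected as the current CUDA device.
    # Only meaningful for CUDA devices such as "cuda:1".
    with torch.cuda.device(torch.device(device)):
        return F.forced_align(emission.to(device), targets.to(device), blank=0)
```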
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402P 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 68%
CPU max MHz: 2800,0000
CPU min MHz: 1500,0000
BogoMIPS: 5599,52
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @ptrblck @msaroufim @eqy
| true
|
2,893,893,383
|
[ROCm] Incorporate ROCm triton specific tuning parameters
|
jataylo
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: rocm",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 9
|
COLLABORATOR
|
Splitting https://github.com/pytorch/pytorch/pull/147315 into two PRs. This PR adds general support for the `kpack` and `waves_per_eu` Triton kernel args for the AMD backend. More detail is in the PR linked above.
A follow-up PR will update the configs used by ROCm, but that requires https://github.com/pytorch/pytorch/pull/147452 to land first.
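For illustration only (this is not code from this PR), a rough sketch of how `waves_per_eu` and `kpack` commonly appear in hand-written Triton tuning configs on ROCm; the block sizes and values below are placeholders.
```python
import triton

# Hypothetical ROCm tuning configs: on the AMD backend these kwargs are
# commonly supplied as extra meta-parameters next to the usual block sizes,
# e.g. for use with @triton.autotune(configs=rocm_configs, key=["M", "N", "K"]).
rocm_configs = [
    triton.Config(
        {"BLOCK_M": 128, "BLOCK_N": 128, "BLOCK_K": 64, "waves_per_eu": 2, "kpack": 2},
        num_warps=8,
        num_stages=2,
    ),
    triton.Config(
        {"BLOCK_M": 64, "BLOCK_N": 64, "BLOCK_K": 64, "waves_per_eu": 3, "kpack": 1},
        num_warps=4,
        num_stages=2,
    ),
]
```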
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,893,760,775
|
Expand docs for `nn.functional`, and make the wording consistent
|
olipinski
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 12
|
CONTRIBUTOR
|
Expands the docs for the loss functions, and makes the wording consistent.
Fixes #148353
| true
|
2,893,637,424
|
Do not crash when compiling quantized LORA models
|
Whadup
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 13
|
CONTRIBUTOR
|
Fixes #148072
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,893,626,920
|
ERROR: FSDP error when training a flux model with sparsity using NVIDIA TensorRT Model Optimizer
|
Vieeo
|
closed
|
[
"needs reproduction",
"oncall: distributed",
"module: fsdp"
] | 4
|
NONE
|
### 🐛 Describe the bug
I'm training a sparsified flux-dev model with Accelerate FSDP, and I think this is an FSDP problem: the model trains fine when FSDP is disabled.
When I use FSDP, I print the shapes during the forward pass:
weight.shape, mod._weight_mask.shape: torch.Size([6144, 3072]) torch.Size([6144, 3072])
but during the backward pass:
torch.Size([2360064]) torch.Size([6144, 3072])
Here the weight shape is wrong (it has been flattened), whereas without FSDP the shapes stay consistent.
This is the FSDP config used with Accelerate:
```yaml
distributed_type: FSDP
fsdp_config:
  fsdp_auto_wrap_policy: SIZE_BASED_WRAP
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_forward_prefetch: true
  fsdp_min_num_params: 1000000
  fsdp_offload_params: true
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: true
```
The error occurs when I do:
```python
flux = mto.restore(flux, sparse_ckpt)
flux = accelerator.prepare_model(flux)
```
### Errors as follows:
```
[rank1]: Traceback (most recent call last):
[rank1]: File "/data/train_flux.py", line 438, in <module>
[rank1]: main()
[rank1]: File "/data/train_flux.py", line 374, in main
[rank1]: accelerator.backward(loss)
[rank1]: File "/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/accelerate/accelerator.py", line 2196, in backward
[rank1]: loss.backward(**kwargs)
[rank1]: File "/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/_tensor.py", line 581, in backward
[rank1]: torch.autograd.backward(
[rank1]: File "/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/autograd/__init__.py", line 347, in backward
[rank1]: _engine_run_backward(
[rank1]: File "/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
[rank1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 734, in _post_backward_hook
[rank1]: handle._use_unsharded_grad_views()
[rank1]: File "/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/distributed/fsdp/_flat_param.py", line 1982, in _use_unsharded_grad_views
[rank1]: hasattr(module, param_name),
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/data/modelopt/torch/opt/dynamic.py", line 806, in __getattr__
[rank1]: return manager.get_da_cb(name)(self, value)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/data/modelopt/torch/opt/dynamic.py", line 83, in __call__
[rank1]: val = cb(self_module, val)
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/data/modelopt/torch/sparsity/module.py", line 34, in _get_weight
[rank1]: masked_weight = weight * mod._weight_mask
[rank1]: ~^~~~~~~~~~~~
[rank1]: RuntimeError: The size of tensor a (2360064) must match the size of tensor b (3072) at non-singleton dimension 1
```
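The mismatch can be reproduced in isolation with synthetic tensors of the shapes printed above (a sketch for illustration, not code from the training script):
```python
import torch

# Shapes taken from the log above; the tensors themselves are synthetic.
flat_weight = torch.randn(2360064)      # 1-D flat-parameter view seen during backward
weight_mask = torch.ones(6144, 3072)    # 2-D per-module sparsity mask

try:
    masked_weight = flat_weight * weight_mask  # cannot broadcast (2360064,) against (6144, 3072)
except RuntimeError as err:
    print(err)  # The size of tensor a (2360064) must match the size of tensor b (3072) ...
```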
### Versions
Basic version info:
python 3.12.0
pytorch 2.5.0
nvidia-modelopt 0.21.0
cuda: 12.6
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,893,574,084
|
[ROCm] Bump AOTriton to 0.9.2b
|
xinyazhang
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/rocm",
"ci-no-td"
] | 19
|
COLLABORATOR
|
Notable new features/optimizations for SDPA operators on AMD systems from AOTriton 0.9b:
* Optimized non-power-of-two head dimensions: 48, 80, 96, 160, 192, 224. Inputs with these head dimensions no longer need to be padded to a power of two.
* `is_causal=True` cases are now supported with a persistent dynamic algorithm, which requires an atomic tensor but load-balances across CTAs.
* `dropout_p > 0.0` cases now support full 64-bit offsets and use all i64x4 PRNG outputs.
* The precise AOTriton shared library version can now be identified with `readelf -p .comment libaotriton_v2.so`.
  + However, this does not guarantee that the GPU images stored under `aotriton.images` have the same version, since they can be overwritten.
* The newly added fused backward kernel is used for smaller workloads, where its lower kernel-invocation overhead pays off.
* Support for gfx1201 (RX 9070 XT). This must be enabled at runtime with `TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1` (see the sketch below).
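A rough usage sketch (shapes and the head dimension of 96 are illustrative, not taken from this PR); backend selection is automatic, and the environment variable is only needed for gfx1201 per the note above.
```python
import os
os.environ.setdefault("TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL", "1")  # only needed for gfx1201

import torch
import torch.nn.functional as F

# head_dim=96 is one of the newly supported non-power-of-two head dimensions.
q = torch.randn(2, 8, 1024, 96, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 96, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 96, device="cuda", dtype=torch.float16)

out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.1, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 96])
```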
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|