| id<br>(int64, 2.74B–3.05B) | title<br>(string, 1–255 chars) | user<br>(string, 2–26 chars) | state<br>(2 classes) | labels<br>(list, 0–24 items) | comments<br>(int64, 0–206) | author_association<br>(4 classes) | body<br>(string, 7–62.5k chars, nullable ⌀) | is_title<br>(bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,997,294,179
|
[FSDP1] print fqns when debug FlatParamHandle
|
weifengpy
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151336
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,997,291,125
|
DISABLED test_load_from_bias_head_seq_batch_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_load_from_bias_head_seq_batch_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40594293880).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_load_from_bias_head_seq_batch_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1551, in test_load_from_bias_head_seq_batch
self.run_test(bias_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 72, in inner
return autograd_not_implemented_inner(op, deferred_error, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 45, in autograd_not_implemented_inner
result = operator(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 846, in sdpa_dense_backward
grad_softmax_scores - sum_scores + grad_logsumexp.unsqueeze(-1)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 766.12 MiB is free. Process 92950 has 21.20 GiB memory in use. Of the allocated memory 6.02 GiB is allocated by PyTorch, and 14.82 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_load_from_bias_head_seq_batch_float16_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,291,124
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE2_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE2_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40585278657).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE2_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,290,830
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE_128_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE_128_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40586885149).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE_128_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,290,742
|
DISABLED test_silu_on_score_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_silu_on_score_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40594293880).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_silu_on_score_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1659, in test_silu_on_score
self.run_test(silu_score, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 72, in inner
return autograd_not_implemented_inner(op, deferred_error, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 45, in autograd_not_implemented_inner
result = operator(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 869, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.819 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 7, in forward
empty_like = torch.ops.aten.empty_like.default(sigmoid, memory_format = torch.preserve_format)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 776, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 776, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 346.12 MiB is free. Process 126192 has 21.61 GiB memory in use. Of the allocated memory 6.67 GiB is allocated by PyTorch, and 14.62 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_silu_on_score_float16_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,290,663
|
DISABLED test_mask_mod_combiners_cuda (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mask_mod_combiners_cuda&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40585278657).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mask_mod_combiners_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2060, in test_mask_mod_combiners
self.run_test_with_call(attention, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 768, in run_test_with_call
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 72, in inner
return autograd_not_implemented_inner(op, deferred_error, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 45, in autograd_not_implemented_inner
result = operator(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 880, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 212.12 MiB is free. Process 143252 has 21.74 GiB memory in use. Of the allocated memory 6.87 GiB is allocated by PyTorch, and 14.60 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_mask_mod_combiners_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,290,554
|
[invoke_subgraph][inductor] Run pre and post grad passes on invoke_subgraph
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151410
* #151409
* #150704
* #150717
* #151357
* #151256
* __->__ #151330
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,280,368
|
[c10d][fr] Add counters for FR dump and reduce its timeout to finish dump before watchdog timeout
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151329
After https://github.com/pytorch/pytorch/pull/150652, we still see some ranks missing dumps. Upon looking further, what happens is that the FR dump times out on its first attempt:
watchdog thread: notify FR dump -> wait for 1 min -> throw watchdog timeout -> notify elastic to kill process
FR dump thread: receive FR dump signal -> time out after 1 min on the first attempt -> start 2nd attempt -> get killed.
So we want to make the FR dump timeout shorter. In reality, the logs show the dump finishing within one second. Even assuming a very slow speed like 200 KB/s, the usual FR size (1 MB at most) takes around 5 seconds to dump, so 15 seconds gives roughly a 3x buffer.
We still let the watchdog sleep for 1 minute so that there is enough time for two dump attempts to time out and for the follow-up checks (such as the GIL checker) to execute.
Also, if we get stuck acquiring the GIL or hit a CUDA hang, 15 seconds should be enough to detect the hang.
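As a back-of-the-envelope check of the 15-second choice, a minimal sketch using only the numbers quoted above (the 200 KB/s figure is the pessimistic assumption from this description, not a measurement):
```py
# Rough sanity check for the 15 s FR dump timeout, using the numbers above.
dump_size_bytes = 1 * 1024 * 1024          # FR dump is ~1 MB at most
slow_throughput_bps = 200 * 1024           # pessimistic 200 KB/s write speed
expected_dump_s = dump_size_bytes / slow_throughput_bps
print(expected_dump_s)                     # ~5.12 s in the slow case
print(15 / expected_dump_s)                # ~2.9x buffer over that estimate
```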
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,997,226,111
|
Can't call torch.compile inside of a custom op
|
zou3519
|
open
|
[
"feature",
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: dynamo",
"module: pt2-dispatcher"
] | 1
|
CONTRIBUTOR
|
```py
import torch
lib = torch.library.Library("mylib", "FRAGMENT")
lib.define("foo(Tensor x) -> Tensor")
def inner(x):
return x.sin().cos()
def foo_impl(x):
return torch.compile(inner, fullgraph=True)(x)
lib.impl("foo", foo_impl, "CompositeExplicitAutograd")
@torch.compile(fullgraph=True)
def f(x):
return torch.ops.mylib.foo.default(x)
x = torch.randn(3)
f(x)
"""
File ~/dev/misc_cpu11/pt-misc_cpu11/torch/_subclasses/meta_utils.py:894, in MetaConverter.meta_tensor(self, t, shape_env, callback_, source, symbolic_context)
886 source = ConstantSource(
887 f"__meta_utils_unknown_tensor{len(self.tensor_memo)}"
888 )
890 # This indicates you set no_dispatch() before calling into this
891 # function. This is an error: we may be creating fake tensors and
892 # will perform operations on them which need fake tensor mode to
893 # be active. You will segfault if you are in a no_dispatch() block.
--> 894 assert not torch._C._dispatch_tls_local_exclude_set().has(
895 torch._C.DispatchKey.Python
896 )
897 self.arg_cnt += 1
899 # When we make as_strided calls, we end up generating a guard
900 # that the new as_strided tensor is in bounds for the old storage
901 # for the base (since as_strided calls can "bust" out of their
(...)
921 # as we allocate variables, and we do need to register guards for
922 # these cases.
TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function mylib.foo.default(*(FakeTensor(..., size=(3,)),), **{}): got AssertionError('\n\nfrom user code:\n File "<ipython-input-2-9e7ce20b02c0>", line 8, in inner\n return x.sin().cos()\n\nSet TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you\'re reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"\n')
from user code:
File "<ipython-input-2-9e7ce20b02c0>", line 17, in f
return torch.ops.mylib.foo.default(x)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
"""
```
The motivation is that we want the custom op to be backed by a torch.compile implementation.
cc @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @bdhirsh
| true
|
2,997,197,804
|
[ONNX] Implement scan
|
justinchuby
|
open
|
[
"module: onnx",
"triaged"
] | 1
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
Implement `scan` from the higher-order ops in https://github.com/pytorch/pytorch/blob/main/torch/onnx/_internal/exporter/_torchlib/ops/hop.py
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,997,189,657
|
[ROCm] replace miniconda with miniforge
|
BowenBao
|
closed
|
[
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm"
] | 2
|
COLLABORATOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
Related to: https://github.com/pytorch/pytorch/issues/148335
| true
|
2,997,074,321
|
[AOTInductor] Add interface for user managed buffer in package api.
|
muchulee8
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
https://github.com/pytorch/pytorch/pull/151141
We add an interface for user-managed buffers in the package API.
Test Plan:
Included in commit.
Reviewed By: henrylhtsang
Differential Revision: D72985440
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,997,013,412
|
Testing compatibility with new sympy
|
malfet
|
closed
|
[
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
See https://github.com/pytorch/pytorch/issues/151312
| true
|
2,996,997,755
|
Gracefully handle optree less than minimum version, part 2
|
pytorchbot
|
closed
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151257
If optree is less than the minimum version, we should pretend it doesn't
exist.
The problem right now is:
- Install optree==0.12.1
- `import torch._dynamo`
- This raises an error: "min optree version is 0.13.0"
The fix is to pretend optree doesn't exist if it is less than the min
version.
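A minimal sketch of the gating pattern being described, assuming a module-level flag; the names `_MIN_OPTREE_VERSION` and `HAS_OPTREE` are illustrative and not necessarily the identifiers used in the PR:
```py
# Hypothetical sketch: treat optree as absent when it is older than the minimum.
_MIN_OPTREE_VERSION = (0, 13, 0)

try:
    import optree

    _parsed = tuple(int(p) for p in optree.__version__.split(".")[:3])
    HAS_OPTREE = _parsed >= _MIN_OPTREE_VERSION
except ImportError:
    HAS_OPTREE = False

if not HAS_OPTREE:
    optree = None  # callers must check HAS_OPTREE before touching optree
```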
There are ways to clean up this PR more (e.g. have a single source of
truth for the version, some of the variables are redundant), but I am
trying to reduce the risk as much as possible for this to go into 2.7.
Test Plan:
I verified the above problem was fixed. Also tried some other things,
like the following, which now gives the expected behavior.
```py
>>> import torch
>>> import optree
>>> optree.__version__
'0.12.1'
>>> import torch._dynamo
>>> import torch._dynamo.polyfills.pytree
>>> import torch.utils._pytree
>>> import torch.utils._cxx_pytree
ImportError: torch.utils._cxx_pytree depends on optree, which is an optional dependency of PyTorch. To use it, please upgrade your optree package to >= 0.13.0
```
I also audited all non-test callsites of optree and torch.utils._cxx_pytree.
Follow along with me:
optree imports
- torch.utils._cxx_pytree. This is fine.
- [guarded by check] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/_dynamo/polyfills/pytree.py#L29-L31
_cxx_pytree imports
- [guarded by check] torch.utils._pytree (changed in this PR)
- [guarded by check] torch/_dynamo/polyfills/pytree.py (changed in this PR)
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/distributed/_functional_collectives.py#L17
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/distributed/tensor/_op_schema.py#L15
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/distributed/tensor/_dispatch.py#L35
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/_dynamo/variables/user_defined.py#L94
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/distributed/tensor/experimental/_func_map.py#L14
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,996,984,495
|
[CI] No workflows scheduled on PRs
|
malfet
|
open
|
[
"module: ci",
"triaged",
"module: flaky-tests",
"module: third_party"
] | 11
|
CONTRIBUTOR
|
### 🐛 Describe the bug
A few instances of PRs with no tests run on them were reported recently, for example:
- https://github.com/pytorch/pytorch/pull/146273
<img width="898" alt="Image" src="https://github.com/user-attachments/assets/7c40a29d-8901-4747-a980-2e7dfc5004ab" />
- https://github.com/pytorch/pytorch/pull/149271 (see https://github.com/pytorch/pytorch/pull/149271#issuecomment-2797687516 )
- https://github.com/pytorch/pytorch/pull/148436
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra @clee2000
| true
|
2,996,981,873
|
[inductor] Check NoneLayout in update_zero_dim_cpu_tensor
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary:
This fixes the error in https://fb.workplace.com/groups/1075192433118967/permalink/1640802133224658/
I tried really hard but couldn't come up with a test case to reproduce the issue; however, I confirmed with the OP that it has been fixed.
```
Traceback (most recent call last):
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/compile_fx.py", line 746, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/compile_fx.py", line 1343, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/compile_fx.py", line 1232, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/graph.py", line 2087, in compile_to_module
return self._compile_to_module()
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/graph.py", line 2095, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/graph.py", line 2002, in codegen
self._update_scheduler()
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/graph.py", line 1996, in _update_scheduler
self.scheduler = Scheduler(self.operations)
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/scheduler.py", line 1954, in __init__
self._init(nodes)
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/scheduler.py", line 1974, in _init
self.update_zero_dim_cpu_tensor()
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/scheduler.py", line 4433, in update_zero_dim_cpu_tensor
and buffer.get_size() == []
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/ir.py", line 3903, in get_size
return [*self.get_layout().size]
File "/dev/shm/uid-99/d2b830f6-seed-nspid4026547915_cgpid362302-ns-4026547912/torch/_inductor/ir.py", line 3914, in get_layout
raise NotImplementedError(type(self.layout).__name__)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
NotImplementedError: NoneLayout
```
Test Plan: OP said the issue is fixed
Differential Revision: D72575808
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,996,899,280
|
[DCP] Add logging for _stateful_to_state_dict(), stage_state_dict(), and synchronize_staging()
|
MeetVadakkanchery
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"release notes: hub",
"release notes: distributed (checkpoint)",
"ci-no-td",
"oncall: distributed checkpointing"
] | 18
|
CONTRIBUTOR
|
Summary: As titled.
Test Plan: CI
Differential Revision: D73040700
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @LucasLLC @pradeepfn
| true
|
2,996,880,944
|
update expected results for comptime benchmark
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
This PR https://github.com/pytorch/pytorch/pull/150594 bumped the benchmark up by ~1%, a bit under our 1.5% "regression" mark.
Modeled this PR after https://github.com/pytorch/pytorch/pull/144274
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151319
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,996,795,312
|
[dynamo] Add guard serialization for tensor matches.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 37
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151349
* #151343
* __->__ #151318
This is a proof-of-concept of how we could serialize a guard and deserialize it back from the bytes.
The main behavioral change introduced in this diff is on CheckFunctionManager:
```
check_fn_manager = CheckFunctionManager(code, output_graph, guards_serialization_mode="save")
guards_state: bytes = check_fn_manager.guards_state
```
Once `guards_serialization_mode` is set to `save`, CheckFunctionManager will return an additional `bytes` object called `guards_state`, which should contain all the information needed for deserializing guards later.
When we load back the guards state, we set `guards_serialization_mode` to `load`:
```
output_graph_state = pickle.loads(guards_state)
check_fn_manager = CheckFunctionManager(code, output_graph_state, guards_serialization_mode="load")
```
# TENSOR_MATCH
Since we have many types of guards to support, we will break the work into small diffs instead of a single diff that supports every guard.
We kick off the work with TENSOR_MATCH in this diff.
# Testing
For each type of guard we will test it like the following:
1. Use guard_filter_fn to select 1 type of guard each time.
2. Call InstructionTranslator directly on an example function to get OutputGraph and CheckFunctionManager (reference guard manager)
3. Serialize->deserialize the output graph state and re-build the guards with a new CheckFunctionManager (loaded guard manager)
4. Throw a set of example inputs to both reference and loaded guard manager to see if their behavior match.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,996,705,437
|
Fix support of MixtureSameFamily [bugfix].
|
BenZickel
|
open
|
[
"open source",
"release notes: python_frontend",
"module: python frontend"
] | 7
|
NONE
|
Fixes https://github.com/pyro-ppl/pyro/issues/3419 which is actually a `torch` bug that can be replicated by the below code:
```
from torch import rand
from torch.distributions import MixtureSameFamily, Categorical, Binomial
max_count = 20
probs = rand(10, 5)
binom_probs = rand(10, 5)
d = MixtureSameFamily(Categorical(probs=probs), Binomial(max_count, binom_probs))
d.log_prob(d.sample())
```
which results in:
```
Traceback (most recent call last):
File "test.py", line 11, in <module>
d.log_prob(d.sample())
File "pytorch\torch\distributions\mixture_same_family.py", line 168, in log_prob
self._validate_sample(x)
File "pytorch\torch\distributions\distribution.py", line 315, in _validate_sample
valid = support.check(value)
^^^^^^^^^^^^^^^^^^^^
File "pytorch\torch\distributions\constraints.py", line 307, in check
(value % 1 == 0) & (self.lower_bound <= value) & (value <= self.upper_bound)
^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The size of tensor a (10) must match the size of tensor b (5) at non-singleton dimension 1
```
### Fix explanation (only for cases where the component distribution contains parameters with batch dimensions)
- The failure is due to sample validation taking place before padding in `MixtureSameFamily.log_prob`, and hence the fix is to pad before doing sample validation.
- The fix itself does not alter the calculations at all. It only affects the sample validation process.
- The failure does not occur when the component distribution is `Normal`, because its support constraint does not depend on the distribution's batched parameters, even though the validation itself is still applied elementwise.
- I've split the `test_mixture_same_family_log_prob` test into two tests based on the `Normal` and `Binomial` distributions.
- Initially, the `Binomial` version of the test did not fail, but this was due to the component distribution having equal batch dimensions of (5, 5) so I changed it to (10, 5).
### Updated fix explanation (for all cases)
- The previous fix caused a bug in sample shape validation (which is done correctly) due to the padding taking place before the sample validation.
- The updated fix corrects the support to reflect the fact that the support of `MixtureSameFamily` is equal to the support of its component distribution with the first event dimension removed (see the shape illustration below).
- This issue was already anticipated in the [code](https://github.com/pytorch/pytorch/blob/331423e5c24170b218e743b3392acbad4480340d/torch/distributions/mixture_same_family.py#L127).
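For context on the shapes involved, here is a small illustration built from the repro above; it only shows why the raw component support cannot validate a mixture sample, and is not the patch itself:
```py
# Shape illustration for the support discussion above (uses the repro's setup).
import torch
from torch.distributions import MixtureSameFamily, Categorical, Binomial

comp = Binomial(20, torch.rand(10, 5))        # batch_shape (10, 5): 5 mixture components
mix = Categorical(probs=torch.rand(10, 5))    # mixing weights over those 5 components
d = MixtureSameFamily(mix, comp)

print(comp.batch_shape, comp.event_shape)     # torch.Size([10, 5]) torch.Size([])
print(d.batch_shape, d.event_shape)           # torch.Size([10]) torch.Size([])
# A sample of d has shape (10,), so validating it against the raw Binomial
# support (whose bounds carry the size-5 component dimension) triggers the
# broadcasting error shown in the traceback above.
```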
cc @albanD
| true
|
2,996,613,582
|
Apple Clang 17 build error
|
adamjstewart
|
closed
|
[
"high priority",
"triage review",
"module: build",
"module: tensorpipe"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When building PyTorch 2.6.0 with Apple Clang 17.0.0, I see the following build error:
```
FAILED: third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/channel/helpers.cc.o
/Users/Adam/spack/opt/spack/darwin-m2/compiler-wrapper-1.0-cdasmd2yy77m4m6wp6mdpf72p6usoqcq/libexec/spack/clang/clang++ -I/private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe -I/private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/build/third_party/tensorpipe -I/private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/third_party/libnop/include -I/private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/third_party/libuv/include -F/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.4.sdk/System/Library/Frameworks -isystem /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/protobuf/src -isystem /Users/Adam/spack/opt/spack/darwin-m2/openblas-0.3.29-2vttv3y5thdu4gnqda3rypsjgt5hfike/include -isystem /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/XNNPACK/include -isystem /Users/Adam/spack/opt/spack/darwin-m2/eigen-3.4.0-yboqnztyk6kzxv3vnadzd2hwovg2hb73/include/eigen3 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -O3 -DNDEBUG -std=gnu++14 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.4.sdk -mmacosx-version-min=15.0 -fPIC -DTORCH_USE_LIBUV -MD -MT third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/channel/helpers.cc.o -MF third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/channel/helpers.cc.o.d -o third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/channel/helpers.cc.o -c /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/tensorpipe/channel/helpers.cc
In file included from /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/tensorpipe/channel/helpers.cc:9:
In file included from /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/tensorpipe/channel/helpers.h:15:
In file included from /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/tensorpipe/common/nop.h:11:
In file included from /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/third_party/libnop/include/nop/serializer.h:35:
In file included from /private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/third_party/libnop/include/nop/base/variant.h:21:
/private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:241:30: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
241 | index_ = value_.template Construct(std::forward<Args>(args)...);
| ^
/private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:258:26: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
258 | if (!value_.template Assign(TypeTag<T>{}, index_, std::forward<U>(value))) {
| ^
/private/var/folders/jv/cgkfvslj6nq1l7cw0c8c_8gm0000gn/T/Adam/spack-stage/spack-stage-py-torch-2.6.0-qcpp7ic3nurlnspjyivxwhzbiomf7bit/spack-src/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:265:26: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
265 | if (!value_.template Assign(index_, std::forward<T>(value))) {
| ^
3 errors generated.
```
Any suggestions on how to fix this? I would report this issue to tensorpipe or libnop, but both seem abandoned and PyTorch does not allow using an externally installed version anyway.
### Versions
Can't run collect_env.py since PyTorch doesn't build, but here are some relevant things:
* PyTorch version: 2.6.0
* CUDA: N/A
* ROCM: N/A
* OS: macOS 15.4
* Clang version: 17.0.0
* CMake version: 3.31.6
* Python version: 3.13.2
Also:
* [build log](https://github.com/user-attachments/files/19759874/spack-build-out.txt)
* [build env](https://github.com/user-attachments/files/19759876/spack-build-env-mods.txt)
Happy to provide additional reproducibility instructions, but the bug should be obvious to anyone with access to Apple Clang 17.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @osalpekar @jiayisuse @lw @beauby @pritamdamania87 @mrshenli @jjlilley @gqchen
| true
|
2,996,374,987
|
Fix skipIfXpu and skipIfHpu disables tests when used on class
|
EikanWang
|
open
|
[
"oncall: distributed",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"merging"
] | 25
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151315
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,996,336,599
|
Add inductor backend to device interface; make minifier_tests more device agnostic
|
charlie-wt
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
Tried to decouple the long-standing assumption that cpu <=> C++ and cuda <=> Triton. For now, kept it relatively simple by just guarding things more specifically.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,996,334,547
|
DISABLED test_parity__foreach_add_fastpath_inplace_cuda_complex64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_add_fastpath_inplace_cuda_complex64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40567936561).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_add_fastpath_inplace_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_add_', keys=('aten::_foreach_add_', 'Unrecognized', 'aten::result_type', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1161, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1173, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.complex64], Tensor[size=(19, 19), device="cuda:0", dtype=torch.complex64], Tensor[size=(18, 18), device="cuda:0", dtype=torch.complex64], Tensor[size=(17, 17), device="cuda:0", dtype=torch.complex64], Tensor[size=(16, 16), device="cuda:0", dtype=torch.complex64], Tensor[size=(15, 15), device="cuda:0", dtype=torch.complex64], Tensor[size=(14, 14), device="cuda:0", dtype=torch.complex64], Tensor[size=(13, 13), device="cuda:0", dtype=torch.complex64], Tensor[size=(12, 12), device="cuda:0", dtype=torch.complex64], Tensor[size=(11, 11), device="cuda:0", dtype=torch.complex64], Tensor[size=(10, 10), device="cuda:0", dtype=torch.complex64], Tensor[size=(9, 9), device="cuda:0", dtype=torch.complex64], Tensor[size=(8, 8), device="cuda:0", dtype=torch.complex64], Tensor[size=(7, 7), device="cuda:0", dtype=torch.complex64], Tensor[size=(6, 6), device="cuda:0", dtype=torch.complex64], Tensor[size=(5, 5), device="cuda:0", dtype=torch.complex64], Tensor[size=(4, 4), device="cuda:0", dtype=torch.complex64], Tensor[size=(3, 3), device="cuda:0", dtype=torch.complex64], Tensor[size=(2, 2), device="cuda:0", dtype=torch.complex64], Tensor[size=(1, 1), device="cuda:0", dtype=torch.complex64]], args=(TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.complex64], Tensor[size=(19, 19), device="cuda:0", dtype=torch.complex64], Tensor[size=(18, 18), device="cuda:0", dtype=torch.complex64], Tensor[size=(17, 17), device="cuda:0", dtype=torch.complex64], Tensor[size=(16, 16), device="cuda:0", dtype=torch.complex64], Tensor[size=(15, 15), device="cuda:0", dtype=torch.complex64], Tensor[size=(14, 14), device="cuda:0", dtype=torch.complex64], Tensor[size=(13, 13), device="cuda:0", dtype=torch.complex64], Tensor[size=(12, 12), device="cuda:0", dtype=torch.complex64], Tensor[size=(11, 11), device="cuda:0", dtype=torch.complex64], Tensor[size=(10, 10), device="cuda:0", dtype=torch.complex64], Tensor[size=(9, 9), device="cuda:0", dtype=torch.complex64], Tensor[size=(8, 8), device="cuda:0", dtype=torch.complex64], Tensor[size=(7, 7), device="cuda:0", dtype=torch.complex64], Tensor[size=(6, 6), device="cuda:0", dtype=torch.complex64], Tensor[size=(5, 5), device="cuda:0", dtype=torch.complex64], Tensor[size=(4, 4), device="cuda:0", dtype=torch.complex64], Tensor[size=(3, 3), device="cuda:0", dtype=torch.complex64], Tensor[size=(2, 2), device="cuda:0", dtype=torch.complex64], Tensor[size=(1, 1), device="cuda:0", dtype=torch.complex64]]), kwargs={'alpha': '(3+3j)'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_add_fastpath_inplace_cuda_complex64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,995,975,523
|
Compatibility with SymPy 1.14.0
|
oscarbenjamin
|
closed
|
[
"triaged",
"module: third_party",
"oncall: pt2"
] | 9
|
NONE
|
Hi PyTorch people.
I have just pushed a new prerelease of SymPy 1.14.0rc1 to PyPI:
https://pypi.org/project/sympy/#history
Previously PyTorch had some issues with SymPy and mpmath prereleases, so I just want to check whether this release is going to cause any problems for PyTorch.
In particular, if I read this correctly, then when the SymPy 1.14.0 final release is pushed to PyPI, anyone doing `pip install torch` is going to get this new SymPy version:
https://github.com/pytorch/pytorch/blob/70e7b767079fcda178e11a81c4f8d8b416a107d9/setup.py#L1123
For now SymPy 1.13.3 would be installed because the release I just pushed is a prerelease (rc1) which will not be picked up by pip in a normal installation.
It would be great if someone could check whether or not SymPy 1.14.0rc1 is compatible with PyTorch and in particular whether it would be compatible with the current release torch==2.6.0:
https://pypi.org/project/torch/#history
Does PyTorch test this prerelease in CI? Is there some other way of checking this?
If any changes are needed in SymPy then it would be better to make those changes in another 1.14.0rc2 prerelease before making the final 1.14.0 release.
CC @rgommers @asmeurer
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,995,956,681
|
`repeat_interleave_cpu` is not implemented for short integers
|
ev-br
|
open
|
[
"triaged",
"actionable",
"module: python array api",
"module: python frontend"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
```
In [7]: import torch
In [8]: torch.repeat_interleave(torch.arange(5, dtype=torch.int8), 2)
Out[8]: tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4], dtype=torch.int8)
In [9]: torch.repeat_interleave(torch.arange(5, dtype=torch.int8), torch.as_tensor(2, dt
...: ype=torch.int8))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[9], line 1
----> 1 torch.repeat_interleave(torch.arange(5, dtype=torch.int8), torch.as_tensor(2, dtype=torch.int8))
RuntimeError: "repeat_interleave_cpu" not implemented for 'Char'
In [14]: torch.repeat_interleave(torch.arange(5, dtype=torch.int16), torch.as_tensor(2,
...: dtype=torch.int16))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[14], line 1
----> 1 torch.repeat_interleave(torch.arange(5, dtype=torch.int16), torch.as_tensor(2, dtype=torch.int16))
```
It seems that what matters is the dtype of the `repeats` argument and not the `input` argument.
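As a workaround sketch (not an official fix), casting only the `repeats` tensor to `int64` avoids the error while keeping the `int8` input dtype:
```python
import torch

x = torch.arange(5, dtype=torch.int8)
repeats = torch.as_tensor(2, dtype=torch.int8)

# Casting only the repeats tensor to int64 takes the implemented path;
# the output keeps the input's int8 dtype.
out = torch.repeat_interleave(x, repeats.to(torch.int64))
print(out, out.dtype)  # tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4], dtype=torch.int8)
```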
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (conda-forge gcc 13.3.0-1) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz
Stepping: 12
CPU MHz: 975.613
CPU max MHz: 3900,0000
CPU min MHz: 400,0000
BogoMIPS: 3600.00
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 6 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy_extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] numpydoc==1.8.0
[pip3] torch==2.6.0+cpu
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] numpy 2.2.3 py312h72c5963_0 conda-forge
[conda] numpydoc 1.8.0 pyhd8ed1ab_1 conda-forge
[conda] torch 2.6.0+cpu pypi_0 pypi
```
cc @mruberry @rgommers @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi @albanD
| true
|
2,995,794,797
|
cpp_extension.load 'sources' does not support Chinese paths
|
juntaosun
|
open
|
[
"needs reproduction",
"module: windows",
"module: cpp-extensions",
"triaged",
"actionable"
] | 3
|
NONE
|
### 🐛 Describe the bug
Does not support Chinese character paths
```
from torch.utils import cpp_extension
...
cpp_extension.load(
name=name,
sources=sources,
build_directory=buildpath,
extra_cflags=[
"-O3",
],
extra_cuda_cflags=[
"-O3",
"-gencode",
"arch=compute_70,code=sm_70",
"--use_fast_math",
]
+ extra_cuda_flags
+ cc_flag,
verbose=True,
)
```
If the sources path list contains Chinese characters, cpp_extension.load will fail.
https://github.com/NVIDIA/BigVGAN/tree/main/alias_free_activation/cuda
BigVGAN\alias_free_activation\cuda
build.ninja

### Versions
pip show torch
Version: 2.6.0+cu124
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @malfet @zou3519 @xmfan
| true
|
2,995,693,367
|
[inductor] type checking of `torch.linalg.inv` is not sufficient on inductor
|
shaoyuyoung
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: when the **determinant of a square matrix is zero**, the singular-matrix check is lost; `aot_eager` is the first backend that drops it.
**device backend**: both CPP and triton
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x = torch.linalg.inv(x)
return x
model = Model()
x = torch.tensor([[0., 0.],
[0., 0.]])
inputs = [x]
def run_test(model, inputs, backend):
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(output)
print(f"succeed on {backend}")
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'aot_eager')
```
### Error logs
eager
```
linalg.inv: The diagonal element 1 is zero, the inversion could not be completed because the input matrix is singular.
```
aot_eager
```
tensor([[nan, nan],
[nan, nan]])
succeed on aot_eager
```
### Versions
nightly 20250414
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,995,552,513
|
[Intel GPU][UT failure] Depthwise conv related UT failures
|
ZhiweiYan-96
|
open
|
[
"triaged",
"module: xpu"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
For UTs that call depthwise conv on the XPU backend, the following error is raised.
`NotImplementedError: The operator 'aten::_conv_depthwise2d' is not currently implemented for the XPU device`
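A hypothetical repro sketch of the kind of depthwise conv these UTs exercise (illustrative shapes; requires an XPU build of PyTorch):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16, device="xpu")
w = torch.randn(8, 1, 3, 3, device="xpu")
# groups == in_channels makes this a depthwise conv; per the report, this
# path dispatches to aten::_conv_depthwise2d and raises NotImplementedError.
y = F.conv2d(x, w, padding=1, groups=8)
```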
### Versions
Collecting environment information...
PyTorch version: 2.8.0a0+git84ac876
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 30 MiB (24 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] optree==0.15.0
[pip3] pytorch-triton-xpu==3.3.0+git83111ab2
[pip3] torchvision==0.22.0.dev20250408+xpu
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.15.0 pypi_0 pypi
[conda] pytorch-triton-xpu 3.3.0+git83111ab2 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250408+xpu pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,995,512,619
|
[Export] Remove to() from module generated from exported program
|
Eldalie
|
open
|
[
"triaged",
"open source",
"release notes: export"
] | 2
|
NONE
|
Prevent users from calling `to()` on modules generated from exported programs, as this is not supported [[PyTorch issue #151010](https://github.com/pytorch/pytorch/issues/151010)]
| true
|
2,995,481,830
|
add Out Notes
|
ILCSFNO
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14
|
CONTRIBUTOR
|
Fixes #150181
@albanD Could you please take a look?
Built locally without a full PyTorch build:

| true
|
2,995,427,732
|
Use Allocator API raw_allocate & raw_dealloc in CUDAAllocator
|
guangyey
|
open
|
[
"oncall: distributed",
"open source",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ciflow/rocm"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151305
# Motivation
While generalizing the device caching allocator in [#138222](https://github.com/pytorch/pytorch/pull/138222), I noticed that `raw_alloc` and `raw_delete` are redundant, as similar functionality is already provided by `raw_allocate` and `raw_deallocate` in [c10::Allocator](https://github.com/pytorch/pytorch/blob/ccfce9ae868131cc87dd99584ab79e316c14e7d4/c10/core/Allocator.h#L190).
In general, when an allocator defines both `allocate` and `raw_deleter`, the base `raw_allocate` and `raw_deallocate` methods become active and provide the necessary behavior. Therefore, I’ve removed the custom definitions of `raw_alloc` and `raw_delete` in `CUDAAllocator` to reduce duplication and simplify future changes in this area. Additionally, `raw_allocate` and `raw_deallocate` in `c10::Allocator` are now virtual to allow custom allocators (e.g., `CUDAPluggableAllocator`) to override them, particularly in cases where `data` and `ctx` in `DataPtr{data, ctx, deleter, device}` are distinct.
This cleanup also helps streamline the review process for the upcoming generalization PR. I am trying to break up common code changes into smaller PRs for easier review.
# Additional Context
`CUDAAllocator` is not a public API, so I removed the redundant `raw_alloc` and `raw_delete` methods from it. However, we rename `raw_alloc` and `raw_delete` in [CUDAPluggableAllocator](https://github.com/pytorch/pytorch/blob/ccfce9ae868131cc87dd99584ab79e316c14e7d4/torch/csrc/cuda/CUDAPluggableAllocator.h#L69), as it is part of the public API and may be relied upon externally. This change introduces some concern, as it may impact downstream users relying on the existing method names.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,995,427,625
|
Optimize `interpolate` saturate description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 7
|
CONTRIBUTOR
|
Fixes #108225
## Test Result
### Before

### After

| true
|
2,995,411,045
|
[Feature Request] Experimental support to Moore Threads GPU MUSA
|
jobs-git
|
open
|
[
"triaged",
"module: backend"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
This is to further democratize AI utilization by enabling emerging hardware, as CUDA hardware becomes more expensive while China invests in GPU technology. There is already an existing implementation: https://github.com/MooreThreads/torch_musa
While CUDA is still the gold standard, experimental support for emerging hardware ensures that PyTorch will function and continue to be used in new AI hardware in the future. Some information on general features can be seen here: https://wccftech.com/china-first-in-house-alternative-to-nvidias-cuda-emerges-online/
### Alternatives
https://github.com/MooreThreads/torch_musa
### Additional context
Support for emerging hardware ensures that PyTorch will function and continue to be used in new AI hardware in the future.
cc @bdhirsh @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD
| true
|
2,995,351,973
|
[Easy] Fix the compilation warning of BlasKernel.
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151302
* #151427
As the title states.
Before the change:
```C++
[2/21] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/BlasKernel.cpp.o
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/BlasKernel.cpp:346:6: warning: ‘void at::native::blas_impl::gemv_fast_path(const char*, const int*, const int*, const scalar_t*, const scalar_t*, const int*, const scalar_t*, const int*, const scalar_t*, scalar_t*, const int*) [with scalar_t = c10::Half]’ defined but not used [-Wunused-function]
346 | void gemv_fast_path<at::Half>(
| ^~~~~~~~~~~~~~~~~~~~~~~~
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/BlasKernel.cpp:329:6: warning: ‘bool at::native::blas_impl::gemv_use_fast_path(char, int64_t, int64_t, scalar_t, int64_t, int64_t, scalar_t, int64_t) [with scalar_t = c10::Half]’ defined but not used [-Wunused-function]
329 | bool gemv_use_fast_path<at::Half>(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/BlasKernel.cpp:301:6: warning: ‘void at::native::blas_impl::gemv_fast_path(const char*, const int*, const int*, const scalar_t*, const scalar_t*, const int*, const scalar_t*, const int*, const scalar_t*, scalar_t*, const int*) [with scalar_t = c10::BFloat16]’ defined but not used [-Wunused-function]
301 | void gemv_fast_path<at::BFloat16>(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/BlasKernel.cpp:273:6: warning: ‘bool at::native::blas_impl::gemv_use_fast_path(char, int64_t, int64_t, scalar_t, int64_t, int64_t, scalar_t, int64_t) [with scalar_t = c10::BFloat16]’ defined but not used [-Wunused-function]
273 | bool gemv_use_fast_path<at::BFloat16>(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
| true
|
2,995,338,344
|
DISABLED test_fake_registration (__main__.TestOpProfiles)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: custom-operators",
"skipped",
"oncall: pt2",
"module: pt2-dispatcher"
] | 9
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_fake_registration&suite=TestOpProfiles&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40550182470).
Over the past 3 hours, it has been determined flaky in 37 workflow(s) with 74 failures and 37 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_fake_registration`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_custom_ops.py", line 4500, in test_fake_registration
torch.library.define(
File "/opt/conda/envs/py_3.10/lib/python3.10/functools.py", line 889, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/library.py", line 555, in define
lib.define(name + schema, alias_analysis="", tags=tags)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/library.py", line 172, in define
result = self.m.define(schema, alias_analysis, tuple(tags))
RuntimeError: Tried to register an operator (mylib::foo(Tensor a, Tensor b) -> Tensor) with the same name and overload name multiple times. Each overload's schema should only be registered with a single call to def(). Duplicate registration: registered at /dev/null:135. Original registration: registered at /dev/null:212
To execute this test, run the following from the base repo dir:
python test/test_custom_ops.py TestOpProfiles.test_fake_registration
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_custom_ops.py`
cc @clee2000 @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,995,338,214
|
DISABLED test_parity__foreach_add_fastpath_inplace_cuda_complex128 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 3
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_add_fastpath_inplace_cuda_complex128&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40550182917).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_add_fastpath_inplace_cuda_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_add_', keys=('aten::_foreach_add_', 'Unrecognized', 'aten::result_type', 'cudaLaunchKernel', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1161, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1173, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.complex128], Tensor[size=(19, 19), device="cuda:0", dtype=torch.complex128], Tensor[size=(18, 18), device="cuda:0", dtype=torch.complex128], Tensor[size=(17, 17), device="cuda:0", dtype=torch.complex128], Tensor[size=(16, 16), device="cuda:0", dtype=torch.complex128], Tensor[size=(15, 15), device="cuda:0", dtype=torch.complex128], Tensor[size=(14, 14), device="cuda:0", dtype=torch.complex128], Tensor[size=(13, 13), device="cuda:0", dtype=torch.complex128], Tensor[size=(12, 12), device="cuda:0", dtype=torch.complex128], Tensor[size=(11, 11), device="cuda:0", dtype=torch.complex128], Tensor[size=(10, 10), device="cuda:0", dtype=torch.complex128], Tensor[size=(9, 9), device="cuda:0", dtype=torch.complex128], Tensor[size=(8, 8), device="cuda:0", dtype=torch.complex128], Tensor[size=(7, 7), device="cuda:0", dtype=torch.complex128], Tensor[size=(6, 6), device="cuda:0", dtype=torch.complex128], Tensor[size=(5, 5), device="cuda:0", dtype=torch.complex128], Tensor[size=(4, 4), device="cuda:0", dtype=torch.complex128], Tensor[size=(3, 3), device="cuda:0", dtype=torch.complex128], Tensor[size=(2, 2), device="cuda:0", dtype=torch.complex128], Tensor[size=(1, 1), device="cuda:0", dtype=torch.complex128]], args=(TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.complex128], Tensor[size=(19, 19), device="cuda:0", dtype=torch.complex128], Tensor[size=(18, 18), device="cuda:0", dtype=torch.complex128], Tensor[size=(17, 17), device="cuda:0", dtype=torch.complex128], Tensor[size=(16, 16), device="cuda:0", dtype=torch.complex128], Tensor[size=(15, 15), device="cuda:0", dtype=torch.complex128], Tensor[size=(14, 14), device="cuda:0", dtype=torch.complex128], Tensor[size=(13, 13), device="cuda:0", dtype=torch.complex128], Tensor[size=(12, 12), device="cuda:0", dtype=torch.complex128], Tensor[size=(11, 11), device="cuda:0", dtype=torch.complex128], Tensor[size=(10, 10), device="cuda:0", dtype=torch.complex128], Tensor[size=(9, 9), device="cuda:0", dtype=torch.complex128], Tensor[size=(8, 8), device="cuda:0", dtype=torch.complex128], Tensor[size=(7, 7), device="cuda:0", dtype=torch.complex128], Tensor[size=(6, 6), device="cuda:0", dtype=torch.complex128], Tensor[size=(5, 5), device="cuda:0", dtype=torch.complex128], Tensor[size=(4, 4), device="cuda:0", dtype=torch.complex128], Tensor[size=(3, 3), device="cuda:0", dtype=torch.complex128], Tensor[size=(2, 2), device="cuda:0", dtype=torch.complex128], Tensor[size=(1, 1), device="cuda:0", dtype=torch.complex128]]), kwargs={'alpha': '(3+3j)'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_add_fastpath_inplace_cuda_complex128
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,995,214,765
|
[custom ops] Fix destroy function
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary:
D72906445 seemed to cause a SIGABRT when running the test in the test plan. The change I narrowed it down to was where in fake_impls the [`deregister_fake_kernel` no longer calls `lib.destroy`](https://github.com/pytorch/pytorch/pull/150806/files#diff-7fd3f4222276c63b91f3a895530bb5efe137fd23165b48f25afcf3c06a5d2a8fL65-L69).
Calling `lib.destroy` in that handle results in a maximum recursion error where someone calls library.destroy which calls the handle which calls back to library.destroy.
So I compared the implementations of `_del_library` and `lib.destroy`, and it seemed like the main difference was deleting `self.m`. Adding that fixed my issue!
Side note, I feel like we can combine `_del_library` and `library._destroy`? But I won't do it in this diff to make sure we don't break too many things 😅
Test Plan:
`buck test 'fbcode//mode/opt' fbcode//aiplatform/gmpp/bulk_eval/reader/service/tests:reader_service_handler_tests -- --exact 'aiplatform/gmpp/bulk_eval/reader/service/tests:reader_service_handler_tests - aiplatform.gmpp.bulk_eval.reader.service.tests.reader_service_handler_tests.ReaderServiceHandlerTests: test_add_preproc_output_into_queue'`
https://www.internalfb.com/intern/testinfra/testrun/10977524170296078
Differential Revision: D73017613
| true
|
2,995,179,893
|
[WIP] Generalize device caching allocator
|
guangyey
|
open
|
[
"open source",
"release notes: cpp"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151298
* #152932
* #138222
| true
|
2,995,169,599
|
[3/N] Use internal linkage in C++ files
|
cyyever
|
closed
|
[
"oncall: jit",
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: jit",
"ciflow/periodic"
] | 6
|
COLLABORATOR
|
Follows #151070.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mingfeima @XiaobingSuper @ashokei @jingxu10 @jerryzh168
| true
|
2,995,120,499
|
[inductor] [assertion error] `torch.select_scatter` crashes on inductor but passes on eager
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: pt2-dispatcher"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `torch.select_scatter` crashes with an `AssertionError`.
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x = torch.select_scatter(x, torch.tensor([0]), 1, 0)
return x
model = Model()
x = torch.randn(1, 10)
inputs = [x]
def run_test(model, inputs, backend):
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(f"succeed on {backend}")
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
eager
```
succeed on eager
```
Inductor
```
LoweringException: AssertionError:
target: aten.select_scatter.default
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cpu', torch.float32, size=[1, 10], stride=[10, 1]))
))
args[1]: TensorBox(StorageBox(
Pointwise(
'cpu',
torch.int64,
def inner_fn(index):
_ = index
tmp0 = ops.constant(0, torch.int64)
return tmp0
,
ranges=[1],
origin_node=full_default,
origins=OrderedSet([full_default])
)
))
args[2]: 1
args[3]: 0
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
2,995,093,773
|
[Hierarchical compile] Ensure output nodes are sorted last
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151295
* #151294
* #151293
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,995,093,693
|
[Hierarchical Compile] Handle autocast ctx manager
|
mlazos
|
closed
|
[
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151295
* __->__ #151294
* #151293
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,995,093,627
|
[Hierarchical Compile] Fix small bug
|
mlazos
|
closed
|
[
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151295
* #151294
* __->__ #151293
This technically would never be exposed because we never check that a node is an ancestor of itself, but it is good for it to be correct.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,995,039,425
|
Remove outdated Android workarounds of nearbyintf
|
cyyever
|
open
|
[
"module: cpu",
"triaged",
"open source",
"oncall: mobile",
"ciflow/trunk",
"release notes: quantization",
"ciflow/periodic",
"ciflow/mps",
"test-config/executorch"
] | 3
|
COLLABORATOR
|
This PR uses std::nearbyint on all supported platforms.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
2,994,968,427
|
Improve error message when calling binary pointwise functions with two jagged nested tensors
|
shink
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #150252
### Changes
Raise a ValueError with a detailed message when calling binary pointwise functions with two jagged nested tensors that have different symint sizes. For example, `(B, j1, D)` - `(B, j2, D)` will raise an error.
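A minimal sketch of the case this targets, assuming the public jagged-layout nested tensor API (shapes are illustrative):
```python
import torch

a = torch.nested.nested_tensor(
    [torch.randn(2, 4), torch.randn(3, 4)], layout=torch.jagged
)  # (B, j1, D)
b = torch.nested.nested_tensor(
    [torch.randn(1, 4), torch.randn(5, 4)], layout=torch.jagged
)  # (B, j2, D)

try:
    a - b
except Exception as e:  # with this change, a ValueError with a detailed message
    print(type(e).__name__, e)
```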
| true
|
2,994,951,511
|
[Inductor UT] Conflicting declaration found in create_block_mask with torch.compile in specific CI image.
|
jianan-gu
|
closed
|
[
"oncall: cpu inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
A conflicting-declaration error occurs when running the following UT with the specific CI ephemeral.linux.2xlarge image.
- Detailed error:
https://github.com/pytorch/pytorch/actions/runs/14444977199/job/40548513199
- UT code: test/inductor/test_flex_attention.py::TestFlexAttentionCPU::test_make_block_mask_cpu
```
def causal_mask(b, h, q_idx, kv_idx):
return q_idx >= kv_idx
block_mask_a = torch.compile(create_block_mask)(
causal_mask, 1, 1, 512, 512, device=device
)
block_mask_b = create_block_mask(causal_mask, 1, 1, 512, 512, device=device)
```
### Versions
CI ephemeral.linux.2xlarge image
See https://github.com/pytorch/pytorch/actions/runs/14444977199/job/40548513199
| true
|
2,994,928,426
|
Fix CosineAnnealingWarmRestarts reset T_cur
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 9
|
CONTRIBUTOR
|
Fixes #88791
## Test Result
```python
pytest test/optim/test_lrscheduler.py -k test_CosineAnnealingWarmRestarts
```

| true
|
2,994,844,580
|
[MPSInductor] Adjust memory format detection
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150821
* __->__ #151288
* #151282
* #151272
* #151246
* #151224
The MPS conv implementation will only yield channels-last output if the input is in channels_last format
Fixes `TestGPUTests.test_conv2d_backward_channels_last` on MacOS-15
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,994,810,057
|
There is a significant performance degradation in the Triton operator generated for scaled_dot_product_attention by TorchInductor's aotcompile.
|
sujuyu
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
I have created a simple demo using cross-attention.
```python
import os
os.environ['TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS'] = "ATEN,CPP"
os.environ['TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_SEARCH_SPACE'] = "EXHAUSTIVE"
os.environ['TORCHINDUCTOR_MAX_AUTOTUNE_GEMM'] = "1"
import torch
import torch.nn.functional as F
import time
os.makedirs("./torch251_target", exist_ok=True)
os.environ['TORCHINDUCTOR_CACHE_DIR'] = "./torch251_target"
q, k, v = (
torch.randn(1000, 4, 1, 32).cuda(),
torch.randn(1, 4, 2048, 32).cuda(),
torch.randn(1, 4, 2048, 32).cuda(),
)
class M(torch.nn.Module):
def forward(self, q, k, v):
x = F.scaled_dot_product_attention(q, k, v)
return x
```
Per the shapes above, q has a sequence length of 1 (with a batch of 1000), while k and v have a sequence length of 2048. The forward operation is repeated 1000 times in both the Python runtime and AOTInductor, using amp.autocast for half-precision inference.
```python
if __name__ == "__main__":
model = M().cuda().eval()
exported_model = torch.export.export(
model,
(q, k, v),
dynamic_shapes={
"q": {0: torch.export.Dim("batch", min=1, max=2048)},
"k": None,
"v": None
},
)
# exported_model = exported_model.run_decompositions()
fx_model = torch.fx.symbolic_trace(exported_model.module()).cuda()
fx_model(q, k, v)
# repeat 100 times to test the cost
with torch.amp.autocast(device_type = "cuda", enabled=True, dtype=torch.float16):
start_time = time.perf_counter()
for _ in range(1000):
fx_model(q, k, v)
end_time = time.perf_counter()
print(f"fx model cost {end_time - start_time} s")
with torch.amp.autocast(device_type = "cuda", enabled=True, dtype=torch.float16):
dynamic_lib_path = torch._export.aot_compile(
fx_model,
(q, k, v),
dynamic_shapes = (
{0: torch.export.Dim("batch", min=1, max=2048)},
None,
None,
),
options={
"aot_inductor.output_path": os.path.join(
"./torch251_target_before_decompose.so"
),
"max_autotune": True,
},
)
aot_model = torch._export.aot_load(
dynamic_lib_path,
"cuda"
)
q, k, v = (
torch.randn(1000, 4, 1, 32).cuda(),
torch.randn(1, 4, 2048, 32).cuda(),
torch.randn(1, 4, 2048, 32).cuda(),
)
# warm up
for _ in range(100):
aot_model(q, k, v)
# repeat 1000 times to test the cost
start_time = time.perf_counter()
for _ in range(1000):
aot_model(q, k, v)
end_time = time.perf_counter()
print(f"aot_model cost {end_time - start_time} s")
```
the print info is:
```bash
fx model cost 11.029950745403767 s
aot_model cost 11.29344904050231 s
```
For an NVIDIA A10, these numbers are already very slow. If I add the line `exported_model = exported_model.run_decompositions()`, the performance actually improves significantly:
```bash
fx model cost 14.736855018883944 s
aot_model cost 4.444696590304375 s
```
I tried to find the answer from the generated C++ code. AOTInductor did not use operators like aten::_scaled_dot_product_flash_attention; instead, it replaced them with a Triton operator.
```C++
// Topologically Sorted Source Nodes: [scaled_dot_product_attention_default], Original ATen: [aten.mul]
auto triton_poi_fused_mul_0_xnumel = 128L*s0;
if (kernels.triton_poi_fused_mul_0 == nullptr) {
kernels.triton_poi_fused_mul_0 = loadKernel("./torch251_target/./cjgeab4gltlrnrbe62w4fewx4ni35pmm265wyndlslk4ddfdocgc.cubin", "triton_", 0, this->cubin_dir_);
}
```
"I think using _scaled_dot_product_flash_attention could further improve the speed. Is there a switch in AOTInductor that allows me to disable the Triton optimization for SDPA?
I have some additional information. If the dimensions of my q, k, v are all (1000, 4, 256, 32), simulating a self-attention scenario, even though the computational load increases significantly, the C++ code directly uses aten::_scaled_dot_product_flash_attention, and the performance is much better compared to the current cross-attention.
```C++
auto buf21 = at::_ops::_scaled_dot_product_flash_attention::call(buf18, buf19, buf20, 0.0, false, false, 0.17677669529663687);
```
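As a side note for comparison (this is an eager-mode aid, not an AOTInductor switch): assuming `torch.nn.attention.sdpa_kernel` is available in this build, one can check which fused SDPA backends accept the cross-attention shapes versus the self-attention shapes:
```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

def try_backend(backend, q, k, v):
    # Restrict SDPA to a single backend and see whether it can serve the shapes.
    try:
        with sdpa_kernel(backend):
            F.scaled_dot_product_attention(q, k, v)
        return "ok"
    except RuntimeError as e:
        return f"unsupported: {e}"

q = torch.randn(1000, 4, 1, 32, device="cuda", dtype=torch.float16)
k = torch.randn(1, 4, 2048, 32, device="cuda", dtype=torch.float16)
v = torch.randn(1, 4, 2048, 32, device="cuda", dtype=torch.float16)
for b in (SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH):
    print(b, try_backend(b, q, k, v))
```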
### Versions
$python collect_env.py
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Alibaba Cloud Linux 3 (Soaring Falcon) (x86_64)
GCC version: (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3.8 2.32)
Clang version: 17.0.6 (Alibaba Cloud Compiler 17.0.6.4-24.11.20.alios7)
CMake version: version 3.31.1
Libc version: glibc-2.32
Python version: 3.10.0 | packaged by conda-forge | (default, Nov 20 2021, 02:24:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.32
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 2899.992
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.5.1+cu121
[pip3] torch_tensorrt==2.5.0
[pip3] torchaudio==2.5.1+cu121
[pip3] torchmetrics==1.0.3
[pip3] torchrec==1.1.0a0+7e7819e
[pip3] torchvision==0.20.1+cu121
[pip3] torchx==0.7.0
[pip3] triton==3.1.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.5.1+cu121 pypi_0 pypi
[conda] torch-tensorrt 2.5.0 pypi_0 pypi
[conda] torchaudio 2.5.1+cu121 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchrec 1.1.0a0+7e7819e pypi_0 pypi
[conda] torchvision 0.20.1+cu121 pypi_0 pypi
[conda] torchx 0.7.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
| true
|
2,994,777,851
|
vmap- out of memory
|
GLCUI
|
open
|
[
"triaged",
"module: vmap"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
I find that torch.vmap needs several times more memory when I use vmap to perform batched matrix multiplication than a normal neural network Linear operation (e.g. nn.Linear) needs on the same batch. Is there any optimization method?
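For reference, a minimal sketch of the two formulations being compared (illustrative shapes; it shows the equivalence rather than the memory gap itself):
```python
import torch
from torch.func import vmap

B, D_in, D_out = 4096, 1024, 1024
x = torch.randn(B, D_in)
W = torch.randn(D_out, D_in)

# vmap over the batch dimension: each slice computes xi @ W.t().
vmapped = vmap(lambda xi: xi @ W.t())(x)

# The same computation as a single batched call, like nn.Linear does.
linear = torch.nn.functional.linear(x, W)

print(torch.allclose(vmapped, linear, atol=1e-4))
```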
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,994,770,627
|
there is no CUDA 12.1 build of PyTorch 2.6.0; when updating PyTorch, the CUDA version is always not suitable for the PyTorch version
|
cqray1990
|
closed
|
[
"module: binaries",
"oncall: releng"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
There is no CUDA 12.1 build of PyTorch 2.6.0; when updating PyTorch, the CUDA version is always not suitable for the PyTorch version.
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,994,714,740
|
[cutlass backend] "Fix" FlexibleLayout
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151284
So Horace was right, Triton does fix the layout when rendering the template (i.e. roughly at the same time).
You can double-check by running the unit test with the gemm backend set to "TRITON,CUTLASS". You will notice that the layout is fixed if Triton is in the gemm backend list, but flexible if Triton is not there.
code pointer: https://github.com/pytorch/pytorch/blob/main/torch/_inductor/select_algorithm.py#L927
In the future, we should remove `fix_op_layout` from class CUTLASSGemmTemplate. But maybe we can monitor it for a bit first.
Differential Revision: [D72996143](https://our.internmc.facebook.com/intern/diff/D72996143/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,994,708,131
|
Replace all random is_fbcode imports to environment
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151283
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,994,695,281
|
[MPS] Fix logit output for half/bfloat
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150821
* #151288
* __->__ #151282
* #151272
* #151246
* #151224
This also fixes the MPSInductor pointwise test.
TODO: (as followup PRs): get rid of special native_function.yaml dispatches and use stub
| true
|
2,994,674,307
|
DISABLED test_duplicate_registration_impl (__main__.TestOpProfiles)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: custom-operators",
"skipped",
"oncall: pt2",
"module: pt2-dispatcher"
] | 7
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, win, windows, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_duplicate_registration_impl&suite=TestOpProfiles&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40542477311).
Over the past 3 hours, it has been determined flaky in 60 workflow(s) with 122 failures and 60 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_duplicate_registration_impl`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_custom_ops.py`
cc @clee2000 @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,994,630,358
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,994,620,812
|
[cutlass backend][ez] Ban FP32 output dtype from using CUTLASS GEMM backend
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151279
FP32 not supported: https://github.com/pytorch/pytorch/issues/145952
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,994,591,812
|
[dynamo] replace `unimplemented` with `unimplemented_v2` in `variables/torch_functions.py`
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151278
* #151277
This addresses part of #147913.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,994,591,707
|
[dynamo] replace `unimplemented` with `unimplemented_v2` in `variables/functions.py`
|
StrongerXi
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151278
* __->__ #151277
This addresses part of #147913.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,994,578,555
|
[Inductor] Modify persistent+TMA template for Triton mm and admm to use new TMA API
|
NikhilAPatel
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151276
Summary:
This PR modifies the Triton template for persistent+TMA mm and admm to use the new functional API for TMA introduced here: https://github.com/triton-lang/triton/pull/6248/
This also involves setting a global Triton allocator function to be called at kernel launch for any kernels that require additional global memory workspace. This is done in triton_heuristics.py directly before kernels are launched.
Test Plan:
contbuild & OSS CI
Reviewers: paulzhan
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,994,557,082
|
[sigmoid] memory planner C10 deps
|
dolpm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 28
|
CONTRIBUTOR
|
Summary: perf-sensitive util functions for use in our memory planner
Test Plan: CI
Differential Revision: D73002726
| true
|
2,994,511,729
|
[inductor] disable alignment asserts in fbcode
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151274
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,994,484,281
|
[AOTInductor] Add states for constant folding process
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151273
Summary:
We add states in the constant folding process for AOTInductor.
Basically, there are 3 states:
(1) None: the state when no constants are loaded and the model is uninitialized.
(2) Initialized: the state when constants are loaded, but not yet folded.
(3) Folded: the state where the model is fully ready with folded constants.
Note that even if constant folding is not enabled, we still only run when the
state is FOLDED. This is okay because without constant folding, the
transition from INITIALIZED to FOLDED is just a pass-through.
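A rough Python sketch of the state machine described above (names are illustrative, not the actual AOTInductor code):
```python
from enum import Enum, auto

class ConstantState(Enum):
    NONE = auto()         # no constants loaded, uninitialized
    INITIALIZED = auto()  # constants loaded but not yet folded
    FOLDED = auto()       # constants folded; model ready to run

def advance(state: ConstantState) -> ConstantState:
    # Whether or not constant folding is enabled, INITIALIZED -> FOLDED;
    # without folding the transition is just a pass-through.
    if state is ConstantState.INITIALIZED:
        return ConstantState.FOLDED
    return state
```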
Test Plan:
python test/inductor/test_aot_inductor.py -k test_constant_folding_with_update
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
Differential Revision: [D73002538](https://our.internmc.facebook.com/intern/diff/D73002538)
| true
|
2,994,473,488
|
[MPSInductor] Fix silent correctness in bitcast
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150821
* __->__ #151272
* #151246
* #151224
By using Metal `as_type` which according to documentation does exactly
that:
> Metal adds an as_type<type-id> operator to allow any scalar or vector data type (that is not
a pointer) to be reinterpreted as another scalar or vector data type of the same size. The bits in
the operand are returned directly without modification as the new type. The usual type
promotion for function arguments is not performed.
Using `reinterpret_cast` created a potential silent correctness error when dtypes of different sizes were bitcast to each other.
Also add an explicit cast to src_type to avoid errors due to type promotion (i.e.
something like `(x+1).view(dtype=torch.float16)` would work correctly in
eager mode for the int16 dtype, but would fail in compile, as arithmetic
operations will promote int16 to int32).
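For reference, a small eager-mode illustration of the promotion and bitcast behavior mentioned above (illustrative example on CPU):
```python
import torch

x = torch.arange(4, dtype=torch.int16)
y = (x + 1).view(dtype=torch.float16)  # reinterprets the 16-bit payload
print((x + 1).dtype, y.dtype)          # torch.int16 torch.float16
```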
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,994,458,541
|
[WIP][dynamic shapes] lru cache bound_sympy
|
pianpwk
|
open
|
[
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,994,447,228
|
Fix score_mod.py dynamic max autotune for backward
|
fegin
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151270
Same as https://github.com/pytorch/pytorch/pull/148991 but this PR fixes the backward path.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,994,306,456
|
[PP] Hang when num_microbatches < stages for Interleaved1F1B
|
H-Huang
|
open
|
[
"oncall: distributed",
"triaged",
"module: pipelining"
] | 0
|
MEMBER
|
Seeing a hang when pp_rank=4 and num_stages=8 with num_microbatches=2 for `Interleaved1F1B`. This is reproducible in torchtitan.
Issue has not been root caused yet. First step is to write a unit test to reproduce this in PyTorch.
cc @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,994,300,904
|
DISABLED test_parity__foreach_add_fastpath_inplace_cuda_bool (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_add_fastpath_inplace_cuda_bool&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40526159287).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_add_fastpath_inplace_cuda_bool`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,994,300,838
|
DISABLED test_kineto_profiler_with_environment_variable (__main__.SimpleKinetoInitializationTest)
|
pytorch-bot[bot]
|
open
|
[
"module: flaky-tests",
"skipped",
"oncall: profiler"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_kineto_profiler_with_environment_variable&suite=SimpleKinetoInitializationTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40526253603).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_kineto_profiler_with_environment_variable`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/profiler/test_kineto.py", line 25, in test_kineto_profiler_with_environment_variable
subprocess.check_output(
File "/opt/conda/envs/py_3.10/lib/python3.10/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/opt/conda/envs/py_3.10/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/opt/conda/envs/py_3.10/bin/python', '-W', 'always', '-c', '\nimport torch\nif torch.cuda.is_available() > 0:\n torch.cuda.init()\n']' died with <Signals.SIGSEGV: 11>.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/profiler/test_kineto.py", line 31, in test_kineto_profiler_with_environment_variable
self.assertTrue(
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Kineto is not working properly with the Dynolog environment variable
To execute this test, run the following from the base repo dir:
python test/profiler/test_kineto.py SimpleKinetoInitializationTest.test_kineto_profiler_with_environment_variable
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `profiler/test_kineto.py`
cc @clee2000 @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,994,180,469
|
[Dynamo] Implement sourceless named tuple support
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 6
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/140903
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,994,131,512
|
[dynamo] Avoid unnecessary `.detach()` call in `_make_subclass` polyfill
|
StrongerXi
|
open
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151265
This brings down compilation time quite a bit for certain tensor
subclass + `torch.compile` use cases, see #150706.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,994,089,268
|
[reland] Make export._trace._WrapperModule work in strict mode (#146919)
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
Summary:
as title
`export._trace._WrapperModule` is used to wrap functions into a Module so we can export the function.
We add `export._wrapper_utils` to `dynamo`'s `MOD_INLINELIST` so that dynamo traces into `_WrapperModule`.
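For context, a minimal sketch (not the PR's code; the wrapper name, function, and shapes are illustrative) of why a plain function gets wrapped in a Module before export:
```python
import torch

class _Wrapper(torch.nn.Module):  # simplified stand-in for _WrapperModule
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

def my_fn(x):
    return x.sin() + 1

# torch.export expects an nn.Module, so the function is wrapped first;
# strict=True routes tracing through dynamo, which must inline the wrapper.
ep = torch.export.export(_Wrapper(my_fn), (torch.randn(3),), strict=True)
print(ep.graph)
```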
Fixes https://github.com/pytorch/pytorch/issues/146867
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test:test_export -- -r wrapper_module
```
Differential Revision: D72986826
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,994,085,358
|
Make torch._chunk_cat support non-contiguous inputs
|
yf225
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Currently, `torch._chunk_cat` only supports contiguous inputs (due to `.view()` usage in `_pad_chunk()` supporting only contiguous tensor). This doesn't work for internal models where there can be non-contiguous input tensors:
- size=[8192, 16416], stride=[16448, 1] # stride[0] is larger than size[1]
- size=[1152, 384], stride=[1, 1152] # column-major tensor
In this PR, we relax the contiguous-input assumption by switching from `.view()` to `.reshape()`. Note that since `.reshape()` will try to use `.view()` under the hood whenever possible, this should not cause a regression for existing use cases.
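A minimal sketch (shapes illustrative) of the underlying difference between the two calls on a non-contiguous tensor:
```python
import torch

# Column-major layout similar to the second case above: stride=(1, 1152).
t = torch.randn(384, 1152).t()
assert not t.is_contiguous()

try:
    t.view(-1)  # fails: a view cannot be produced for this layout
except RuntimeError as e:
    print("view failed:", e)

flat = t.reshape(-1)  # succeeds; copies only when a view is impossible
```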
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151263
| true
|
2,994,060,562
|
[WIP][SymmMem] Add sendrecv op
|
kwen2501
|
open
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151993
* #151819
* #151498
* __->__ #151262
* #151261
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,994,060,432
|
[SymmMem] Experimental NVSHMEM integration
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151993
* #151819
* #151498
* __->__ #151261
Adding NVSHMEM as a backend for `SymmetricMemory`, implementation of which is in `NVSHMEMSymmetricMemory.cu`.
Moving some helper functions in `CUDASymmetricMemory.cu` to `CUDASymmetricMemoryUtils.cpp`, so that they can be shared by `NVSHMEMSymmetricMemory`. These functions are mostly side-band exchange helpers (`store_all_gather`, `IpcChannel`, etc).
Adding `TORCH_SYMMEM` to control which implementation to use for CUDA tensors; currently supported: `CUDA` (in-house impl) and `NVSHMEM`.
The NVSHMEM feature is gated by the build-time flag `USE_NVSHMEM=1`, and the `NVSHMEM_HOME` setting is required (TODO).
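A brief usage sketch based only on the description above (when the env var is read is an assumption):
```python
import os

# Assumed from this PR's description: select the SymmetricMemory backend for
# CUDA tensors at runtime. The build must have used USE_NVSHMEM=1 with
# NVSHMEM_HOME pointing at the NVSHMEM installation.
os.environ.setdefault("TORCH_SYMMEM", "NVSHMEM")

# Import torch after setting the env var, assuming the flag is read at init.
import torch
```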
Ported most code from #146593.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,994,060,320
|
clang-format CUDASymmetricMemory.cu
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151262
* #151261
* __->__ #151260
Ported from #146592
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,994,054,112
|
[ONNX] Produce correct dtypes for bf16/f8 in IR TorchTensor
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: bug fixes"
] | 4
|
COLLABORATOR
|
Split the changes from https://github.com/pytorch/pytorch/pull/151069 to address https://github.com/microsoft/onnxscript/issues/2187, where the output np arrays do not have the correct ml_dtypes types as expected.
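For reference, a small illustration (not the PR's code) of the dtype expectation via the `ml_dtypes` package:
```python
import numpy as np
import ml_dtypes  # provides numpy-compatible bfloat16 / float8 dtypes

# The numpy array produced for a bf16 value is expected to carry the
# ml_dtypes dtype rather than, e.g., a raw uint16 reinterpretation.
a = np.asarray([1.0, 2.0], dtype=ml_dtypes.bfloat16)
print(a.dtype)               # bfloat16
print(a.astype(np.float32))  # round-trips back to float32 values
```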
| true
|
2,994,040,897
|
[AMD][FA] Block mem efficient attention if backward head_dim > 128 in CK backend
|
merengue171
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 9
|
NONE
|
Summary:
https://github.com/ROCm/flash-attention?tab=readme-ov-file
{F1977092246}
CK doesn't support bwd head_dim > 128. We'll exclude mem-efficient attention and pick the math backend if the CK backend is used and bwd head_dim > 128.
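A hedged sketch (shapes and dtype assumed) of the kind of call that should now fall back to the math backend for backward:
```python
import torch
import torch.nn.functional as F

# head_dim = 256 > 128: with the CK backend, mem-efficient attention should be
# excluded and the math backend selected instead for the backward pass.
q, k, v = (
    torch.randn(2, 8, 1024, 256, device="cuda", dtype=torch.float16, requires_grad=True)
    for _ in range(3)
)
out = F.scaled_dot_product_attention(q, k, v)
out.sum().backward()
```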
Test Plan:
buck2 run mode/opt scripts/xdwang/example:sdpa -- --head_dim 256
fwd+bwd:
{F1977100595}
Differential Revision: D72973245
| true
|
2,994,014,000
|
Gracefully handle optree less than minimum version, part 2
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151257
If optree is less than the minimum version, we should pretend it doesn't
exist.
The problem right now is:
- Install optree==0.12.1
- `import torch._dynamo`
- This raises an error: "min optree version is 0.13.0"
The fix is to pretend optree doesn't exist if it is less than the min
version.
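A minimal sketch of the general guard pattern (simplified; not the exact PyTorch code):
```python
MIN_OPTREE_VERSION = (0, 13, 0)

try:
    import optree
except ImportError:
    optree = None

if optree is not None:
    # Assumes a plain "X.Y.Z" version string for brevity.
    installed = tuple(int(p) for p in optree.__version__.split(".")[:3])
    if installed < MIN_OPTREE_VERSION:
        optree = None  # below the minimum: behave as if it were not installed

HAS_OPTREE = optree is not None
```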
There are ways to clean up this PR more (e.g. have a single source of
truth for the version, some of the variables are redundant), but I am
trying to reduce the risk as much as possible for this to go into 2.7.
Test Plan:
I verified the above problem was fixed. Also tried some other things,
like the following, which now gives the expected behavior.
```py
>>> import torch
>>> import optree
>>> optree.__version__
'0.12.1'
>>> import torch._dynamo
>>> import torch._dynamo.polyfills.pytree
>>> import torch.utils._pytree
>>> import torch.utils._cxx_pytree
ImportError: torch.utils._cxx_pytree depends on optree, which is an optional dependency of PyTorch. To use it, please upgrade your optree package to >= 0.13.0
```
I also audited all non-test callsites of optree and torch.utils._cxx_pytree.
Follow along with me:
optree imports
- torch.utils._cxx_pytree. This is fine.
- [guarded by check] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/_dynamo/polyfills/pytree.py#L29-L31
_cxx_pytree imports
- [guarded by check] torch.utils._pytree (changed in this PR)
- [guarded by check] torch/_dynamo/polyfills/pytree.py (changed in this PR)
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/distributed/_functional_collectives.py#L17
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/distributed/tensor/_op_schema.py#L15
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/distributed/tensor/_dispatch.py#L35
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/_dynamo/variables/user_defined.py#L94
- [guarded by try-catch] https://github.com/pytorch/pytorch/blob/f76b7ef33cc30f7378ef71a201f68a2bef18dba0/torch/distributed/tensor/experimental/_func_map.py#L14
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,994,003,430
|
[aot autograd][logging] Profile large missing gaps in compile time tracing
|
anijain2305
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151410
* #151409
* #150704
* #150717
* #151357
* __->__ #151256
* #151330
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,957,837
|
[cutlass backend][experimental] Try out presets for cutlass instead of searching all configs
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151255
Differential Revision: [D72668861](https://our.internmc.facebook.com/intern/diff/D72668861/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,936,426
|
`torch.compile(mode="max-autotune-without-cudagraph")` errors in triton compiler
|
StrongerXi
|
closed
|
[
"oncall: pt2"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This was originally observed in https://github.com/city96/ComfyUI-GGUF/issues/250.
After 20250405 nightly, `torch.compile(mode="max-autotune-without-cudagraph")` started to error for ComfyUI Flux, when we graph break on the attention (so it's not specific to sageattention as the original issue suggested).
I can't really come up with a repro without ComfyUI, so after chatting with @exclamaforte I'm creating this issue for now.
The error part of the logs (see more raw logs below):
```verbatim
triton_heuristics.py:617] Triton compilation failed: Placeholder.DESCRIPTIVE_NAME
triton_heuristics.py:617] def triton_mm(in_ptr0, arg_A, arg_B, out_ptr0):
triton_heuristics.py:617] EVEN_K : tl.constexpr = True
triton_heuristics.py:617] ALLOW_TF32 : tl.constexpr = False
triton_heuristics.py:617] USE_FAST_ACCUM : tl.constexpr = False
triton_heuristics.py:617] ACC_TYPE : tl.constexpr = tl.float32
triton_heuristics.py:617] BLOCK_M : tl.constexpr = 16
triton_heuristics.py:617] BLOCK_N : tl.constexpr = 32
triton_heuristics.py:617] BLOCK_K : tl.constexpr = 16
triton_heuristics.py:617] GROUP_M : tl.constexpr = 8
triton_heuristics.py:617] A = arg_A
triton_heuristics.py:617] B = arg_B
triton_heuristics.py:617]
triton_heuristics.py:617] M = 1
triton_heuristics.py:617] N = 18432
triton_heuristics.py:617] K = 3072
triton_heuristics.py:617] if M * N == 0:
triton_heuristics.py:617] # early exit due to zero-size input(s)
triton_heuristics.py:617] return
triton_heuristics.py:617] stride_am = 0
triton_heuristics.py:617] stride_ak = 1
triton_heuristics.py:617] stride_bk = 1
triton_heuristics.py:617] stride_bn = 3072
triton_heuristics.py:617]
triton_heuristics.py:617] # based on triton.ops.matmul
triton_heuristics.py:617] pid = tl.program_id(0)
triton_heuristics.py:617] grid_m = (M + BLOCK_M - 1) // BLOCK_M
triton_heuristics.py:617] grid_n = (N + BLOCK_N - 1) // BLOCK_N
triton_heuristics.py:617]
triton_heuristics.py:617] # re-order program ID for better L2 performance
triton_heuristics.py:617] width = GROUP_M * grid_n
triton_heuristics.py:617] group_id = pid // width
triton_heuristics.py:617] group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
triton_heuristics.py:617] pid_m = group_id * GROUP_M + (pid % group_size)
triton_heuristics.py:617] pid_n = (pid % width) // (group_size)
triton_heuristics.py:617] tl.assume(pid_m >= 0)
triton_heuristics.py:617] tl.assume(pid_n >= 0)
triton_heuristics.py:617]
triton_heuristics.py:617] rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
triton_heuristics.py:617] rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
triton_heuristics.py:617] if ((stride_am == 1 and stride_ak == M) or (stride_am == K and str
triton_heuristics.py:617] offs_a_m = tl.max_contiguous(tl.multiple_of(rm % M, BLOCK_M),
triton_heuristics.py:617] else:
triton_heuristics.py:617] offs_a_m = rm % M
triton_heuristics.py:617] if ((stride_bk == 1 and stride_bn == K) or (stride_bk == N and str
triton_heuristics.py:617] offs_b_n = tl.max_contiguous(tl.multiple_of(rn % N, BLOCK_N),
triton_heuristics.py:617] else:
triton_heuristics.py:617] offs_b_n = rn % N
triton_heuristics.py:617] offs_k = tl.arange(0, BLOCK_K)
triton_heuristics.py:617] acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=ACC_TYPE)
triton_heuristics.py:617]
triton_heuristics.py:617] for k_idx in range(0, tl.cdiv(K, BLOCK_K)):
triton_heuristics.py:617]
triton_heuristics.py:617] a_k_idx_vals = offs_k[None, :] + (k_idx * BLOCK_K)
triton_heuristics.py:617] b_k_idx_vals = offs_k[:, None] + (k_idx * BLOCK_K)
triton_heuristics.py:617]
triton_heuristics.py:617] idx_m = offs_a_m[:, None]
triton_heuristics.py:617] idx_n = a_k_idx_vals
triton_heuristics.py:617] xindex = idx_n
triton_heuristics.py:617] a = tl.load(A + (xindex))
triton_heuristics.py:617]
triton_heuristics.py:617] idx_m = b_k_idx_vals
triton_heuristics.py:617] idx_n = offs_b_n[None, :]
triton_heuristics.py:617] xindex = idx_m + 3072*idx_n
triton_heuristics.py:617] b = tl.load(B + (xindex))
triton_heuristics.py:617]
triton_heuristics.py:617]
triton_heuristics.py:617] acc += tl.dot(a, b, allow_tf32=ALLOW_TF32, out_dtype=ACC_TYPE)
triton_heuristics.py:617]
triton_heuristics.py:617]
triton_heuristics.py:617] # rematerialize rm and rn to save registers
triton_heuristics.py:617] rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
triton_heuristics.py:617] rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
triton_heuristics.py:617] idx_m = rm[:, None]
triton_heuristics.py:617] idx_n = rn[None, :]
triton_heuristics.py:617] mask = (idx_m < M) & (idx_n < N)
triton_heuristics.py:617]
triton_heuristics.py:617] # inductor generates a suffix
triton_heuristics.py:617] xindex = idx_n + 18432*idx_m
triton_heuristics.py:617] tmp0 = tl.load(in_ptr0 + (tl.broadcast_to(idx_n, acc.shape)), mask
triton_heuristics.py:617] tmp1 = acc + tmp0
triton_heuristics.py:617] tl.store(out_ptr0 + (tl.broadcast_to(idx_n, acc.shape)), tmp1, mas
triton_heuristics.py:617]
triton_heuristics.py:617] metadata: {'signature': {'in_ptr0': '*bf16', 'arg_A': '*bf16', 'arg_B'
triton_heuristics.py:617] Traceback (most recent call last):
triton_heuristics.py:617] File "/home/ryanguo99/.conda/envs/comfyui/lib/python3.12/site-packag
triton_heuristics.py:617] return fn(*args, **kwargs)
triton_heuristics.py:617] ^^^^^^^^^^^^^^^^^^^
triton_heuristics.py:617] File "/home/ryanguo99/.conda/envs/comfyui/lib/python3.12/site-packag
triton_heuristics.py:617] return semantic.dot(input, other, acc, input_precision, max_num_im
triton_heuristics.py:617] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
triton_heuristics.py:617] File "/home/ryanguo99/.conda/envs/comfyui/lib/python3.12/site-packag
triton_heuristics.py:617] assert lhs.shape[-2].value >= min_dot_size[0] and lhs.shape[-1].va
triton_heuristics.py:617] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
triton_heuristics.py:617] AssertionError: Input shapes should have M >= 16, N >= 16 and K >= 16
triton_heuristics.py:617]
triton_heuristics.py:617] The above exception was the direct cause of the following exception:
triton_heuristics.py:617]
triton_heuristics.py:617] Traceback (most recent call last):
triton_heuristics.py:617] File "/home/ryanguo99/pt/pytorch/torch/_inductor/runtime/triton_heur
triton_heuristics.py:617] binary = triton.compile(*compile_args, **compile_kwargs)
triton_heuristics.py:617] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
triton_heuristics.py:617] File "/home/ryanguo99/.conda/envs/comfyui/lib/python3.12/site-packag
triton_heuristics.py:617] module = src.make_ir(options, codegen_fns, module_map, context)
triton_heuristics.py:617] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
triton_heuristics.py:617] File "/home/ryanguo99/.conda/envs/comfyui/lib/python3.12/site-packag
triton_heuristics.py:617] return ast_to_ttir(self.fn, self, context=context, options=options
triton_heuristics.py:617] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
triton_heuristics.py:617] triton.compiler.errors.CompilationError: at 67:15:
triton_heuristics.py:617] idx_m = offs_a_m[:, None]
triton_heuristics.py:617] idx_n = a_k_idx_vals
triton_heuristics.py:617] xindex = idx_n
triton_heuristics.py:617] a = tl.load(A + (xindex))
triton_heuristics.py:617]
triton_heuristics.py:617] idx_m = b_k_idx_vals
triton_heuristics.py:617] idx_n = offs_b_n[None, :]
triton_heuristics.py:617] xindex = idx_m + 3072*idx_n
triton_heuristics.py:617] b = tl.load(B + (xindex))
triton_heuristics.py:617]
triton_heuristics.py:617]
triton_heuristics.py:617] acc += tl.dot(a, b, allow_tf32=ALLOW_TF32, out_dtype=ACC_TYPE)
```
### Error logs
1. tlparse: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpYZ7xEV/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
2. `TORCH_LOGS="+inductor"` logs: P1785426785
3. pdb log: P1785426374
### Versions
main 1a1a32ce5af, python 3.12, triton 3.2.0
cc @chauhang @penguinwu
| true
|
2,993,923,961
|
[CUDA][CUTLASS] CUTLASS 3.9 submodule upgrade
|
eqy
|
closed
|
[
"oncall: distributed",
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Originally authored by Jack Kosaian, likely needs #ifdefs if we want to preserve compat with 3.8
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @ptrblck @msaroufim @jerryzh168
| true
|
2,993,914,917
|
test_store: fix timeout for test_queues
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
MEMBER
|
Fixes #151216, #151215
Previously I forgot to revert the timeout after setting it for the timeout test.
To prevent this in the future, I split the test into 3 different tests so that timeout testing is isolated.
Test plan:
Stress tested
```
pytest test/distributed/test_store.py -k queue -v -s --minutes 10
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
2,993,899,910
|
Actually support LOAD_BUILD_CLASS
|
williamwen42
|
open
|
[
"feature",
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 0
|
MEMBER
|
Actually implement tracing rules for `LOAD_BUILD_CLASS`.
Followup to https://github.com/pytorch/pytorch/issues/128942, which was patched with a better error message in https://github.com/pytorch/pytorch/pull/150323.
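For context, a small illustration of where the opcode comes from (plain CPython, no Dynamo-specific assumptions):
```python
import dis

# Any `class` statement compiles to LOAD_BUILD_CLASS, which is the bytecode
# Dynamo needs tracing rules for.
src = """
class Point:
    pass
"""
dis.dis(compile(src, "<demo>", "exec"))  # the disassembly includes LOAD_BUILD_CLASS
```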
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @guilhermeleobas
| true
|
2,993,867,757
|
[provenance_tracking] Add node mapping support for ExternKernel type
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 30
|
CONTRIBUTOR
|
Summary:
As title.
Add node-mapping support for another case: the ExternKernel type.
Test Plan:
Test tlparse link output: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmptwbYfX/dedicated_log_torch_trace_opt1ka7p.log/-_0_0_0/inductor_triton_kernel_to_post_grad_nodes_15.json?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
Complete tlparse output link for test:
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmptwbYfX/dedicated_log_torch_trace_opt1ka7p.log/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,804,144
|
update visualizer with compare two schedules method
|
H-Huang
|
open
|
[
"oncall: distributed"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151249
* #151248
* #150359
* #150347
cc @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,993,803,706
|
Add get_pipeline_order() for Gpipe and 1F1B
|
H-Huang
|
open
|
[
"oncall: distributed"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151249
* __->__ #151248
* #150359
* #150347
| true
|
2,993,753,907
|
[c10d][fr] Enable FR analysis script for rest of all coalesce op
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"suppress-api-compatibility-check",
"suppress-bc-linter"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151238
* __->__ #151247
* #151243
We revisited how coalesced collectives work in https://github.com/pytorch/pytorch/pull/151243, and we now want to enable the script to work for the slow path. The change is indeed bc-breaking, but it is needed to make this work, and the API is an internal-use API; it is not user facing. For the slow path, each individual collective has input sizes and output sizes recorded but no state. The final one has the state ready. We check the correctness of each individual collective one by one, but we don't check the state match for these collectives; we can only check the state match for the last one, which is the work item with the coalesced label.
Added more unit tests for the slow path.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,993,748,670
|
[MPSInductor] Cast halfs to floats
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150821
* __->__ #151246
* #151224
To avoid accuracy issues when small reductions are unrolled, cast half to float during the `load` op, as `op_math_t<half>` is indeed float.
This fixes `test_unroll_small_reduction` for reduced-precision types.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,741,862
|
[c10d][fr] Record each individual collective being coalesced
|
fduwjj
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151245
* #151244
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,993,741,772
|
[c10d][fr] Enable FR analysis script for all fast-path coalesce op
|
fduwjj
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151245
* __->__ #151244
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,993,718,822
|
[c10d][fr] Enable FR analysis script for all fast-path coalesce op
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151238
* #151247
* __->__ #151243
This PR enables FR for all coalesce ops on the fast path. (Batch p2p is already enabled in the current script, so we will mainly focus on non-P2P ops.) To explain what the fast path is, let's revisit how coalesced collectives work today:
For non-P2P coalesced ops, there are several ways to call them (for legacy reasons):
- Way one: Directly call a python api like all_reduce_coalesced in python; this will be deprecated soon.
- Way two: Directly call the api inside PGNCCL, like allreduce_coalesced. Way one will eventually call into this. This is not deprecated and will not be deprecated, IIUC.
- Way three: Using _coalescing_manager in python, like:
```
with _coalescing_manager():
for i in range(num_colls):
dist.all_reduce(tensors[i])
```
This way has two paths:
- Fast path: when users call all-reduce, all-gather-into-tensor, or reduce-scatter, we only launch one big collective by calling the api from way one.
- Slow path: we call startCoalescing() at the beginning, then a bunch of collectives (each one generates an FR entry), and then endCoalescing(). Inside startCoalescing(), groupStart() is called, and inside endCoalescing(), groupEnd() is called. So although this ends up being one collective, we call into PGNCCL for each coalesced collective in the slow-path case.
- For uneven all-gather (allgather_v) and reduce-scatter, it follows the pattern mentioned for the slow path; it directly calls the cpp api inside PGNCCL.
This PR addresses the fast path because it is the easy case: we store the collectives' info on the python side, and we only call into PGNCCL once, so there is only one work and one FR entry. We can just treat them as a regular coalesced collective.
We add some e2e unit tests for the build_db function so that the change to FR is more thoroughly tested.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,993,717,540
|
[dynamic shapes] bound_sympy for size-oblivious min/max reasoning
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Differential Revision: D72978020
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,993,711,825
|
[BE][1/2] Move original_weights_lookup attribute to constant
|
bbeckca
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"fx",
"release notes: AO frontend"
] | 7
|
CONTRIBUTOR
|
Summary: As title. Cleaning up usages by using a global constant.
Test Plan: `buck test 'fbcode//mode/opt' fbcode//caffe2/test:quantization_fx -- --exact 'caffe2/test:quantization_fx - test_keep_original_weights (quantization.fx.test_quantize_fx.TestQuantizeFx)'`
Differential Revision: D72892815
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,993,709,224
|
[funcol] wait() breaks the chain for backwards
|
wconstab
|
open
|
[
"oncall: distributed",
"triaged"
] | 3
|
CONTRIBUTOR
|
As reported by @lw, and captured by this repro, the wait() operation appears to 'detach' its waited output tensor and prevent gradients from flowing during backwards. The result is that any upstream parameters will not receive the expected grads.
```python
import torch
import torch.distributed
import torch.distributed.tensor.parallel
torch.distributed.init_process_group(backend="nccl", rank=0, world_size=1, device_id=torch.device("cuda", 0), init_method="tcp://127.0.0.1:2743")
device_mesh = torch.distributed.device_mesh.DeviceMesh.from_group(torch.distributed.group.WORLD, "cuda")
emb = torch.nn.Embedding(128, 64).cuda()
emb = torch.distributed.tensor.parallel.parallelize_module(
emb, device_mesh, torch.distributed.tensor.parallel.RowwiseParallel()
)
w = torch.randn(64, device="cuda", requires_grad=True)
a = emb(torch.randint(0, 128, (1024,), device="cuda"))
b = a.wait()
c = b + w
c.pow(2).sum().backward()
print(f"{a.requires_grad=}")
print(f"{b.requires_grad=}")
print(f"{emb.weight.grad is None=}")
print(f"{w.grad is None=}")
# Output:
# a.requires_grad=True
# b.requires_grad=False
# emb.weight.grad is None=True
# w.grad is None=False
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
2,993,692,229
|
[inductor][take 2] Change minimum number of SMs to 58 to let L4 Ada use Triton GEMM backend
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148622
* __->__ #151239
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,689,827
|
[c10d][fr] Record each individual collective being coalesced
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151238
* #151247
* #151243
When recording FR for coalesced collectives we are not consistent: for P2P ops, we log individual collectives into FR, but for non-P2P ops, we don't. This PR makes non-P2P ops also log each individual collective into FR so that we can use the script to check the correctness of each of the coalesced collectives.
The added unit test also addresses the unit-test request in the comment in https://github.com/pytorch/pytorch/pull/150863?fbclid=IwZXh0bgNhZW0CMTEAAR4a5Rd_JyJlrbKZcacbIv5WX5b4MqBRNn0hpgl-VTSD0eeXRlPZ9Ty_CPOYhQ_aem_ALEG1ibRajwie-rn1B4n5w#pullrequestreview-2751254224.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,993,689,389
|
[inductor][take 2] Change minimum number of SMs to 58 to let L4 Ada use Triton GEMM backend
|
henrylhtsang
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|