| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
3,009,359,765
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE3_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE3_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40883955911).
Over the past 3 hours, it has been determined to be flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE3_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 588.12 MiB is free. Including non-PyTorch memory, this process has 21.46 GiB memory in use. Of the allocated memory 6.69 GiB is allocated by PyTorch, and 14.50 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE3_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,359,282
|
DISABLED test_matmul_layer_norm_dynamic_shapes_cpu (__main__.DynamicShapesCpuTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_matmul_layer_norm_dynamic_shapes_cpu&suite=DynamicShapesCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40881678238).
Over the past 3 hours, it has been determined to be flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_matmul_layer_norm_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 5640, in test_matmul_layer_norm
self.common(foo, (inp, weight), check_lowp=False)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 489, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 662, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 483, in run
def run(*ex, **kwargs):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1217, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 318, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
compiled_fn, args_, disable_amp=disable_amp, steal_args=True
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/utils.py", line 100, in g
return f(*args)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
RuntimeError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesCpuTests.test_matmul_layer_norm_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,359,280
|
DISABLED test_cublas_addmm_size_1000_cuda_bfloat16 (__main__.TestMatmulCudaCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: linear algebra",
"skipped"
] | 5
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cublas_addmm_size_1000_cuda_bfloat16&suite=TestMatmulCudaCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40883373157).
Over the past 3 hours, it has been determined to be flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cublas_addmm_size_1000_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_matmul_cuda.py", line 146, in test_cublas_addmm
self.cublas_addmm(size, dtype, False)
File "/var/lib/jenkins/workspace/test/test_matmul_cuda.py", line 132, in cublas_addmm
self.assertEqual(res_cpu, res_cuda)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 5523 / 1003002 (0.6%)
Greatest absolute difference: 7.25 at index (523, 18) (up to 0.1 allowed)
Greatest relative difference: 336.0 at index (321, 416) (up to 0.1 allowed)
To execute this test, run the following from the base repo dir:
python test/test_matmul_cuda.py TestMatmulCudaCUDA.test_cublas_addmm_size_1000_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_matmul_cuda.py`
cc @clee2000 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,009,358,174
|
[export] set is_exporting() for strict
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Helpful for upcoming work on figuring out when to use stack traces when prettifying dynamic shapes errors
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,321,840
|
Graph Partition Issue Tracker
|
BoyuanFeng
|
open
|
[
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
This issue tracks work items for graph partition which is a [feature](https://github.com/pytorch/pytorch/issues/125864) to increase cudagraph coverage. It splits off non-cudagraphable ops and cudagraphifies the remaining ops.
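For orientation, a minimal sketch of how one might try the feature, assuming the inductor graph-partition work is gated behind a `graph_partition` config flag (the flag name is an assumption inferred from the linked PRs, not confirmed here):
```python
import torch
import torch._inductor.config as inductor_config

# Assumption: the inductor-side partition is gated behind this config flag
# (inferred from the linked PRs); the exact name/default may differ by version.
inductor_config.graph_partition = True

@torch.compile(mode="reduce-overhead")  # "reduce-overhead" turns on cudagraphs in inductor
def f(x):
    # This toy graph has no cudagraph-unsafe ops, so it stays one cudagraph;
    # with partitioning, unsafe ops (e.g. ones touching CPU tensors) would be
    # split into their own partition while the rest is still cudagraphified.
    return (x * 2).sum()

print(f(torch.randn(32, device="cuda")))
```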
Features:
- [x] Inductor graph partition #147038
- [x] Cudagraph partition #147648
- [x] Dynamic shape inputs & outputs support #149458
- [x] `cudagraph_unsafe` custom ops support #149782
- [x] random number generator state support #150958
- [x] reorder to reduce the number of partitions for simple dependencies #150814
- [ ] improved reordering to reduce the number of partitions and peak memory #151968
Robustness:
- [x] Pass all inductor tests under [test_torchinductor.py](https://github.com/pytorch/pytorch/blob/main/test/inductor/test_torchinductor.py)
- [ ] Pass all cudagraph tests under [test_cudagraph_trees.py](https://github.com/pytorch/pytorch/blob/main/test/inductor/test_cudagraph_trees.py) #152048
cc @mcarilli @ezyang @eellison @penguinwu @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,009,312,779
|
[ONNX] Update ONNX on CI
|
titaiwangms
|
closed
|
[
"module: onnx",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Update ONNX version on CI (split from #151694 )
| true
|
3,009,295,944
|
[CUDA][TF32] Account for TF32 in `test_corrcoef`
|
eqy
|
closed
|
[
"module: cuda",
"module: complex",
"open source",
"Merged",
"module: tf32",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
cc @ptrblck @msaroufim @jerryzh168 @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames @zasdfgbnm
| true
|
3,009,287,485
|
profile for torch.add(x, x) where x is a zero-sized tensor looks bogus
|
zou3519
|
open
|
[
"oncall: profiler"
] | 6
|
CONTRIBUTOR
|
```py
from torch.profiler import profile, record_function, ProfilerActivity
import torch
x = torch.randn(0)
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
with record_function("model_inference"):
x + x
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```
Gives:
```
In [7]: print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::matmul 0.46% 8.994us 62.32% 1.213ms 606.382us 2
aten::dot 61.72% 1.201ms 61.86% 1.204ms 601.884us 2
model_inference 6.61% 128.555us 8.13% 158.251us 158.251us 1
aten::to 1.04% 20.242us 5.30% 103.077us 3.221us 32
aten::_to_copy 2.19% 42.586us 4.26% 82.835us 2.589us 32
aten::ones 2.08% 40.453us 2.87% 55.895us 13.974us 4
aten::add 2.32% 45.200us 2.59% 50.328us 12.582us 4
aten::abs 1.27% 24.757us 2.20% 42.744us 21.372us 2
aten::__lshift__ 0.67% 12.990us 1.76% 34.283us 34.283us 1
aten::pow 1.40% 27.282us 1.58% 30.817us 10.272us 3
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
```
which seems really bizarre
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
3,009,262,596
|
Add device check for inputs
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 8
|
CONTRIBUTOR
|
Summary: Generate device checks for inputs in AOTI. Enable with AOTI_RUNTIME_CHECK_INPUTS=1
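For context, a minimal sketch of how this might be exercised end to end, assuming the `torch.export` + `aoti_compile_and_package` flow (the env var is the one named in the summary; everything else is illustrative and may differ from the actual test):
```python
import os
import torch
from torch._inductor import aoti_compile_and_package, aoti_load_package

# From the summary above: opt in to the generated runtime input checks
# (device checks are what this change adds) before compiling.
os.environ["AOTI_RUNTIME_CHECK_INPUTS"] = "1"

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

ep = torch.export.export(M(), (torch.randn(4, device="cuda"),))
pkg_path = aoti_compile_and_package(ep)

compiled = aoti_load_package(pkg_path)
compiled(torch.randn(4, device="cuda"))   # matches the compiled device: ok
# compiled(torch.randn(4))                # CPU input: expected to fail the device check
```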
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r test_runtime_checks_device_type_failed
```
Differential Revision: D73382824
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,247,165
|
[export] warn when Dim.AUTO 0/1 specializes
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 15
|
CONTRIBUTOR
|
Fixes #151582
example warning for Dim.AUTO:
```
torch/_export/non_strict_utils.py:499] dimension inputs['x'].shape[1] 0/1 specialized; Dim.AUTO was specified along with a sample input with hint = 1.
```
example error when Dim.DYNAMIC specializes:
```
- Received user-specified dim hint Dim.DYNAMIC(min=None, max=None), but export 0/1 specialized due to hint of 0 for dimension inputs['x'].shape[0].
```
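For reference, a minimal repro-style sketch of the Dim.AUTO case that triggers the warning above (assuming the standard `torch.export` / `Dim.AUTO` API; the warning text is the one quoted earlier):
```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

# dim 1 has hint 1, so Dim.AUTO 0/1-specializes it; with this PR that
# specialization is surfaced as a warning instead of happening silently.
x = torch.randn(4, 1)
ep = export(M(), (x,), dynamic_shapes={"x": (Dim.AUTO, Dim.AUTO)})
print(ep)
```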
| true
|
3,009,238,130
|
[ONNX] Update decomposition logic to loop over onnx registry
|
titaiwangms
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: bug fixes"
] | 8
|
COLLABORATOR
|
Fixes #150367
This PR builds the decomposition table from the ONNX registry, which includes registered ops beyond just ATen and prim ops. This will help keep custom ops that are specified in the custom_translation table from being decomposed during ONNX export.
| true
|
3,009,225,842
|
[cutlass backend] Move cutlass compiled cache to cache_dir
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 8
|
CONTRIBUTOR
|
Moved "compiled_cache.db" to cache folder.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,206,644
|
[Sana][HybridCache] Fix bug in detect_attr_assignment
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: AO frontend"
] | 6
|
CONTRIBUTOR
|
Summary: tree_flatten_with_map internally calls the unflatten function with a user-supplied function, but that function was not returning anything, causing the leaves to be None. This is wrong when the constructor is sensitive to this behaviour.
Test Plan: CI
Differential Revision: D73388529
| true
|
3,009,151,254
|
Optimize printing sympy expressions during logging and cache key computation
|
laithsakka
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
repro:
```
import torch
def _cumsum(o):
ret = [0] * (len(o) + 1)
for i in range(len(o)):
ret[i + 1] = ret[i] + o[i]
return ret
@torch.compile(dynamic=True)
def func(o):
out = _cumsum(o)
return out
func([i for i in range(2000)])
```
We have a fast print implementation used in inductor here
https://github.com/pytorch/pytorch/blob/625b4edb975da25818eeae27cdbf9ba916973961/torch/_inductor/utils.py#L652-L667
maybe we can reuse it?
profile:
<img width="1490" alt="Image" src="https://github.com/user-attachments/assets/d2fb3148-c981-4365-ad0d-e75406bb45d2" />
https://fburl.com/scuba/pyperf_experimental/on_demand/vo6ru8ty
internal xref:
https://fb.workplace.com/groups/1075192433118967/permalink/23929961646604309/
Note: this part is disabled in the model compilation; we can enable it after we fix this.
Even though it's disabled there, we still see a ~10% cost for printing sympy expressions in full model compilation:
https://docs.google.com/document/d/1H-jueMz5VJuX6qVzyBl10OhlWWkxhAjp74JGtl7JhKg/edit?ouid=111904611073736927346&usp=docs_home&ths=true
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,009,147,832
|
Support more dtypes for input, indices in gather
|
isuruf
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151715
* __->__ #151822
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,138,412
|
Updates NCCLConfig with QOS variable
|
syed-ahmed
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151821
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,009,101,560
|
Pytorch aten::col2im not currently supported on the MPS backend
|
cats256
|
closed
|
[
"triaged",
"module: mps"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
The aten::im2col op was implemented, but the backward version, aten::col2im, is not.
```
import torch
import torch.nn.functional as F
device = "mps" if torch.backends.mps.is_available() else "cpu"
if __name__ == '__main__':
print("torch version:", torch.__version__)
tensor = torch.empty(4, 2, 40, 40, requires_grad=True).to(device)
unfolded_tensor = F.unfold(input=tensor, kernel_size=3, padding=1, stride=1)
loss = unfolded_tensor.sum()
loss.backward()
```
Output
```
torch version: 2.6.0
UserWarning: The operator 'aten::col2im' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at [/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:14](https://file+.vscode-resource.vscode-cdn.net/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:14).)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
```
PyTorch version: 2.6.0
Hardware: Apple M4 Air 10-core CPU 10-core GPU
### Alternatives
_No response_
### Additional context
aten::im2col (the forward version) was implemented here: https://github.com/pytorch/pytorch/issues/132711
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,009,086,925
|
[SymmMem] Add all_to_all_vdev
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151993
* __->__ #151819
* #151498
* #151261
- Merge in/out splits into one tensor
- Multi-block
- Use sync instead of barrier
- Use nvshmemx_collective_launch
- Rotate blocks among peers
- Write back input splits
- Parallel scan works
- Use scan for output offsets
- Use at most 16 blocks
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,009,078,319
|
use vectorized loads and stores for all datatypes in torch.cat
|
ngimel
|
open
|
[
"release notes: cuda"
] | 1
|
COLLABORATOR
|
Enable vectorized stores in cat whenever possible.
Unfortunately, cat on the last dim still struggles to reach peak bandwidth when the last-dim sizes are small, because writes from different threads are not coalesced. Still, it's about a 15% gain for the shapes that are supported and where vectorized reads alone weren't enough (where the catted slices are a multiple of the 16-byte alignment); dim0 cats remain approximately the same.
The kernel is pretty much a copy-paste of the CatArrayBatchedCopy_contig kernel, with regular loads/stores replaced by vectorized loads/stores, and the necessary adjustments made to the offset calculation to pretend that the tensor consists of alignment-sized elements.
TODO: additional testing; the test failure where some intermediate tensor was of size [1,1] with strides [1024, 1024] and thus took the vectorized path was pretty unexpected.
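For reference, a rough way to compare dim0 vs. last-dim cat bandwidth (a generic CUDA-event timing sketch; it is not tied to this PR's kernel, and the shapes are arbitrary):
```python
import torch

def bench_ms(fn, iters=50):
    # Simple CUDA-event timing; assumes a CUDA device is available.
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    fn()  # warmup / lazy init
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

xs = [torch.randn(4096, 37, device="cuda", dtype=torch.half) for _ in range(8)]
print("dim0 cat:     %.3f ms" % bench_ms(lambda: torch.cat(xs, dim=0)))
print("last-dim cat: %.3f ms" % bench_ms(lambda: torch.cat(xs, dim=-1)))
```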
| true
|
3,009,072,489
|
Save/load op profiles
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability",
"skip-url-lint"
] | 7
|
CONTRIBUTOR
|
Add ability to save/load op profiles into a yaml file:
```python
op_profile = self.get_sample_op_profile()
# Save
save_op_profiles(op_profile, "op_profile.yaml")
# Load
loaded = load_op_profiles("op_profile.yaml")
assert op_profile == loaded
```
| true
|
3,009,069,770
|
[easy] Fix test_dynamo_timed
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151816
Summary: The structured logging counter is a global that might have been affected by earlier tests. Clear it explicitly.
Fixes #148093
Test Plan: `pytest test/dynamo/test_utils.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,009,045,638
|
Ensure runners have the required prefix
|
ZainRizvi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Clone changes from https://github.com/pytorch/pytorch/pull/151696/ since that PR wouldn't merge
| true
|
3,009,042,979
|
[MergeBot] Update PullRequestResolved Regex
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
By copying an updated one from https://github.com/ezyang/ghstack/commit/cff091f3f3a598c36eb4ca99622833e1011d6fbc
| true
|
3,009,038,580
|
Back out "Do not propagate real tensor in extern kernel"
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary:
D73002775 breaks aot_compile for many draft exported models on PT2I dashboard. Revert.
Example error msg:
```
OrderedSet([]) >= OrderedSet([u1185, u1186, u1187]) (inductor >= fx)
fx node is: %embedding_bag_byte_prepack : [num_users=4] = call_function[target=torch.ops.quantized.embedding_bag_byte_prepack.default](args = (%view_10,), kwargs = {})
new operations are:
```
Differential Revision: D73381032
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,031,983
|
[CUDA][CPU] Bump system memory requirement for `test_cross_entropy_large_tensor`
|
eqy
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
`/usr/bin/time` seems to show max resident pages at 119GiB
cc @ptrblck @msaroufim @jerryzh168
| true
|
3,008,980,915
|
[CUDA][MXFP8] bump tolerances for `test_blockwise_mxfp8_nvfp4_numerics`
|
eqy
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"matrix multiplication",
"module: float8"
] | 5
|
COLLABORATOR
|
got a slightly lower sqnr on a smaller GPU
cc @ptrblck @msaroufim @jerryzh168 @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
3,008,977,635
|
StringCordView: make iterator fast when there is only one piece
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* __->__ #151810
* #151807
* #151806
* #151805
* #151804
* #151803
* #151802
* #151801
This makes the StringCordView iterator a variant holding
either the existing implementation (when there is more than one piece)
or a simple `std::string_view::iterator` (when there is only one
piece). The latter seems to be significantly cheaper.
Differential Revision: [D73379178](https://our.internmc.facebook.com/intern/diff/D73379178/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,008,941,848
|
[export] deserialization for unbacked ranges is wrong
|
pianpwk
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
ShapeEnv range info is wrong for unbacked symbols after we deserialize, with a lower bound of 2:
```
import io
import torch
from torch.export import export, save, load
class Foo(torch.nn.Module):
def forward(self, x):
n = x.item()
return torch.empty(n)
ep = export(Foo(), (torch.tensor([5]),))
buffer = io.BytesIO()
save(ep, buffer)
buffer.seek(0)
loaded_ep = load(buffer)
# pre-serialize ep
print("pre-serialize")
shape_env = torch._guards.detect_fake_mode([
node.meta.get("val") for node in ep.graph.nodes
]).shape_env
print(shape_env.var_to_range)
# deserialized ep
print("deserialized")
shape_env = torch._guards.detect_fake_mode([
node.meta.get("val") for node in loaded_ep.graph.nodes
]).shape_env
print(shape_env.var_to_range)
```
we get:
```
pre-serialize
{u0: VR[0, int_oo]}
deserialized
{u0: VR[2, int_oo]}
```
This happens because we were blindly clamping lower bounds for all symbols (this was intended just for backed symbols, so users could specify min=0 or 1): https://github.com/pytorch/pytorch/blob/0f8613bf5cbdd7a2af5c46e6fa1adda35c69db8d/torch/_export/serde/serialize.py#L2171
But this was conveniently helping us get past 0/1 data-dependent errors when deserializing tensor values (in empty_strided calls), which were never exposed. A more correct fix could be to save and load size-like info, and deserialize node-by-node, storing runtime asserts in the ShapeEnv as needed. Or we could just start serializing the ShapeEnv in full, so we stop running into such information loss issues.
### Versions
Collecting environment information...
PyTorch version: 2.8.0a0+git1a48382
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 92
On-line CPU(s) list: 0-91
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 92
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 5.8 MiB (92 instances)
L1i cache: 5.8 MiB (92 instances)
L2 cache: 46 MiB (92 instances)
L3 cache: 1.4 GiB (92 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-91
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] adam-atan2-pytorch==0.1.1
[pip3] alphafold3-pytorch==0.6.6
[pip3] bert_pytorch==0.0.1a4
[pip3] ema-pytorch==0.7.3
[pip3] executorch==0.4.0.dev20240809+cpu
[pip3] flake8==7.1.1
[pip3] frame-averaging-pytorch==0.1.2
[pip3] lion-pytorch==0.2.2
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.18.1
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.0
[pip3] onnxscript==0.3.0.dev20250225
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.13.1
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-lightning==2.0.7
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] pytorch-triton==3.0.0+45fff310c8
[pip3] rotary-embedding-torch==0.8.5
[pip3] torch==2.8.0a0+git1a48382
[pip3] torch_geometric==2.4.0
[pip3] torch-mlir==20241017.255
[pip3] torch-stoi==0.2.1
[pip3] torch_tensorrt==2.6.0
[pip3] torchao==0.10.0+git7d879462
[pip3] torchaudio==2.6.0.dev20250131+cpu
[pip3] torchdiffeq==0.2.4
[pip3] torchmetrics==1.0.3
[pip3] torchrec==0.9.0a0+5e30669
[pip3] torchsde==0.2.6
[pip3] torchsr==1.0.4
[pip3] torchtext==0.18.0
[pip3] torchtune==0.5.0
[pip3] torchtyping==0.1.5
[pip3] torchvision==0.22.0a0+fab1188
[pip3] torchx==0.7.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,008,934,242
|
[BE] Move aarch64 docker build to larger node
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
They happen once a week or so; not sure why they need to run on the slowest machine possible.
| true
|
3,008,917,707
|
Fix missing moves in SchemaTypeParser::parseFakeAndRealType
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* __->__ #151807
* #151806
* #151805
* #151804
* #151803
* #151802
* #151801
Was seeing a small amount of shared_ptr traffic from these.
The std::move(text) at the top is just a piggyback.
Differential Revision: [D73376720](https://our.internmc.facebook.com/intern/diff/D73376720/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,008,917,624
|
Fix a missed c10::TypeFactory::create spot in function_schema_parser
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* #151807
* __->__ #151806
* #151805
* #151804
* #151803
* #151802
* #151801
Looks like we are supposed to be using TypeFactory instead of direct creation everywhere that might run on mobile.
Differential Revision: [D73376716](https://our.internmc.facebook.com/intern/diff/D73376716/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,008,917,530
|
Fix easy missing moves in function_schema_parser
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* #151807
* #151806
* __->__ #151805
* #151804
* #151803
* #151802
* #151801
Just some straightforward not-moving-upon-return.
Differential Revision: [D73376718](https://our.internmc.facebook.com/intern/diff/D73376718/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,008,917,434
|
Add & use Token::text_view() (which returns a string_view unlike text())
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* #151807
* #151806
* #151805
* __->__ #151804
* #151803
* #151802
* #151801
Sadly, I can't just fix text() because that might cause lifetime issues in somebody's code.
Differential Revision: [D73376715](https://our.internmc.facebook.com/intern/diff/D73376715/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,008,917,351
|
Fix return type of TypeFactoryBase<c10::DynamicType>::get
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* #151807
* #151806
* #151805
* #151804
* __->__ #151803
* #151802
* #151801
getBaseType() actually returns a reference. This was causing shared_ptr copies.
Differential Revision: [D73376717](https://our.internmc.facebook.com/intern/diff/D73376717/)
| true
|
3,008,917,238
|
Create and use DynamicTypes for check in DispatchKeyExtractor::makeBitsetForDispatchArgs
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* #151807
* #151806
* #151805
* #151804
* #151803
* __->__ #151802
* #151801
On mobile, many but not all things in the JIT type subsystem start using DynamicType. Not using DynamicType was imposing a startup time cost here, as explained in the comment.
Differential Revision: [D73129442](https://our.internmc.facebook.com/intern/diff/D73129442/)
| true
|
3,008,917,147
|
Don't copy DynamicType argument to DynamicType::create
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* #151807
* #151806
* #151805
* #151804
* #151803
* #151802
* __->__ #151801
This improves performance of DynamicType::isSubtypeOfExt.
Differential Revision: [D73129449](https://our.internmc.facebook.com/intern/diff/D73129449/)
| true
|
3,008,917,062
|
Fix extra heap allocation in Source constructor
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* #151807
* #151806
* #151805
* #151804
* #151803
* #151802
* #151801
* __->__ #151800
* #151682
This was a sneaky one: the StringCordView default constructor allocates.
Differential Revision: [D73129448](https://our.internmc.facebook.com/intern/diff/D73129448/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,008,913,954
|
Expanding subset of tensor reads wrong memory
|
martenlienen
|
open
|
[
"triaged",
"module: correctness (silent)",
"bug",
"oncall: pt2",
"module: dynamic shapes"
] | 7
|
NONE
|
### 🐛 Describe the bug
I have derived the following minimal failing example:
```python
import torch
def expand(x, n):
return x.expand((n,))
@torch.compile()
def f(n: int, device: str):
numbers = torch.arange(10, device=device)
for i in range(len(numbers)):
expanded = expand(numbers[i], n)
print(expanded[0])
device = "cuda"
f(1, device)
print()
f(2, device)
```
This should print the integers from 0 to 9 twice, but what you get instead is
```
tensor(0, device='cuda:0')
tensor(1, device='cuda:0')
tensor(2, device='cuda:0')
tensor(3, device='cuda:0')
tensor(4, device='cuda:0')
tensor(5, device='cuda:0')
tensor(6, device='cuda:0')
tensor(7, device='cuda:0')
tensor(8, device='cuda:0')
tensor(9, device='cuda:0')
tensor(0, device='cuda:0')
tensor(0, device='cuda:0')
tensor(2, device='cuda:0')
tensor(0, device='cuda:0')
tensor(4, device='cuda:0')
tensor(0, device='cuda:0')
tensor(6, device='cuda:0')
tensor(0, device='cuda:0')
tensor(8, device='cuda:0')
tensor(0, device='cuda:0')
```
The specific values of `n` are not important, only that they differ. If you use a `linspace` instead of an `arange`, the pattern is different. Then it prints the first value of the `linspace` in every iteration except every 5th, where it prints the correct value (at least with `dtype=torch.float32`). If I inline the definition of `expand`, the bug disappears. It only happens on CUDA devices. If you set `device = "cpu"`, it does not happen. If you don't compile `f`, it also does not happen. If we `.clone()` `numbers[i]`, it also does not happen.
While this example `print`s to show the bug, I have also observed it without `print` in my sampling code (only every 5th generated sample was not trash).
### Error logs
[dedicated_log_torch_trace_1jeap82o.log](https://github.com/user-attachments/files/19837453/dedicated_log_torch_trace_1jeap82o.log)
[tl_out.tar.gz](https://github.com/user-attachments/files/19837462/tl_out.tar.gz)
### Versions
I have confirmed this bug on 2.5.1, 2.6.0 and today's nightly.
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
MIG 3g.40gb Device 0:
Nvidia driver version: 535.230.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6336Y CPU @ 2.40GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 6
CPU max MHz: 2400.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
L1d cache: 2.3 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 60 MiB (48 instances)
L3 cache: 72 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-11,48-59
NUMA node1 CPU(s): 12-23,60-71
NUMA node2 CPU(s): 24-35,72-83
NUMA node3 CPU(s): 36-47,84-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] Could not collect
[conda] Could not collect
```
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,008,878,859
|
[c10d][fr] Fix another bug when we should continue when the op list is empty
|
fduwjj
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Differential Revision: D73375318
We shouldn't check the op list when it is empty. Later, when it is empty and we pop it out of the queue, we will check for collective matching. Added a unit test for this case, and also covered the case fixed in https://github.com/pytorch/pytorch/pull/151683 in the same unit test.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
3,008,798,100
|
Rename register_fake_profile to unsafe_generate_fake_kernels
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Fixes https://docs.google.com/document/d/1BZsuUR1zJ-52Y7wP4yWX8beB4dwYbgdu5o1qKam_iWg/edit?disco=AAABiJdX1XU
| true
|
3,008,778,456
|
Update docs dependencies for local build
|
svekars
|
closed
|
[
"module: docs",
"Merged",
"ciflow/trunk",
"topic: docs",
"topic: not user facing"
] | 17
|
CONTRIBUTOR
|
Fixes #151786
- Changed requirements.txt to a symlink to .ci/docker/requirements-docs.txt
- Updated README.md with better doc build instructions.
cc @sekyondaMeta @AlannaBurke
| true
|
3,008,720,382
|
Deduplicate library deletion
|
angelayi
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/pull/151299#issuecomment-2807160080
| true
|
3,008,680,172
|
[BE]: Better cleanup optimized code from #151474
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
This change addresses the first/second time/mem "spike" observed in https://github.com/pytorch/pytorch/issues/151351. It improves on #151474 by removing unnecessary stride calculations and unused arguments to the helper function.
Fixes https://github.com/pytorch/pytorch/issues/151351
| true
|
3,008,647,092
|
Create decomp for searchsorted
|
justinchuby
|
open
|
[
"module: onnx",
"triaged"
] | 0
|
COLLABORATOR
|
In https://github.com/pytorch/pytorch/issues/151648#issuecomment-2817662679 the model cannot be exported to ONNX because a decomp was missing for searchsorted. Looks like a decomp can be created according to the comments.
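A rough sketch of what such a decomp could look like in plain tensor ops (a sketch only; it ignores the `sorter`, `out_int32`, and batched `sorted_sequence` variants of the real op):
```python
import torch

def searchsorted_decomp(sorted_sequence: torch.Tensor,
                        values: torch.Tensor,
                        right: bool = False) -> torch.Tensor:
    # For a 1-D sorted_sequence, the insertion index of each value is the number
    # of elements that compare less (right=False) or less-or-equal (right=True).
    if right:
        mask = sorted_sequence <= values.unsqueeze(-1)
    else:
        mask = sorted_sequence < values.unsqueeze(-1)
    return mask.sum(dim=-1)

seq = torch.tensor([1.0, 3.0, 5.0, 7.0])
vals = torch.tensor([0.0, 3.0, 6.0, 9.0])
print(searchsorted_decomp(seq, vals))  # tensor([0, 1, 3, 4])
print(torch.searchsorted(seq, vals))   # should match
```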
| true
|
3,008,571,754
|
Add NCCL trafficClass option for QoS support
|
x41lakazam
|
closed
|
[
"oncall: distributed",
"open source",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,008,534,436
|
[MPS] Enable log1p and sigmoid for int64
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151791
* #151790
It works on MacOS-15, but likely will need a skip for MacOS-13
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,008,534,331
|
[Testing] Unskip expm1 log1p for MPS
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151791
* __->__ #151790
But don't test them for unsupported dtypes (which is float64 for MPS)
- Skip int64 for log1p for now (next PR will fix that)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,008,337,473
|
[Dynamo] Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/iter.py`
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 9
|
CONTRIBUTOR
|
Part of #147913
Replace `unimplemented` with`unimplemented_v2` in `torch/_dynamo/variables/iter.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,008,309,210
|
[standalone_compile] Dynamic shape handling
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: AO frontend"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151788
standalone_compile needs to get dynamic shape information from
somewhere. We add a new `dynamic_shapes` argument with three options:
1. from the passed-in graph (dynamic="from_graph"). This is the default.
2. from the example inputs, thereby specializing on them. (dynamic="from_example_inputs")
3. from the current tracing context (dynamic="from_tracing_context")
1 and 3 are not exactly the same. 2 can also be used for more advanced
things... (specialize on one input but not the other).
Most of this PR is tests.
Test Plan:
- a lot of new tests.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,008,229,330
|
Fix doc requirements install error
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fixes #151786
Change the versions in the docs requirements to be consistent with the versions in the [CI version file](https://github.com/pytorch/pytorch/blob/main/.ci/docker/requirements-docs.txt), which changed in #149331.
### Test Result

| true
|
3,008,223,002
|
Fail to install document dependency locally
|
zeshengzong
|
closed
|
[
"module: docs",
"module: ci",
"triaged"
] | 4
|
CONTRIBUTOR
|
### 📚 The doc issue
Installing the docs dependencies produces the following errors:
```bash
# pytorch/doc
pip install -r requirements.txt
```

### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,008,202,259
|
Optimize register_full_backward_hook description when all input no grad
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: autograd"
] | 4
|
CONTRIBUTOR
|
Fixes #100528
## Test Result
### Before

### After

| true
|
3,008,197,361
|
Fix the Inconsistency and Description of `device_type` in `torch.random.fork_rng()`
|
ILCSFNO
|
closed
|
[
"triaged",
"module: backend"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The doc of [torch.random.fork_rng()](https://pytorch.org/docs/stable/random.html#torch.random.fork_rng) shows its description as below:
https://github.com/pytorch/pytorch/blob/bf28d1cafc6ab3ea94856e5891be1b5e8a37d83c/torch/random.py#L146-L147
There are 2 issues that I wonder about:
First, no link is given after `[Note: support the custom device with privateuse1]`; something related is found [here](https://pytorch.org/docs/stable/torch.html#accelerators). Maybe there are some notes, but they are not linked?
### Suggestions 1
* If some notes about privateuse1 actually exist, link them from `[Note: support the custom device with privateuse1]`, like `[Note: support the custom device with privateuse1](Website Here)`
* If not, add some notes [here](https://github.com/pytorch/pytorch/tree/main/docs/source/notes) or in another relative path and link them from the docs, like `[Note: support the custom device with privateuse1](Website Here)`
Second, also about `device_type`: a description could be added showing the other devices that can be used, like issue #149722 and its PR https://github.com/pytorch/pytorch/pull/149770
Some repros are below, but I can't actually verify them; I wonder whether no error is shown:
### Repro 1
```python
import torch
# torch.random.fork_rng(device_type='cpu')
torch.random.fork_rng(device_type='aaaaa')
```
### Output 1
```text
<contextlib._GeneratorContextManager at 0x7f6dd9c2fa90>
```
The deeper code is here:
https://github.com/pytorch/pytorch/blob/bf28d1cafc6ab3ea94856e5891be1b5e8a37d83c/torch/random.py#L125-L160
So I tried another repro:
### Repro 2
```python
import torch
# def
device_type='aaaaa'
# code deeper
if device_type == "meta":
pass
device_type = torch.device(device_type).type
device_mod = getattr(torch, device_type, None)
if device_mod is None:
raise RuntimeError(
f"torch has no module of `{device_type}`, you should register "
+ "a module by `torch._register_device_module`."
)
```
### Output 2
```text
RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, maia, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone device type at start of device string: aaaaa
```
### Suggestions 2
* Resolve the mismatch between `Repro 1` and `Repro 2` (to tell the truth, I don't know what to do here, because why the status of `Repro 1` differs from `Repro 2` is confusing to me)
* Add a description of the available `device_type` values to the docs
Thanks for noting!
### Versions
Nightly
cc @bdhirsh
| true
|
3,008,038,581
|
We could not debug inside the backward function with pdb
|
BraveDrXuTF
|
closed
|
[
"module: autograd",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
Even if we use detect_anomaly,
```
loss = output.mean()
with torch.autograd.detect_anomaly():
loss.backward()
print("Backward pass completed.")
```
we can only get abstract error info like this:
```
with torch.autograd.detect_anomaly():
Traceback (most recent call last):
File "test.py", line 48, in <module>
loss.backward()
File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 525, in backward
torch.autograd.backward(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 267, in backward
_engine_run_backward(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
and it does not tell me which line of the code I wrote went wrong.
### Versions
PyTorch version: 2.3.0a0+6ddf5cf85e.nv24.04
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] cudnn==1.1.2
[pip3] numpy==1.24.4
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.11.0
[pip3] pynvjitlink==0.1.13
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==3.0.0+a9bc1a364
[pip3] torch==2.3.0a0+6ddf5cf85e.nv24.4
[pip3] torch-tensorrt==2.3.0a0
[pip3] torchdata==0.7.1a0
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.18.0a0
[conda] No relevant packages
cc @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan
| true
|
3,007,955,984
|
JVP: Option to Disable Gradient Caching for Tangents
|
qsh-zh
|
open
|
[
"triaged",
"module: functorch"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
I'm requesting a new option for `torch.func.jvp` to disable gradient caching and tracking specifically for the tangent output without affecting the primal output.
Currently, when using `torch.func.jvp(fn, primals, tangents)`, the JVP output requires gradients by default, which causes it to cache activations unnecessarily. In many use-cases like mine, the JVP tangent vectors are used for auxiliary calculations but are not part of the computation graph for backpropagation. This leads to:
1. Unnecessary memory usage from cached activations for the tangent calculations
2. No way to selectively disable gradient tracking for just the tangent part
I propose adding an optional parameter to `torch.func.jvp`, perhaps named `track_tangent_grad=True` (defaulting to True for backward compatibility), that would allow users to disable gradient tracking specifically for the JVP output without affecting the primal output and without requiring multiple forward passes.
Example of desired usage:
```python
# Current behavior
output, jvp = func.jvp(layer, (x, ), (v, ))
assert output.requires_grad == True
assert jvp.requires_grad == True # Caches activations unnecessarily
# Proposed behavior
output, jvp = func.jvp(layer, (x, ), (v, ), track_tangent_grad=False)
assert output.requires_grad == True
assert jvp.requires_grad == False # No activation caching
```
### Alternatives
Currently, I have to use workarounds like:
1. Detaching the JVP output after calculation (`jvp.detach()`), which works but is less efficient than not caching in the first place (see the sketch after this list)
2. Using separate forward passes (`layer(x)` and then a separate JVP calculation with `no_grad`), which increases computation
3. Complex wrappers around the JVP function, which reduce code clarity
None of these solutions are ideal, as they either increase computation or don't fully prevent the unnecessary caching during the forward pass.
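A minimal sketch of workaround 1 (my own example; `layer`, `x`, and `v` mirror the naming used above):
```python
import torch
import torch.func as func

layer = torch.nn.Linear(8, 8)
x = torch.randn(4, 8)
v = torch.randn(4, 8)

output, jvp = func.jvp(layer, (x,), (v,))
jvp = jvp.detach()   # drops grad tracking after the fact, but activations were already cached

assert output.requires_grad
assert not jvp.requires_grad
```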
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
3,007,912,715
|
[MPS] Move ops modifiers to testing utils so other tests can reuse
|
qqaatw
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151177
* __->__ #151781
Test collection check on macOS 13.7.1:
```
python -m pytest test/test_mps.py --collect-only
python -m pytest -v test/test_mps.py::TestConsistencyCPU
```
Before:
```
6390 tests collected in 8.34s
3936 passed, 205 skipped, 1306 xfailed in 1570.34s (0:26:10)
```
After:
```
6390 tests collected in 7.71s
3936 passed, 205 skipped, 1306 xfailed in 1631.11s (0:27:12)
```
| true
|
3,007,897,385
|
Horizontal
|
sunjiweiswift
|
open
|
[
"triaged",
"open source",
"module: inductor"
] | 2
|
NONE
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,833,710
|
[Inductor Remote Cache] Raise an exception if redis module is required but not available
|
ChuanqiXu9
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 13
|
CONTRIBUTOR
|
If we need redis but redis is not available, it is better to tell the user to install redis instead of continuing silently.
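For illustration, the intended behavior is roughly the following (a hedged sketch, not the actual Inductor code; the real message and exception type may differ):
```python
try:
    import redis  # noqa: F401
except ImportError as e:
    raise RuntimeError(
        "A Redis remote cache was requested, but the `redis` package is not installed. "
        "Install it with `pip install redis` or disable the remote cache."
    ) from e
```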
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,766,608
|
Normalize dynamic size symbols in template codegen cache key.
|
laithsakka
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150869
* __->__ #151778
* #151773
* #151764
If we have the tensor products (s0, 1)*(1, s0) and (s1, 1)*(1, s1), we currently generate the same code during mm auto-tuning when expanding the mm_template. Even though the generated code does NOT depend on the input symbol names, we currently cache miss because the input sizes are part of the cache key.
This diff normalizes the input sizes in the cache key so that (s0, 1) and (s1, 1) both appear as (normalized_0, 1), and we therefore cache hit.
This pattern exists in the compiled full model.
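An illustrative sketch of the normalization (my own toy code, not the actual Inductor implementation; symbols are represented as strings here):
```python
def normalize_sizes(sizes):
    # Map dynamic symbols like "s0"/"s1" to position-stable placeholders so that
    # layouts differing only in symbol names produce the same cache key.
    mapping = {}
    normalized = []
    for s in sizes:
        if isinstance(s, str) and s.startswith("s"):
            mapping.setdefault(s, f"normalized_{len(mapping)}")
            normalized.append(mapping[s])
        else:
            normalized.append(s)
    return tuple(normalized)

assert normalize_sizes(("s0", 1)) == normalize_sizes(("s1", 1)) == ("normalized_0", 1)
```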
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,713,127
|
enable windows inductor UT in CI
|
yuchengliu1
|
open
|
[
"open source",
"ciflow/trunk",
"release notes: releng",
"module: dynamo",
"ciflow/inductor",
"ciflow/xpu"
] | 4
|
NONE
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,007,694,022
|
[dynamo] Some inefficiencies around handling __torch_function__
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I was looking at reducing compile time for a GGUF SD model (https://github.com/pytorch/pytorch/issues/150706) and found some inefficiencies around `__torch_function__`. The model relies heavily on torch function.
Testing on a single transformer layer, I was able to reduce Dynamo time from 3 seconds to below 2 seconds with this patch. I am not confident that this patch is good; it would be great if someone could take it over.
I tried to reduce the number of times `__torch_function__` was called. For torch.compile, it was getting called 3x more often, and a majority of these calls were not being traced by Dynamo, which hinted that they were coming from the compiler itself (for example, checks like calling `x._is_view()` can themselves invoke `__torch_function__`).
```
diff --git a/torch/_dynamo/output_graph.py b/torch/_dynamo/output_graph.py
index 2dc2209a2c6..6228ec0d949 100644
--- a/torch/_dynamo/output_graph.py
+++ b/torch/_dynamo/output_graph.py
@@ -1471,7 +1471,8 @@ class OutputGraph:
self.tracing_context.fake_mode = backend_fake_mode
with self.restore_global_state():
- compiled_fn = self.call_user_compiler(gm)
+ with torch._C.DisableTorchFunction():
+ compiled_fn = self.call_user_compiler(gm)
from torch.fx._lazy_graph_module import _LazyGraphModule
diff --git a/torch/_dynamo/polyfills/tensor.py b/torch/_dynamo/polyfills/tensor.py
index 002ccf5d1d4..b3d81036ab3 100644
--- a/torch/_dynamo/polyfills/tensor.py
+++ b/torch/_dynamo/polyfills/tensor.py
@@ -11,25 +11,26 @@ from ..decorators import substitute_in_graph
def make_subclass(
cls: type[Any], data: torch.Tensor, requires_grad: bool = False, **kwargs: Any
) -> Any:
- # This is a rough approximation of `THPVariable_make_subclass`. It should
- # suffice for most of Dynamo tracing purposes.
- # https://github.com/pytorch/pytorch/blob/ccfde4dadfa3c342076a1ee387017f84dd4ad2f7/torch/csrc/autograd/python_variable.cpp#L597-L650
- assert len(kwargs) == 0, "_make_subclass only supports requires_grad as keyword arg"
- data = data.detach()
-
- # Avoid unnecessary `requires_grad` mutation, which isn't supported in Dynamo.
- if data.requires_grad != requires_grad:
- data.requires_grad = requires_grad
-
- # Dynamo can't yet handle upcasting to base tensor type via `as_subclass`.
- if cls is torch.Tensor:
- return torch.Tensor(data)
-
- # Calling `as_subclass` because
- # 1. Dynamo knows how to handle it
- # 2. the C impls match at this point -- both `THPVariable_make_subclass` and
- # `THPVariable_as_subclass` calls `THPVariable_NewWithVar`.
- return data.as_subclass(cls)
+ with torch._C.DisableTorchFunctionSubclass():
+ # This is a rough approximation of `THPVariable_make_subclass`. It should
+ # suffice for most of Dynamo tracing purposes.
+ # https://github.com/pytorch/pytorch/blob/ccfde4dadfa3c342076a1ee387017f84dd4ad2f7/torch/csrc/autograd/python_variable.cpp#L597-L650
+ assert len(kwargs) == 0, "_make_subclass only supports requires_grad as keyword arg"
+ data = data.detach()
+
+ # Avoid unnecessary `requires_grad` mutation, which isn't supported in Dynamo.
+ if data.requires_grad != requires_grad:
+ data.requires_grad = requires_grad
+
+ # Dynamo can't yet handle upcasting to base tensor type via `as_subclass`.
+ if cls is torch.Tensor:
+ return torch.Tensor(data)
+
+ # Calling `as_subclass` because
+ # 1. Dynamo knows how to handle it
+ # 2. the C impls match at this point -- both `THPVariable_make_subclass` and
+ # `THPVariable_as_subclass` calls `THPVariable_NewWithVar`.
+ return data.as_subclass(cls)
__all__ = [
diff --git a/torch/_dynamo/variables/builder.py b/torch/_dynamo/variables/builder.py
index 6e80e1ef563..b05f2403f56 100644
--- a/torch/_dynamo/variables/builder.py
+++ b/torch/_dynamo/variables/builder.py
@@ -425,7 +425,8 @@ class VariableBuilder:
if cached_vt:
return cached_vt
- vt = self._wrap(value)
+ with torch._C.DisableTorchFunctionSubclass():
+ vt = self._wrap(value)
vt.source = self.source
if (
self._can_lift_attrs_to_inputs(vt)
diff --git a/torch/_dynamo/variables/torch_function.py b/torch/_dynamo/variables/torch_function.py
index dd7f6fa1f53..c376e2a745f 100644
--- a/torch/_dynamo/variables/torch_function.py
+++ b/torch/_dynamo/variables/torch_function.py
@@ -628,7 +628,7 @@ class TensorWithTFOverrideVariable(TensorVariable):
# Handle non-overriden attributes inherited from `torch.Tensor`.
attr_is_overriden = _is_attr_overidden(tx, self, name)
- if hasattr(torch.Tensor, name) and not attr_is_overriden:
+ if hasattr(torch.Tensor, name) and not attr_is_overriden and not inspect.ismethoddescriptor(getattr(torch.Tensor, name)):
if tx.output.torch_function_enabled:
if self.source:
install_guard(
```
Changing my "eager" backend to do this might have helped as well
```
def my_backend(gm, *args):
def inner(*n_args):
with torch._C.DisableTorchFunction():
return gm.forward(*n_args)
return inner
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @StrongerXi @mlazos
### Error logs
_No response_
### Versions
NA
| true
|
3,007,684,847
|
[Inductor] Modify TritonTemplate store_output function to support TMA stores
|
NikhilAPatel
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151775
* #151774
Summary:
The `store_output` macro -- used in Triton templates to generate the Triton kernel code that stores the output via `tl.store` -- has been modified to support TMA-based stores.
This allows kernels using TMA stores to benefit from Inductor epilogue fusion. Additionally, it is now much easier to add TMA stores to existing kernels.
The persistent + TMA mm template was updated to use this logic.
Test Plan:
contbuild and OSS CI
Reviewers: paulzhan
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,684,760
|
[Inductor] Modify persistent+TMA template for Triton mm and admm to use new TMA API
|
NikhilAPatel
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151775
* __->__ #151774
Summary:
This PR modifies the Triton template for persistent+TMA mm and admm to use the new functional API for TMA introduced here: https://github.com/triton-lang/triton/pull/6248/
This also involves setting a global Triton allocator function to be called at kernel launch for any kernels that require additional global memory workspace. This is done in triton_heuristics.py directly before kernels are launched.
Test Plan:
contbuild & OSS CI
Reviewers: paulzhan
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,662,223
|
Cache code generation during triton template expansion and enable it for mm_template.
|
laithsakka
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151773
In one model we see roughly 40% of compile time spent in mm/addmm tuning. The model has 2000 mms, many of which receive the same input shapes.
With autotune enabled this becomes expensive: while we already cache autotuning results, we did not cache the generation of the Python code and its loading for each config that we autotune over.
This diff handles the code-generation part (template expansions); a previous diff handled the loading part.
This is expected to save about 20% of compile time on the model I am working on.
How do we do the caching?
For a given config and input layout, the generated code is always the same. One caveat is that some other information collected during code generation is input dependent (it depends on input names and on the symbol names in the inputs), not just on the layout.
To handle those we use a record-and-replay approach: we record the functions that are called during code generation and affect those outputs, and replay them on a cache hit.
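A toy sketch of the record-and-replay idea (my own illustration, not the Inductor code; in the real implementation the replayed calls are re-bound to the new inputs rather than replayed verbatim):
```python
_codegen_cache = {}

def cached_generate(key, codegen):
    # `codegen(record)` returns the generated source and calls `record(fn, *args)`
    # for any input-dependent bookkeeping it performs along the way.
    if key in _codegen_cache:
        code, recorded = _codegen_cache[key]
        for fn, args in recorded:   # cache hit: replay the recorded side effects
            fn(*args)
        return code
    recorded = []
    def record(fn, *args):
        recorded.append((fn, args))
        fn(*args)
    code = codegen(record)          # cache miss: generate once and remember the calls
    _codegen_cache[key] = (code, recorded)
    return code
```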
Effect on the current benchmark (local run on a dev server):
mm_loop: 24115830838 -> 18362098019
mm_loop_dynamic: 30506097176 -> 25697270062
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,660,921
|
[Inductor] Modify persistent+TMA template for Triton mm and admm to use new TMA API
|
NikhilAPatel
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151772
Summary:
This PR modifies the Triton template for persistent+TMA mm and admm to use the new functional API for TMA introduced here: https://github.com/triton-lang/triton/pull/6248/
This also involves setting a global Triton allocator function to be called at kernel launch for any kernels that require additional global memory workspace. This is done in triton_heuristics.py directly before kernels are launched.
Test Plan:
contbuild & OSS CI
Reviewers: paulzhan
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,629,416
|
Graph break on .t() when Tensor._make_subclass
|
KareemMusleh
|
open
|
[
"triaged",
"oncall: pt2",
"dynamo-triage-jan2025"
] | 2
|
NONE
|
### 🐛 Describe the bug
This is similar to #150265.
```python
from torch import nn
import torch
torch_compile_options = {
"epilogue_fusion" : True,
"max_autotune" : True,
"shape_padding" : True,
"trace.enabled" : True,
"triton.cudagraphs" : False,
}
class a(nn.Linear):
def __init__(self, b):
super().__init__(128, 128)
self.b = b
class b(nn.Parameter):
def __new__(cls, data):
self = torch.Tensor._make_subclass(cls, data)
return self
A = a(b(torch.randn(12, 12)))
@torch.compile(fullgraph = True, dynamic = True, options = torch_compile_options)
def test():
out = 3 * A.b.t()
return out
test()
```
### Versions
PyTorch version: 2.8.0.dev20250420+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.12 (main, Apr 9 2025, 08:55:54) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.123+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] nvtx==0.2.11
[pip3] optree==0.15.0
[pip3] pynvjitlink-cu12==0.5.2
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250420+cu126
[pip3] torchaudio==2.6.0.dev20250420+cu126
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.22.0.dev20250420+cu126
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu
| true
|
3,007,575,819
|
[2/n][Optimus][Auto-AC] Support activation quantization with scaling
|
mengluy0125
|
open
|
[
"fb-exported",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Summary:
Previously, we only supported non-scaling quantization, which may lead to overflow; here we add support for scaling quantization and set it as the default version.
We quantize activation nodes based on `size_in_mb` (default 100), i.e., any node of at least 100 MB is quantized.
Test Plan:
### how to enable
```
torch._inductor.config.post_grad_fusion_options = {
"activation_quantization_aten_pass": {
"quant_type": "torch.float8_e5m2", -> default is this type to quantize, you can change the type
"use_scaling": False, -> default is False, if you want to use scaling verison, set it to True
"size_in_mb": 0.0, -> default is 100, you can tune the value.
"exclude_primals": False, -> whether want to exclude quantize parameters, default is False
"allowed_dtypes": "torch.float16;torch.bfloat16;torch.float32", -> dtype you consider to quant, use ";" to separate, default is torch.bfloat16
},
}
```
### toy model
```
buck2 run mode/opt //scripts/qyz/autoac:quantization
```
```
Epoch [80/200], Loss: 19227.2109
Epoch [100/200], Loss: 1353.5272
Epoch [120/200], Loss: 38630.6758
Epoch [140/200], Loss: 6239.9155
Epoch [160/200], Loss: 6039.1567
Epoch [180/200], Loss: 3994.3569
Epoch [200/200], Loss: 146.3966
```
Differential Revision: D73015996
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,487,842
|
Add adaptive_avg_pool2d input and output_size check
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes #126673
## Test Result
```python
import torch
import torch.nn as nn
batch_size = 10
channels = 3
length = 32
input_tensor = torch.randn([batch_size, channels, length])
adaptive_avg_pool = nn.AdaptiveAvgPool2d(output_size=16)
output_tensor = adaptive_avg_pool(input_tensor)
print(output_tensor.shape)
UserWarning: Input dimensions [10, 3, 32] different with output_size [16, 16] (Triggered internally at /home/zong/code/pytorch/aten/src/ATen/native/AdaptiveAveragePooling.cpp:39.)
return torch._C._nn.adaptive_avg_pool2d(input, _output_size)
torch.Size([10, 16, 16])
batch_size = 10
channels = 3
length = 32
input_tensor = torch.randn([batch_size, channels, length])
adaptive_avg_pool = nn.AdaptiveAvgPool2d(output_size=[3, 32]) # no warning
output_tensor = adaptive_avg_pool(input_tensor)
print(output_tensor.shape)
torch.Size([10, 3, 32])
```
| true
|
3,007,410,439
|
Run standalone compile tests on cpu/gpu
|
oulgen
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151603
* #151609
* __->__ #151768
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,362,018
|
[Don't merge] Upgrade oneDNN to v3.8 for XPU build
|
mengfei25
|
open
|
[
"module: mkldnn",
"open source",
"ciflow/binaries_wheel",
"ciflow/xpu"
] | 7
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
3,007,360,311
|
Support regexes in dynamic sources allowlist
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151885
* __->__ #151766
As requested by Shuai. I also included an additional refactor to capture changes in the whitelist over time, since previously, once it was set the first time, it was impossible to override it when a new config was set.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,007,358,085
|
Upgrade oneDNN to v3.8 for XPU build
|
mengfei25
|
closed
|
[
"module: mkldnn",
"open source"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
3,007,348,292
|
Refactor TritonTemplate.generate and move codgen part to generate_and_load
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151764
Splitting https://github.com/pytorch/pytorch/pull/149267/.
This first PR just refactors the code without adding any caching functionality.
The logic of generating the code and loading it is moved to generate_and_load(), plus some typing.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,007,338,023
|
Eagerly guard when dealing with float32 scalar tensor item calls
|
bobrenjc93
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151763
* #151766
Fixes #151470
SymFloat implicitly supports only float64, as we can see in code like this:
https://github.com/pytorch/pytorch/blob/main/torch/_subclasses/fake_tensor.py#L479.
This PR fixes the above issue by eagerly guarding when dealing with float32 scalar tensor item() calls.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,007,173,059
|
Support for grouped query attention in ONNX export
|
cyanic-selkie
|
open
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi, when using `enable_gqa` with `scaled_dot_product_attention`, the ONNX export fails - this is documented.
However, since GQA is very popular at the moment and the Attention ONNX op already supports it, I was wondering whether there is any plan to add support for it in the exporter, and if so, how soon. Thanks.
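For context, this is the kind of call I mean (a minimal sketch; shapes are illustrative - 8 query heads sharing 2 key/value heads):
```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 16, 64)   # (batch, num_q_heads, seq, head_dim)
k = torch.randn(1, 2, 16, 64)   # (batch, num_kv_heads, seq, head_dim)
v = torch.randn(1, 2, 16, 64)
out = F.scaled_dot_product_attention(q, k, v, enable_gqa=True)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```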
### Alternatives
_No response_
### Additional context
_No response_
| true
|
3,007,117,746
|
Inconsistent `sum`/`dot`/`norm` behavior
|
melnikovsky
|
open
|
[
"triaged",
"module: linear algebra"
] | 10
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Summation of huge `float32` arrays is admittedly a sensitive subject, but different routines use inconsistent (and seemingly undocumented?) approaches. Particularly, `torch.sum` is the most precise, while `linalg.norm` on 10 CPU cores is as slow but has inferior accuracy. Would it be possible to normalize these somehow? How do I get consistent results even in future `pytorch` versions?
Below is the output for three functions, the right answer is `1e9`
```
1 thread:
torch.linalg.norm(x)**2=tensor(6.5533e+08), timeit: [1.0846478752791882, 1.0839017871767282, 1.0842528380453587]
torch.dot(x,x)=tensor(9.7329e+08), timeit: [1.0753544569015503, 1.075887642800808, 1.075775207951665]
(x*x).sum()=tensor(1.0000e+09), timeit: [4.653062522411346, 4.647735759615898, 4.65124611929059]
10 threads:
torch.linalg.norm(x)**2=tensor(6.5533e+08), timeit: [1.0826967414468527, 1.0804776344448328, 1.078405149281025]
torch.dot(x,x)=tensor(9.9902e+08), timeit: [0.2012637760490179, 0.2010939735919237, 0.20179643481969833]
(x*x).sum()=tensor(1.0000e+09), timeit: [1.0688033681362867, 1.0729365721344948, 1.0708447061479092]
```
The code itself:
```python
import torch
import timeit
def play(x):
print(f'{torch.linalg.norm(x)**2=}, timeit:', timeit.repeat('torch.linalg.norm(x)**2', number=4, repeat=3, globals=globals() ))
print(f'{torch.dot(x,x)=}, timeit:', timeit.repeat('torch.dot(x,x)', number=4, repeat=3, globals=globals()))
print(f'{(x*x).sum()=}, timeit:', timeit.repeat('(x*x).sum()', number=4, repeat=3, globals=globals()))
x=torch.randn(1_000_000_000, dtype=torch.float32, device='cpu')
torch.set_num_threads(1)
print('\t 1 thread:')
play(x)
torch.set_num_threads(10)
print('\n\t 10 threads:')
play(x)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.10 (Ootpa) (x86_64)
GCC version: (GCC) 12.2.0
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.28
Python version: 3.13.3 | packaged by conda-forge | (main, Apr 14 2025, 20:44:03) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-553.27.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A40
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 5317 CPU @ 3.00GHz
Stepping: 6
CPU MHz: 3000.000
CPU max MHz: 3600.0000
CPU min MHz: 800.0000
BogoMIPS: 6000.00
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 18432K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[pip3] triton==3.2.0+git576374f8
[conda] cuda-cudart 12.8.90 h5888daf_1 conda-forge
[conda] cuda-cudart_linux-64 12.8.90 h3f2d84a_1 conda-forge
[conda] cuda-cupti 12.8.90 h5888daf_1 conda-forge
[conda] cuda-nvrtc 12.8.93 h5888daf_1 conda-forge
[conda] cuda-nvtx 12.8.90 h5888daf_1 conda-forge
[conda] cudnn 9.8.0.87 h81d5506_1 conda-forge
[conda] libblas 3.9.0 31_hfdb39a5_mkl conda-forge
[conda] libcblas 3.9.0 31_h372d94f_mkl conda-forge
[conda] libcublas 12.8.4.1 h9ab20c4_1 conda-forge
[conda] libcufft 11.3.3.83 h5888daf_1 conda-forge
[conda] libcurand 10.3.9.90 h9ab20c4_1 conda-forge
[conda] libcusolver 11.7.3.90 h9ab20c4_1 conda-forge
[conda] libcusparse 12.5.8.93 h5888daf_1 conda-forge
[conda] liblapack 3.9.0 31_hc41d3b0_mkl conda-forge
[conda] libmagma 2.9.0 h19665d7_1 conda-forge
[conda] libnvjitlink 12.8.93 h5888daf_1 conda-forge
[conda] libtorch 2.6.0 cuda126_mkl_h99b69db_304 conda-forge
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] nccl 2.26.2.1 ha44e49d_1 conda-forge
[conda] numpy 2.2.5 py313h17eae1a_0 conda-forge
[conda] optree 0.14.1 py313hdb19cb5_0
[conda] pytorch 2.6.0 cuda126_mkl_py313_he20fe19_304 conda-forge
[conda] pytorch-gpu 2.6.0 cuda126_mkl_ha999a5f_304 conda-forge
[conda] triton 3.2.0 cuda126py313h46f6bd1_1 conda-forge
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,007,061,212
|
[MPS] Implement upsample_nearest3d_vec operator
|
donghao1393
|
open
|
[
"triaged",
"open source",
"release notes: mps"
] | 3
|
NONE
|
# MPS Implementation of upsample_nearest3d_vec
This PR adds a Metal Performance Shaders (MPS) implementation of the `upsample_nearest3d_vec` operator for PyTorch on macOS. This implementation enables 3D nearest neighbor upsampling to run natively on Apple Silicon GPUs.
## Changes
- Added MPS implementation of `upsample_nearest3d_vec` in `aten/src/ATen/native/mps/operations/UpSample.mm`
- Added tests in `test/test_mps_upsample_nearest3d.py`
- Requires macOS 13.1 or newer due to Metal API requirements
## Implementation Details
The implementation uses a custom Metal compute shader to perform 3D nearest neighbor upsampling. The shader calculates the source coordinates for each output voxel and samples the nearest input voxel.
Key features:
- Supports both `scale_factor` and `size` parameters
- Handles non-contiguous tensors
- Supports empty tensors
- Supports both float32 and float16 data types
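A usage sketch (my own example, assuming an MPS device on macOS 13.1+; `F.interpolate` on a 5D input dispatches to the nearest-neighbor 3D upsample path):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 4, 4, 4, device="mps", dtype=torch.float32)  # (N, C, D, H, W)
y = F.interpolate(x, scale_factor=2, mode="nearest")               # uses upsample_nearest3d
print(y.shape)  # torch.Size([1, 3, 8, 8, 8])
```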
## Limitations
- Backward pass is not yet implemented
- Only supports upsampling (scale factors >= 1.0)
- Integer data types are not supported (Metal limitation)
## Testing
The implementation has been tested with various input shapes, scale factors, and data types. All tests pass on macOS 13.1 and newer.
## Performance
The MPS implementation provides significant performance improvements over the CPU implementation, especially for larger tensors.
## Future Work
- Implement backward pass
- Support downsampling (scale factors < 1.0)
- Optimize performance further
## Related Issues
This PR addresses the need for native MPS implementation of 3D upsampling operations, which was previously falling back to CPU.
This PR relies on https://github.com/pytorch/pytorch/pull/149378 based on https://github.com/pytorch/pytorch/releases/tag/v2.7.0-rc10
| true
|
3,007,034,055
|
"_get_pg_default_device" deprecated warning in "Getting Started with Distributed Checkpoint (DCP)"
|
michael080808
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
NONE
|
### 📚 The doc issue
I tried both the "[Saving](https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html#saving)" and "[Loading](https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html#loading)" code from "[Getting Started with Distributed Checkpoint (DCP)](https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html)" on torch2.6+cu126.
Both `save` and `load` in `torch.distributed.checkpoint` seem to use "_get_pg_default_device" and give me the warning:
```
/opt/anaconda3/envs/Holo/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py:863: UserWarning: `_get_pg_default_device` will be deprecated, it only stays for backward-compatiblity reason. If you need to find a device for object collectives, please use `_get_object_coll_device`. If you need to query the device types supported by group, please use `_device_capability(group)`.
```
I have noticed that there is [#136790 pull request](https://github.com/pytorch/pytorch/pull/136790) about this warning. I'm not sure whether this is a doc issue or not.
### Suggest a potential alternative/fix
Maybe `torch.distributed.checkpoint` needs a further cleanup about `_get_pg_default_device`.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,006,946,347
|
torch.testing._internal.optests - MPS Support
|
goldfishsound
|
open
|
[
"open source",
"topic: not user facing"
] | 3
|
NONE
|
# autograd_registration_check
## Adding support for MPS device
1. Why this PR
The test generated by optests.generate_opcheck_tests() for the "test_autograd_registration" test case will fail for tensors on the mps device.
2. Reason for failure
The current implementation of autograd_registration_check() in torch/testing/_internal/optests/autograd_registration.py is missing a dispatch key for the mps device.
3. Solution
Add the "AutogradMPS" dispatch key for the mps device.
| true
|
3,006,793,070
|
[logging] Put "everything" WaitCounters in dynamo_timed
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151757
* #151749
Summary: The main motivation is to capture the cudagraphs overhead in a WaitCounter. We'll combine that with Triton autotuning, and therefore rename to "compile_runtime_overheads". Since we have a couple WaitCounters where we want to capture all runtime and compile overheads, let's put the accounting in dynamo_timed so we'll automatically capture any toplevel timed regions that get added in the future. Also, dynamo_timed already has to figure out if we're timing a runtime vs. compile-time event, so we can reuse some of that logic.
Test Plan:
Ran an internal model with `TORCHINDUCTOR_BENCHMARK_FUSION=1` (to get benchmarking at compile time in addition to runtime).
Overall compile time from various sources matches up:
* tlparse: https://fburl.com/9fgsstkr. Eyeballing, total time should be 32 ranks x 2175 = ~69.6k s
* ods: https://fburl.com/canvas/r4clhnb7. Right on.
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/ax71aqox. Right on.
* pt2_compile_events: https://fburl.com/scuba/pt2_compile_events/shcjd9ql. Right on.
And the runtime overhead:
* ods: https://fburl.com/canvas/nvgjb282
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/f2dtv0qh
If we compare that to a run of the same model without the changes in this stack, results can mismatch by a lot:
* tlparse: https://fburl.com/cchxwd1s. Eyeballing, total time should be 32 ranks x 2300s = ~73.5k s
* ods: https://fburl.com/canvas/x1i3wvf4. It's kinda close
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/l7sgxdxd. Waaay too high.
* pt2_compile_events: https://fburl.com/scuba/pt2_compile_events/jb4s9z1u. This is the only one that's actually correct.
The discrepancy is even worse if we focus on the runtime events:
* ods: https://fburl.com/canvas/a4o9f7ou
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/95izaes1
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,006,756,813
|
[dynamo] Call __torch_function__ on only overridable tensor methods or attrs
|
anijain2305
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151620
* #150704
* #151410
* #151409
* __->__ #151756
* #151633
* #151477
* #151357
* #151256
* #151330
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,006,747,720
|
[ez] fix typo in comment
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151755
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,006,660,272
|
[MPS] Add support for hermite_polynomial_he (inductor/eager).
|
dcci
|
closed
|
[
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,006,638,675
|
reroute index to fast implementation for indexing on 0th dimension
|
ngimel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cuda",
"ci-no-td"
] | 6
|
COLLABORATOR
|
Per title, improve x[index] cuda perf for the common case of indexing along the first dim, using vectorized gather kernel
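The pattern this targets, as I understand it (an illustrative sketch, not a benchmark from this PR):
```python
import torch

x = torch.randn(100_000, 256, device="cuda")
index = torch.randint(0, x.size(0), (4096,), device="cuda")
rows = x[index]   # advanced indexing along the 0th dimension -> vectorized gather path
```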
| true
|
3,006,629,397
|
Refactor duplicate code into a utility function in pytorch/torch/nn/functional.py
|
aaiss0927
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
NONE
|
Description:
This PR refactors duplicate code for validating dropout probability values into a utility function `probability_checking()` in pytorch/torch/nn/functional.py.
Changes:
- Created a new utility function `probability_checking(p)` that validates if the dropout probability parameter is within valid range (0.0 to 1.0)
- Replaced identical validation code in six dropout-related functions with calls to this utility function
The changes improve code maintainability by eliminating duplicate logic while preserving the exact same validation behavior.
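A hedged sketch of what such a helper could look like (the function name comes from the description above; the exact signature and error message in the PR may differ):
```python
def probability_checking(p: float) -> None:
    # Shared validation for dropout-style probability arguments.
    if p < 0.0 or p > 1.0:
        raise ValueError(f"dropout probability has to be between 0 and 1, but got {p}")
```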
| true
|
3,006,592,786
|
Update __init__.py
|
Mazgagzam
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
NONE
|
Refactor `factory_kwargs` to simplify key validation and merging
- Replaced the manual key checks and dictionary updates with a more efficient and readable approach.
- Simplified the handling of unexpected kwargs using set operations (see the sketch below).
- Ensured no conflicts between `kwargs` and `factory_kwargs` using intersection checks.
- Improved readability and maintainability of the code.
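An illustrative sketch of the set-based validation and merging described above (my own approximation, not the exact code in this PR; the allowed key set is an assumption):
```python
def merge_factory_kwargs(kwargs, factory_kwargs=None):
    allowed = {"device", "dtype", "memory_format"}          # assumed factory keys
    kwargs = dict(kwargs or {})
    factory_kwargs = dict(factory_kwargs or {})
    unexpected = set(kwargs) - allowed
    if unexpected:
        raise TypeError(f"unexpected kwargs: {sorted(unexpected)}")
    conflicts = set(kwargs) & set(factory_kwargs)
    if conflicts:
        raise TypeError(f"kwargs conflict with factory_kwargs on: {sorted(conflicts)}")
    factory_kwargs.update(kwargs)
    return factory_kwargs
```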
Fixes #ISSUE_NUMBER
| true
|
3,006,588,469
|
add min/max_seqlen to non_differentiable
|
sumantro93
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: nested tensor"
] | 9
|
CONTRIBUTOR
|
Fixes #148988
| true
|
3,006,536,462
|
[logging] Fix duration logging for dynamo_compile
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151757
* __->__ #151749
Summary: There are a few issues I'm solving:
1. It's too hard to measure total pt2 overhead using the dynamo_compile table because users need to know the columns representing all the top-level events (dynamo_cumulative_compile_time_us, etc.). Instead, let's populate the existing duration_us field for all top-level events. The complication is that runtime events in particular (Triton autotuning, cudagraphify) can be collapsed into a single row, with gaps in between, so we can't simply use `end_time - start_time` in all cases. Instead, we'll sum durations for all outer events when updating the compile-time or runtime metrics context. Introduce a 'depth' counter in TLS to track the nesting of CompilationMetrics events.
2. The existing implementation relies on callers of dynamo_timed to specify whether the event is a runtime or compile-time event. That doesn't work because some methods can be called in both situations, e.g., `CachingAutotuner.benchmark_all_configs`; for example, `TORCHINDUCTOR_BENCHMARK_FUSION=1` enables benchmarking at compile time. Instead, we can figure out automatically whether we're measuring a compile-time or runtime event and log accordingly.
3. If `log_compilation_events` were to throw an exception, we'd fail to clear the aggregated counters for runtime logs and they could be attributed to the wrong compile ID. I didn't actually find evidence of this in practice, but I added exception handling for extra safety.
Test Plan:
Ran internal models and compared dynamo_compile to pt2_compile_events:
`TORCHINDUCTOR_BENCHMARK_FUSION=0`
* tlparse: https://fburl.com/itciwnxc
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/yvkif5vb
* pt2_compile_events: https://fburl.com/scuba/pt2_compile_events/segijet7
`TORCHINDUCTOR_BENCHMARK_FUSION=1`
* tlparse: https://fburl.com/jgurcvkw
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/uum91ceb
* pt2_compile_events: https://fburl.com/scuba/pt2_compile_events/x4xnisez
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,006,462,091
|
[Benchmarking] Add sam and stable_diffusion to MPS benchmarked models
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151748
| true
|
3,006,461,263
|
[Benchmarking] Run MPS benchmarks for [b]float16
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151748
* __->__ #151747
And implicitly pass `--float32` when collecting results for "notset" option. Speedups for some models are much higher for float16 dtype, but it's important to track accuracy
| true
|
3,006,435,215
|
[AotInductor][Export][Triton] how to export custom triton kernels when use torch.export.export
|
zzq96
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor",
"module: user triton"
] | 2
|
NONE
|
### 🐛 Describe the bug
Our framework is based on torch and includes some custom triton kernels.
In the inference phase, we try to use a different GPU type (for example, training on H100 and inference on L40), so we need to load the exported model and call aoti_compile_and_package to generate an AOT model for the inference GPU. However, the call to torch.load fails with the message below:
```
torch._export.serde.serialize.SerializeError: Unsupported target type for node Node(target='torch.ops.triton_kernel.add.default', inputs=[NamedArgument(name='x', arg=Argument(as_tensor=TensorArgument(name='linear')), kind=1), NamedArgument(name='y', arg=Argument(as_tensor=TensorArgument(name='mul')), kind=1)], outputs=[Argument(as_tensor=TensorArgument(name='add'))], metadata={'stack_trace': ' File "/usr/local/app/torch_learn/export/model_export.py", line 72, in forward\n output = triton_add(dense_output, bias)\n File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/_library/custom_ops.py", line 671, in __call__\n return self._opoverload(*args, **kwargs)\n', 'nn_module_stack': 'L__self__,,__main__.SimpleModel', 'source_fn_stack': 'add_default,torch.ops.triton_kernel.add.default',
'torch_fn': 'add.default_1;OpOverload.add.default'}, is_hop_single_tensor_return=None): <class 'str'>
```
In my understanding, torch needs the source code of the triton kernels when loading the exported model.
But our framework is big, and in some cases users may define their own custom triton kernels.
It is difficult for us to obtain the users' source code and to install this big framework on the inference GPU machine.
Any suggestions?
the simple model code is:
```python
import torch
import torch.nn as nn
import torch
import triton
import triton.language as tl
@triton.jit
def add_kernel(
x_ptr, y_ptr, output_ptr,
n_elements,
BLOCK_SIZE: tl.constexpr,
):
pid = tl.program_id(axis=0)
block_start = pid * BLOCK_SIZE
offsets = block_start + tl.arange(0, BLOCK_SIZE)
mask = offsets < n_elements
x = tl.load(x_ptr + offsets, mask=mask)
y = tl.load(y_ptr + offsets, mask=mask)
output = x + y
tl.store(output_ptr + offsets, output, mask=mask)
@torch.library.triton_op("triton_kernel::add", mutates_args={})
def triton_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
n_elements = x.numel()
output = torch.empty_like(x)
BLOCK_SIZE = 1024
grid = (triton.cdiv(n_elements, BLOCK_SIZE),)
torch.library.wrap_triton(add_kernel)[grid](
x, y, output,
n_elements,
BLOCK_SIZE,
)
return output
class SimpleModel(nn.Module):
def __init__(self, input_dim, hidden_dim):
super(SimpleModel, self).__init__()
self.dense = nn.Linear(input_dim, hidden_dim)
def forward(self, x):
dense_output = self.dense(x)
bias = torch.ones_like(dense_output) * 0.5
output = triton_add(dense_output, bias)
return output
def main():
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_dim = 10
hidden_dim = 20
batch_size = 16
model = SimpleModel(input_dim, hidden_dim).to(device)
x = torch.randn(batch_size, input_dim, device=device)
with torch.no_grad():
output = model(x)
exported_model = torch.export.export(
model,
(x,),
)
torch.export.save(exported_model, "exported_model.pt")
if __name__ == "__main__":
main()
```
run this code, a exported_model is in `./exported_model.pt`
then run aot export code:
```python
import torch
torch.set_default_device("cuda")
saved_exported_program = torch.export.load(f"exported_model.pt")
torch._inductor.aoti_compile_and_package(
saved_exported_program,
package_path=f"aot_model.pt2",
)
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
GCC version: (GCC) 10.3.1 20210422 (Red Hat 10.3.1-1)
Clang version: 9.0.1 (Red Hat 9.0.1-2.module_el8.2.0+309+0c7b6b03)
CMake version: version 3.19.0
Libc version: glibc-2.28
Python version: 3.9.16 (main, Dec 11 2024, 20:47:20) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] (64-bit runtime)
Python platform: Linux-5.4.119-1-tlinux4-0010.3-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10
GPU 1: NVIDIA A10
GPU 2: NVIDIA A10
GPU 3: NVIDIA A10
Nvidia driver version: 470.141.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.7
/usr/lib/libcudnn_adv_infer.so.8.9.7
/usr/lib/libcudnn_adv_train.so.8.9.7
/usr/lib/libcudnn_cnn_infer.so.8.9.7
/usr/lib/libcudnn_cnn_train.so.8.9.7
/usr/lib/libcudnn_ops_infer.so.8.9.7
/usr/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7K83 64-Core Processor
Stepping: 0
CPU MHz: 2545.218
BogoMIPS: 5090.43
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-111
NUMA node1 CPU(s): 112-223
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 erms rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0
[pip3] tf2onnx==1.9.3
[pip3] torch==2.7.0+cu118
[pip3] triton==3.0.0
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1 @oulgen @aakhundov @davidberard98
| true
|
3,006,395,711
|
[inductor] [cuda] [silent incorrectness] `F.softmax-torch.argsort` output silent incorrectness when tensor input is very large
|
shaoyuyoung
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"topic: fuzzer"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `F.softmax` followed by `torch.argsort` produces silently incorrect output when the tensor input is very large
**device backend**: only triton
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = F.softmax(x, dim=1)
return torch.argsort(x, dim=1)
model = Model()
x = torch.randn([1, 30000]) # tensor input should large enough
inputs = [x]
def run_test(model, inputs, device, backend):
torch.manual_seed(0)
model = model.to(device)
inputs = [x.to(device) for x in inputs]
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
device = 'cuda'
output = run_test(model, inputs, device, 'eager')
c_output = run_test(model, inputs, device, 'inductor')
print(torch.allclose(output, c_output, rtol=1e-3, atol=1e-3))
print(torch.max(torch.abs(c_output - output)))
fp64 = run_test(model.to(dtype=torch.float64), [x.to(dtype=torch.float64) for x in inputs], device, 'eager')
print(torch._dynamo.utils.same(output, c_output, fp64))
```
### Error logs
triton
```
False
tensor(18221, device='cuda:0')
E0419 20:06:50.456000 1653871 site-packages/torch/_dynamo/utils.py:2955] Accuracy failed: allclose not within tol=0.0001
False
```
CPP
```
True
tensor(0)
True
```
### Versions
nightly 20250418
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,006,385,994
|
[inductor] [silent incorrectness] [dtype processing] `torch.clamp` can't implicitly covert `int64`
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: inductor",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: It's a very interesting edge case. When the range of `torch.clamp` is set to **(-0.5, 0.5)** and the input is `int64`, the input is **implicitly converted** to `f32` in eager, but Inductor loses this behavior and still outputs `int64`, resulting in silent incorrectness.
**device backend**: both CPP and triton
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = torch.clamp(x, min=-0.5, max=0.5)
return x
model = Model()
x = torch.tensor(1)
print('input:')
print(x)
print(x.dtype)
inputs = [x]
def run_test(model, inputs, device, backend):
torch.manual_seed(0)
model = model.to(device)
inputs = [x.to(device) for x in inputs]
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
device = 'cpu'
output = run_test(model, inputs, device, 'eager')
c_output = run_test(model, inputs, device, 'aot_eager_decomp_partition')
print("eager output:")
print(output)
print(output.dtype)
print("inductor output:")
print(c_output)
print(c_output.dtype)
```
### Error logs
```
input:
tensor(1)
torch.int64
eager output:
tensor(0.5000)
torch.float32
inductor output:
tensor(0)
torch.int64
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov @bdhirsh
| true
|
3,006,331,537
|
[Inductor] Dynamo hangs when processing an operator, seemingly depending on a logical argument value
|
alexsamardzic
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Here is a reproducer:
```Python
import torch
device = "cuda"
group_size = 4
M, N, K = 16, 32, 64
dtype_AB = torch.float8_e4m3fn
dtype_scale = torch.float32
dtype_offset = torch.int32
dtype_C = torch.bfloat16
A = torch.ones(M, K * group_size, device=device).to(dtype_AB)
B = torch.ones(N, K * group_size, device=device).to(dtype_AB)
A_scale = torch.ones(group_size * M, device=device, dtype=dtype_scale)
B_scale = torch.ones(group_size * N, device=device, dtype=dtype_scale)
offs = torch.arange(K, group_size * K + 1, K, device=device, dtype=dtype_offset)
f_ref = torch._scaled_grouped_mm
f = torch.compile(
f_ref,
)
torch.compiler.allow_in_graph(f_ref)
for use_fast_accum in [False, True]:
print("use_fast_accum =", use_fast_accum)
C_ref = f_ref(
A,
B.transpose(-2, -1),
A_scale,
B_scale,
offs,
out_dtype=dtype_C,
use_fast_accum=use_fast_accum,
)
C = f(
A,
B.transpose(-2, -1),
A_scale,
B_scale,
offs,
out_dtype=dtype_C,
use_fast_accum=use_fast_accum,
)
assert torch.allclose(C, C_ref, atol=1e-3, rtol=1e-3)
```
The first iteration of the loop, with the `use_fast_accum` argument of the `_scaled_grouped_mm` operator set to `False`, goes fine, but in the second iteration, with the argument set to `True`, the compilation hangs. If a breakpoint is set [here](https://github.com/pytorch/pytorch/blob/92d0c40c4921abf01a01a173453815b975781d85/torch/_dynamo/output_graph.py#L1617) and one then tries to step over and return from this function, the hang appears to happen at this point.
(Note: the `_scaled_grouped_mm` operator works on Hopper only.)
Background: Initial support for auto-tuning of this operator was added through #150421, and I encountered the issue while working on extending it through #150944. However, the problem is not related to auto-tuning: it can be reproduced at c3bc6b3, which predates #150421.
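One way to narrow down where the hang sits (a debugging sketch, not part of the original reproducer; it reuses the tensors and the compiled `f` defined above) is to add an explicit synchronization right after the compiled call, before `torch.allclose` triggers the implicit `cudaStreamSynchronize()` visible in the backtrace below:
```python
# Hypothetical narrowing step, appended to the reproducer above.
C = f(
    A,
    B.transpose(-2, -1),
    A_scale,
    B_scale,
    offs,
    out_dtype=dtype_C,
    use_fast_accum=True,
)
# If f() itself never returns, the hang is in compilation/tracing;
# if f() returns but this synchronize hangs, the launched kernel is stuck;
# if both return, the hang is in allclose's device-to-host copy.
torch.cuda.synchronize()
print("compiled call returned and synchronized")
```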
### Error logs
Here is a gdb backtrace, captured after the reproducer had been hanging for some time. Apparently, it hangs in `cudaStreamSynchronize()`.
<details>
<summary>Gdb backtrace</summary>
```
#0 0x00007f95417203bf in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#1 0x00007f95413d368c in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#2 0x00007f954149699a in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#3 0x00007f95416f0029 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#4 0x00007f954153d89d in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#5 0x00007f95b12143a5 in ?? () from /scratch/pytorch-dev/lib/libcudart.so.12
#6 0x00007f95b12757d8 in cudaStreamSynchronize () from /scratch/pytorch-dev/lib/libcudart.so.12
#7 0x00007f959a673f3c in at::native::_local_scalar_dense_cuda(at::Tensor const&)::{lambda()#1}::operator()() const [clone .isra.0] ()
from /scratch/pytorch/torch/lib/libtorch_cuda.so
#8 0x00007f959a675995 in at::native::_local_scalar_dense_cuda(at::Tensor const&) () from /scratch/pytorch/torch/lib/libtorch_cuda.so
#9 0x00007f959c298788 in at::(anonymous namespace)::(anonymous namespace)::wrapper_CUDA___local_scalar_dense(at::Tensor const&) ()
from /scratch/pytorch/torch/lib/libtorch_cuda.so
#10 0x00007f959c298810 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<c10::Scalar (at::Tensor const&), &at::(anonymous namespace)::(anonymous namespace)::wrapper_CUDA___local_scalar_dense>, c10::Scalar, c10::guts::typelist::typelist<at::Tensor const&> >, c10::Scalar (at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) () from /scratch/pytorch/torch/lib/libtorch_cuda.so
#11 0x00007f95a5b5d93a in at::_ops::_local_scalar_dense::call(at::Tensor const&) () from /scratch/pytorch/torch/lib/libtorch_cpu.so
#12 0x00007f95a512eff3 in at::native::item(at::Tensor const&) () from /scratch/pytorch/torch/lib/libtorch_cpu.so
#13 0x00007f95a624cb31 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<c10::Scalar (at::Tensor const&), &at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__item>, c10::Scalar, c10::guts::typelist::typelist<at::Tensor const&> >, c10::Scalar (at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) () from /scratch/pytorch/torch/lib/libtorch_cpu.so
#14 0x00007f95a599133a in at::_ops::item::call(at::Tensor const&) () from /scratch/pytorch/torch/lib/libtorch_cpu.so
#15 0x00007f95a6808057 in unsigned char at::Tensor::item<unsigned char>() const () from /scratch/pytorch/torch/lib/libtorch_cpu.so
#16 0x00007f95a51d2899 in at::native::allclose(at::Tensor const&, at::Tensor const&, double, double, bool) () from /scratch/pytorch/torch/lib/libtorch_cpu.so
#17 0x00007f95a79742df in torch::autograd::VariableType::(anonymous namespace)::allclose(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, double, double, bool) ()
from /scratch/pytorch/torch/lib/libtorch_cpu.so
#18 0x00007f95a577cceb in at::_ops::allclose::call(at::Tensor const&, at::Tensor const&, double, double, bool) () from /scratch/pytorch/torch/lib/libtorch_cpu.so
#19 0x00007f95b0634f2d in torch::autograd::THPVariable_allclose(_object*, _object*, _object*) () from /scratch/pytorch/torch/lib/libtorch_python.so
#20 0x000055bf35f0e4b6 in cfunction_call (func=<built-in method allclose of type object at remote 0x7f95b1187fe0>, args=<optimized out>, kwargs=<optimized out>)
at /usr/local/src/conda/python-3.9.22/Objects/methodobject.c:543
#21 0x000055bf35ef6d4c in _PyObject_MakeTpCall (tstate=0x55bf36327ca0, callable=callable@entry=<built-in method allclose of type object at remote 0x7f95b1187fe0>,
args=<optimized out>, nargs=<optimized out>, keywords=keywords@entry=('atol', 'rtol')) at /usr/local/src/conda/python-3.9.22/Objects/call.c:191
#22 0x000055bf35ef3488 in _PyObject_VectorcallTstate (kwnames=('atol', 'rtol'), nargsf=<optimized out>, args=<optimized out>,
callable=<built-in method allclose of type object at remote 0x7f95b1187fe0>, tstate=<optimized out>) at /usr/local/src/conda/python-3.9.22/Include/cpython/abstract.h:116
#23 _PyObject_VectorcallTstate (kwnames=('atol', 'rtol'), nargsf=<optimized out>, args=<optimized out>,
callable=<built-in method allclose of type object at remote 0x7f95b1187fe0>, tstate=<optimized out>) at /usr/local/src/conda/python-3.9.22/Include/cpython/abstract.h:103
#24 PyObject_Vectorcall (kwnames=('atol', 'rtol'), nargsf=<optimized out>, args=<optimized out>, callable=<built-in method allclose of type object at remote 0x7f95b1187fe0>)
at /usr/local/src/conda/python-3.9.22/Include/cpython/abstract.h:127
#25 call_function (kwnames=('atol', 'rtol'), oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=<optimized out>) at /usr/local/src/conda/python-3.9.22/Python/ceval.c:5077
#26 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=Frame 0x55bf36384a90, for file /scratch/pytorch/repro.py, line 34, in <module> (), throwflag=<optimized out>)
at /usr/local/src/conda/python-3.9.22/Python/ceval.c:3537
#27 0x000055bf35eed685 in _PyEval_EvalFrame (throwflag=0, f=Frame 0x55bf36384a90, for file /scratch/pytorch/repro.py, line 34, in <module> (), tstate=0x55bf36327ca0)
at /usr/local/src/conda/python-3.9.22/Include/internal/pycore_ceval.h:40
#28 _PyEval_EvalCode (tstate=0x55bf36327ca0, _co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=argcount@entry=0, kwnames=0x0,
kwargs=0x0, kwcount=<optimized out>, kwstep=2, defs=0x0, defcount=<optimized out>, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0)
at /usr/local/src/conda/python-3.9.22/Python/ceval.c:4329
#29 0x000055bf35eed338 in _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=argcount@entry=0,
kwnames=<optimized out>, kwargs=0x0, kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0)
at /usr/local/src/conda/python-3.9.22/Python/ceval.c:4361
#30 0x000055bf35eed2e9 in PyEval_EvalCodeEx (_co=_co@entry=<code at remote 0x7f95b84a45b0>,
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated),
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated), args=args@entry=0x0,
argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at /usr/local/src/conda/python-3.9.22/Python/ceval.c:4377
#31 0x000055bf35f97ddb in PyEval_EvalCode (co=co@entry=<code at remote 0x7f95b84a45b0>,
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated),
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated))
at /usr/local/src/conda/python-3.9.22/Python/ceval.c:828
#32 0x000055bf35fc4eaa in run_eval_code_obj (tstate=tstate@entry=0x55bf36327ca0, co=co@entry=0x7f95b84a45b0,
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated),
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated))
at /usr/local/src/conda/python-3.9.22/Python/pythonrun.c:1221
#33 0x000055bf35fc1353 in run_mod (mod=mod@entry=0x55bf363fe360, filename=filename@entry='/scratch/pytorch/repro.py',
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated),
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated),
flags=flags@entry=0x7ffce610ce08, arena=arena@entry=0x7f95b855b950) at /usr/local/src/conda/python-3.9.22/Python/pythonrun.c:1242
#34 0x000055bf35e5c347 in pyrun_file (fp=fp@entry=0x55bf363602f0, filename=filename@entry='/scratch/pytorch/repro.py', start=start@entry=257,
globals=globals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated),
locals=locals@entry={'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <SourceFileLoader(name='__main__', path='/scratch/pytorch/repro.py') at remote 0x7f95b857dc10>, '__spec__': None, '__annotations__': {}, '__builtins__': <module at remote 0x7f95b8568ae0>, '__file__': '/scratch/pytorch/repro.py', '__cached__': None, 'torch': <module at remote 0x7f95b835a220>, 'device': 'cuda', 'group_size': 4, 'M': 16, 'N': 32, 'K': 64, 'dtype_AB': <torch.dtype at remote 0x7f94a02574b0>, 'dtype_scale': <torch.dtype at remote 0x7f94a02cbdb0>, 'dtype_offset': <torch.dtype at remote 0x7f94a02cbc90>, 'dtype_C': <torch.dtype at remote 0x7f94a0257150>, 'A': <Tensor() at remote 0x7f949dd4c9f0>, 'B': <Tensor at remote 0x7f95b835a860>, 'A_scale': <Tensor() at remote 0x7f95b82fa040>, 'B_scale': <Tensor() at remote 0x7f95b835a810>, 'offs': <Tensor() at remote 0x7f95b835a8b0>, 'f_ref': <built-in method _scaled_grouped_mm of type object at remote 0x7f95b1187fe0>, 'f': <function at remote 0x7f9533bea820>, 'use_f...(truncated),
closeit=closeit@entry=1, flags=0x7ffce610ce08) at /usr/local/src/conda/python-3.9.22/Python/pythonrun.c:1140
#35 0x000055bf35fbb270 in pyrun_simple_file (flags=0x7ffce610ce08, closeit=1, filename='/scratch/pytorch/repro.py', fp=0x55bf363602f0)
at /usr/local/src/conda/python-3.9.22/Python/pythonrun.c:450
#36 PyRun_SimpleFileExFlags (fp=0x55bf363602f0, filename=<optimized out>, closeit=1, flags=0x7ffce610ce08) at /usr/local/src/conda/python-3.9.22/Python/pythonrun.c:483
#37 0x000055bf35fb88a4 in pymain_run_file (cf=0x7ffce610ce08, config=0x55bf363266e0) at /usr/local/src/conda/python-3.9.22/Modules/main.c:377
#38 pymain_run_python (exitcode=0x7ffce610ce00) at /usr/local/src/conda/python-3.9.22/Modules/main.c:606
#39 Py_RunMain () at /usr/local/src/conda/python-3.9.22/Modules/main.c:685
#40 0x000055bf35f8bc57 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at /usr/local/src/conda/python-3.9.22/Modules/main.c:1105
#41 0x00007f95b865cd90 in ?? () from /usr/lib/x86_64-linux-gnu/libc.so.6
#42 0x00007f95b865ce40 in __libc_start_main () from /usr/lib/x86_64-linux-gnu/libc.so.6
#43 0x000055bf35f8bb6e in _start ()
```
</details>
### Versions
<details>
<summary>The <tt>collect_env.py</tt> output</summary>
```
Collecting environment information...
PyTorch version: 2.8.0a0+git92d0c40
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (conda-forge gcc 13.3.0-2) 13.3.0
Clang version: 20.1.3 (https://github.com/conda-forge/clangdev-feedstock 3e9dfa811865fe27bcd95c0004d27603f2ec4a73)
CMake version: version 4.0.1
Libc version: glibc-2.35
Python version: 3.9.22 | packaged by conda-forge | (main, Apr 14 2025, 23:35:59) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6448Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0a0+git92d0c40
[conda] cuda-cudart 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-dev 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-dev_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cudart-static 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-static_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cudart_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cupti 12.6.80 hbd13f7d_0 conda-forge
[conda] cuda-cupti-dev 12.6.80 h5888daf_0 conda-forge
[conda] cuda-libraries-dev 12.6.3 ha770c72_0 conda-forge
[conda] cuda-nvrtc 12.6.85 hbd13f7d_0 conda-forge
[conda] cuda-nvrtc-dev 12.6.85 h5888daf_0 conda-forge
[conda] cuda-nvtx 12.6.77 hbd13f7d_0 conda-forge
[conda] cuda-nvtx-dev 12.6.77 ha770c72_0 conda-forge
[conda] cuda-opencl 12.6.77 hbd13f7d_0 conda-forge
[conda] cuda-opencl-dev 12.6.77 h5888daf_0 conda-forge
[conda] cudnn 9.8.0.87 h81d5506_1 conda-forge
[conda] libcublas 12.6.4.1 h5888daf_1 conda-forge
[conda] libcublas-dev 12.6.4.1 h5888daf_1 conda-forge
[conda] libcufft 11.3.0.4 hbd13f7d_0 conda-forge
[conda] libcufft-dev 11.3.0.4 h5888daf_0 conda-forge
[conda] libcurand 10.3.7.77 hbd13f7d_0 conda-forge
[conda] libcurand-dev 10.3.7.77 h5888daf_0 conda-forge
[conda] libcusolver 11.7.1.2 h5888daf_1 conda-forge
[conda] libcusolver-dev 11.7.1.2 h5888daf_1 conda-forge
[conda] libcusparse 12.5.4.2 hbd13f7d_0 conda-forge
[conda] libcusparse-dev 12.5.4.2 h5888daf_0 conda-forge
[conda] libmagma 2.9.0 h19665d7_1 conda-forge
[conda] libmagma_sparse 2.9.0 h19665d7_0 conda-forge
[conda] libnvjitlink 12.6.85 hbd13f7d_0 conda-forge
[conda] libnvjitlink-dev 12.6.85 h5888daf_0 conda-forge
[conda] magma 2.9.0 h3d470c8_0 conda-forge
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] mkl-include 2025.1.0 hf2ce2f3_808 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0a0+git92d0c40 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,006,319,425
|
Implement avg_pool3d for MPS backend
|
donghao1393
|
open
|
[
"triaged",
"open source",
"release notes: mps"
] | 7
|
NONE
|
This PR implements the avg_pool3d operation for the MPS backend using a custom Metal shader. This will allow users with Apple Silicon GPUs to use 3D average pooling operations without falling back to CPU.
## Implementation Details
The implementation includes:
1. A custom Metal shader for 3D average pooling
2. C++ interface to integrate with PyTorch
3. Support for both forward and backward passes
4. Comprehensive test cases
5. macOS version compatibility check (requires macOS 13.2+)
6. Special case handling for Long data type with divisor_override
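For reference, a minimal local smoke test one might run against this change (a sketch, not part of the PR's test suite; it assumes an MPS device on macOS 13.2+) compares the MPS forward and backward results against the CPU reference:
```python
import torch

x_cpu = torch.randn(2, 3, 8, 8, 8, requires_grad=True)
x_mps = x_cpu.detach().to("mps").requires_grad_()

pool = torch.nn.AvgPool3d(kernel_size=2, stride=2)
out_cpu = pool(x_cpu)
out_mps = pool(x_mps)
torch.testing.assert_close(out_mps.cpu(), out_cpu, rtol=1e-5, atol=1e-5)

# Backward pass should match as well
out_cpu.sum().backward()
out_mps.sum().backward()
torch.testing.assert_close(x_mps.grad.cpu(), x_cpu.grad, rtol=1e-5, atol=1e-5)
```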
## Development Challenges and Solutions
During the development process, we encountered several challenges:
1. **Metal Shader Compilation**: Initially, we faced issues with Metal shader compilation due to missing Xcode tools. We resolved this by ensuring proper Xcode installation and configuration.
2. **Command Buffer Conflicts**: When processing non-contiguous tensors, we encountered Metal command buffer errors with the message: 'A command encoder is already encoding to this command buffer'. This occurred because:
- Metal requires that only one command encoder can be active for a command buffer at a time
- When processing non-contiguous tensors, the tensor conversion operations were creating their own command encoders without properly ending previous ones
- The error would occur when our code tried to create a new command encoder while another was still active
We solved this by:
- Adding explicit MPS stream synchronization before and after our operations using mpsStream->synchronize()
- Creating a separate code path for non-contiguous tensors that first converts them to contiguous format
- Ensuring proper command encoder lifecycle management by ending encoding before synchronizing the stream
3. **Version Compatibility**: We added explicit checks to ensure the implementation only runs on macOS 13.2 or newer, as earlier versions may not support all the required Metal features:
```cpp
TORCH_CHECK(is_macos_13_or_newer(MacOSVersion::MACOS_VER_13_2_PLUS),
"avg_pool3d is only supported on MPS for MacOS_13_2 or newer");
```
4. **Special Case Handling**: For certain edge cases (Long data type with divisor_override), we implemented a CPU fallback as MPS doesn't support these combinations efficiently.
5. **Improved Error Handling**: We added comprehensive dimension and data type checks to provide better error messages and ensure correct usage:
- Checking input and output tensor dimensions
- Verifying data type compatibility
- Validating parameter values
## Implementation Approach
We chose to implement a custom Metal shader rather than using multiple 2D pooling operations or other approaches because:
1. It provides better performance for 3D data
2. It allows for more precise control over the pooling operation
3. It's consistent with how other 3D operations are implemented in PyTorch
## Alternative Approaches Considered
1. **Multiple 2D Pooling Operations**: We initially considered implementing avg_pool3d using multiple avg_pool2d operations, but this would have been less efficient and more complex to maintain.
2. **Using MPSCNNPooling**: We explored using the built-in Metal Performance Shaders for pooling, but they don't directly support 3D pooling operations.
3. **CPU Fallback**: The simplest approach would have been to fall back to CPU implementation, but this would have defeated the purpose of MPS acceleration.
## Update History
**April 19, 2025:**
Fixed issues with Metal command buffer handling when processing non-contiguous tensors. The solution ensures proper synchronization of MPS streams and correct handling of command encoders, avoiding the 'A command encoder is already encoding to this command buffer' error.
**April 20, 2025:**
- Marked PR as ready for review after comprehensive testing and verification
- Fixed linting issues and improved documentation
- Added comprehensive dimension and data type checks for better error handling
- Verified compatibility with various use cases, including those with the .out variant
This addresses issue #141287.
Fixes #151741 #141044
| true
|
3,006,318,804
|
Implement avg_pool3d for MPS backend
|
donghao1393
|
closed
|
[] | 1
|
NONE
|
This PR implements the avg_pool3d operation for the MPS backend using a custom Metal shader. This will allow users with Apple Silicon GPUs to use 3D average pooling operations without falling back to CPU.
The implementation includes:
1. A custom Metal shader for 3D average pooling
2. C++ interface to integrate with PyTorch
3. Support for both forward and backward passes
4. Comprehensive test cases
This addresses issue #141287.
| true
|
3,006,271,921
|
mps and cpu backends produce different training results with FFT and Adam
|
ChenkaiMao97
|
open
|
[
"needs reproduction",
"triaged",
"module: correctness (silent)",
"module: fft",
"module: mps"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hi, I have a model that uses 2D FFT operations, and I'm seeing convergent training results on CUDA and CPU, while getting divergent results on MPS (the loss drops for the first few steps and then explodes).
I'm not sure where the error is coming from, but I've created the minimal example below: a simple model with a Fourier layer, trained on some random data. It shows the same behaviors. Specifically,
(1) with FFT and Adam, the loss drops on the CPU backend but explodes on the MPS backend;
(2) if I change FFT to Conv2d, or Adam to SGD, the loss drops on both CPU and MPS.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
################ model definition ##################
class SpectralConv2d(nn.Module):
def __init__(self, in_channels, out_channels, hidden_freq, modes1, modes2):
super().__init__()
scale = (1 / in_channels / out_channels)
self.weights = nn.Parameter(scale * torch.rand(in_channels, out_channels, modes1, modes2, 2, dtype=torch.float32))
def compl_mul2d(self, input, weights):
return torch.einsum("bixy,ioxy->boxy", input, weights)
def forward(self, x):
batchsize = x.shape[0]
x_ft = torch.fft.rfftn(x, dim=[-2,-1])
weights = torch.view_as_complex(self.weights)
weights_r = F.interpolate(weights.real, size=(x.size(-2), x.size(-1)//2+1))
weights_i = F.interpolate(weights.imag, size=(x.size(-2), x.size(-1)//2+1))
weights = torch.view_as_complex(torch.stack((weights_r, weights_i), dim=-1))
out_ft = self.compl_mul2d(x_ft, weights)
x = torch.fft.irfftn(out_ft, s=(x.size(-2), x.size(-1)))
return x
#################### training with different backends ################
batch_size = 8
in_c = 2
out_c = 4
hidden_freq = 8
sizex, sizey = (128, 128)
modes1, modes2 = (16, 16)
def train(backend, seed = 42):
torch.manual_seed(seed)
if backend=='cpu':
device = torch.device("cpu")
elif backend=='mps':
device = torch.device("mps")
model = SpectralConv2d(in_c, out_c, hidden_freq, modes1, modes2).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
x_train = torch.randn(batch_size, in_c, sizex, sizey)
y_train = torch.randn(batch_size, out_c, sizex, sizey)
x_train = x_train.to(device)
y_train = y_train.to(device)
for step in range(1000):
out = model(x_train)
loss = criterion(out, y_train)
loss.backward()
optimizer.step()
optimizer.zero_grad()
if (step+1) % 100 == 0:
print(f"Step {(step+1):03d} | Loss: {loss.item():.6f}")
train('cpu')
train('mps')
```
output for `train('cpu')`:
> Step 100 | Loss: 0.995368
Step 200 | Loss: 0.992208
Step 300 | Loss: 0.991863
Step 400 | Loss: 0.991827
Step 500 | Loss: 0.991824
Step 600 | Loss: 0.991824
Step 700 | Loss: 0.991824
Step 800 | Loss: 0.991824
Step 900 | Loss: 0.991824
Step 1000 | Loss: 0.991824
output for `train('mps')`:
> Step 100 | Loss: 1.058992
Step 200 | Loss: 1.172400
Step 300 | Loss: 1.356889
Step 400 | Loss: 1.608124
Step 500 | Loss: 1.922639
Step 600 | Loss: 2.297220
Step 700 | Loss: 2.729716
Step 800 | Loss: 3.218872
Step 900 | Loss: 3.761904
Step 1000 | Loss: 4.357483
With smaller learning rate (e.g. 1e-5), the trends are the same.
I'm using Python 3.10.17 and torch 2.6.0 on a Mac Studio (M2 Ultra) running macOS Sequoia 15.2. Could you please check whether you can reproduce the error, and do you have suggestions on how to debug it? Thanks a lot.
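As a first narrowing step (a sketch reusing the `SpectralConv2d` module and the sizes defined above), comparing the gradients of the spectral weights after a single backward pass on both backends would show whether the divergence already appears in the FFT/complex path rather than in Adam:
```python
def grads_after_one_step(device):
    torch.manual_seed(42)
    model = SpectralConv2d(in_c, out_c, hidden_freq, modes1, modes2).to(device)
    # Inputs are drawn on CPU with the same seed, then moved, so both runs see identical data
    x = torch.randn(batch_size, in_c, sizex, sizey).to(device)
    y = torch.randn(batch_size, out_c, sizex, sizey).to(device)
    loss = nn.MSELoss()(model(x), y)
    loss.backward()
    return model.weights.grad.detach().cpu()

g_cpu = grads_after_one_step("cpu")
g_mps = grads_after_one_step("mps")
# A large difference here would implicate the FFT/complex path, independent of the optimizer
print((g_cpu - g_mps).abs().max())
```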
### Versions
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.17 | packaged by conda-forge | (main, Apr 10 2025, 22:23:34) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Ultra
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] optree==0.15.0
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] torchvision-extra-decoders==0.0.2
[conda] Could not collect
cc @mruberry @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,006,251,332
|
[Dynamo][Easy] Remove unreachable code
|
shink
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: dynamo"
] | 18
|
CONTRIBUTOR
|
This line is unreachable:
https://github.com/pytorch/pytorch/blob/f6c1cf04b5158bac7263e4708f22dab63e7456ad/torch/_dynamo/output_graph.py#L275
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,006,192,906
|
[inductor] [triton] the generated triton code throws `NameError('rindex is not defined')` when using `torch.cummin`
|
shaoyuyoung
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: The triton kernel code generated by inductor throws a **variable name undefined error**. I am not sure whether this is an inductor bug or a triton bug.
**device backend**: only triton has this issue.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
import os
os.environ['TORCHDYNAMO_VERBOSE'] = '1'
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.register_buffer('special_values', torch.tensor([1.0, -2.0, 3.0, float('nan'), float('inf')]))
def forward(self, x):
x = torch.complex(x, self.special_values)
x = torch.prod(x, dim=-1)
x = x.unsqueeze(0)
abs_x = x.abs()
values, indices = torch.cummin(abs_x, dim=0)
return indices
model = Model()
x = torch.tensor([1.0, float('inf'), -1.5, 0.0, float('nan')])
inputs = [x]
def run_test(model, inputs, device, backend):
torch.manual_seed(0)
model = model.to(device)
inputs = [x.to(device) for x in inputs]
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(f"succeed on {backend}")
except Exception as e:
print(e)
device = "cuda"
run_test(model, inputs, device, 'eager')
run_test(model, inputs, device, 'inductor')
```
### Error logs
```
succeed on eager
E0419 13:01:41.174000 1612172 site-packages/torch/_inductor/runtime/triton_heuristics.py:617] [0/0] NameError('rindex is not defined')
CompilationError: at 6:58:
def triton_poi_fused_cummin_0(out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 1
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = tl.full([XBLOCK], True, tl.int1)
tl.store(out_ptr0 + (tl.full([XBLOCK], 0, tl.int32)), rindex, None)
^
NameError('rindex is not defined')
```
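Until the codegen is fixed, a possible stopgap (a sketch; `safe_cummin` is a hypothetical helper, not from the report) is to keep `torch.cummin` out of the compiled region with `torch.compiler.disable`, so only this op runs in eager:
```python
import torch

@torch.compiler.disable
def safe_cummin(t, dim):
    # Executed eagerly even inside a torch.compile'd region,
    # sidestepping the bad triton kernel for this pattern.
    return torch.cummin(t, dim=dim)
```
The model's forward would then call `safe_cummin(abs_x, dim=0)` instead of `torch.cummin` directly, at the cost of a graph break.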
### Versions
nightly 20250418
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,006,123,877
|
[inductor] [cuda] [fake tensor] `torch.triu_indices` throws `pointer argument` error when using `[0, 0]`
|
shaoyuyoung
|
open
|
[
"triaged",
"actionable",
"oncall: pt2",
"module: fakeTensor",
"module: dynamo",
"dynamo-triage-jan2025"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: without the `[0, 0]` slice, eager throws `Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!`. However, if we use `[0, 0]` to get the first element, eager passes the check, but inductor throws the error below.
**device backend**: triton
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
import os
os.environ['TORCHDYNAMO_VERBOSE'] = '1'
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x = x + torch.triu_indices(1, 1)[0, 0] # [0, 0] is the trigger condition
return x
model = Model()
x = torch.randn(1)
inputs = [x]
def run_test(model, inputs, device, backend):
torch.manual_seed(0)
model = model.to(device)
inputs = [x.to(device) for x in inputs]
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(f"succeed on {backend}")
except Exception as e:
print(e)
device = "cuda"
run_test(model, inputs, device, 'eager')
run_test(model, inputs, device, 'inductor')
```
### Error logs
```
succeed on eager
Pointer argument (at 1) cannot be accessed from Triton (cpu tensor?)
```
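For reference, a possible workaround (a sketch; `PatchedModel` is a hypothetical name, assuming the intent is simply to keep the indices on the GPU) is to pass `device` to `torch.triu_indices`, so no CPU tensor reaches the compiled kernel:
```python
import torch

class PatchedModel(torch.nn.Module):
    def forward(self, x):
        # Create the indices directly on x's device to avoid the cpu/cuda mix
        return x + torch.triu_indices(1, 1, device=x.device)[0, 0]

x = torch.randn(1, device="cuda")
print(PatchedModel()(x))
print(torch.compile(PatchedModel())(x))  # expected to match eager
```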
### Versions
nightly 20250414
cc @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @bdhirsh
| true
|