| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,950,254,513
|
[aotd] Support saved tensors hooks in aot_autograd
|
IvanKobzarev
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 26
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150032
https://github.com/pytorch/pytorch/issues/148222
Goal:
At the moment, autograd saved tensors hooks are run in eager mode after the compiled forward, and they are executed at the same time for all saved tensors. Hooks can be used to reduce the amount of memory used for saved tensors, e.g. by doing quantization or offloading to CPU, so running them all at once is suboptimal for peak memory.
A better solution is to put the hooks in the graph, as close as possible to the last usage of each tensor. This PR gets user-specified autograd saved tensors hooks into the graph.
Logic:
UX:
If the user specifies torch.autograd.graph.saved_tensors_hooks(pack_gm, unpack_gm), where pack_gm and unpack_gm are torch.fx.GraphModules, then AotAutograd will retrace those graph modules, doing decompositions and functionalization in aot_autograd and inlining the resulting graphs into the forward epilogue and backward prologue.
The user may want control logic in the hooks, for example applying quantization only for specific dtypes and sizes. This is also possible: the user can put that logic into a torch.fx.wrap function and use symbolic trace to make a GraphModule. In that case AotAutograd caching will work only if the user explicitly sets the "user_cache_hash" metadata on the torch.fx.wrap call_function node.
If this metadata is set, the aot_autograd cache can use the saved cache artifact; if it is not set, the cache is bypassed.
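To make the UX concrete, here is a minimal user-side sketch (not code from this PR) of fp8 pack/unpack hooks traced into torch.fx.GraphModules, assuming all saved activations are float32; it mirrors the pack_float8 example in the logs further down.
```python
import torch

# Minimal sketch, assuming float32 saved tensors: quantize to fp8 on pack,
# dequantize on unpack. Names and the toy function are illustrative only.
def pack_fp8(x):
    return x.to(torch.float8_e4m3fn)

def unpack_fp8(packed):
    return packed.to(torch.float32)

# symbolic_trace turns the hooks into torch.fx.GraphModules, the form this
# PR expects so that AotAutograd can retrace and inline them.
pack_gm = torch.fx.symbolic_trace(pack_fp8)
unpack_gm = torch.fx.symbolic_trace(unpack_fp8)

@torch.compile
def fn(x):
    return (x + 1).relu().sum()

x = torch.randn(8, 8, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack_gm, unpack_gm):
    fn(x).backward()
```
With the changes in this PR, AotAutograd would then retrace pack_gm/unpack_gm and inline them into the forward epilogue and backward prologue, as described below.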
Dynamo:
Dynamo traces the pack and unpack hooks, installs them as subgraphs, and explicitly adds them to the output_graph (as those subgraphs are not used, they would not be copied into the result by default).
The complexity here is that at this point we do not have example inputs for the hooks, so we trace pack_hook with some Tensor from the inputs.
The resulting subgraphs are added to the hashing of the AotAutograd cache.
In AotAutograd we retrace the graph with the true saved tensors coming from the partitioner.
Backwards Compatibility:
As current hooks are executed in eager mode and not all of them will be traceable, we only try to put hooks in the graph that the user has explicitly marked with the annotation (@_inlineable_saved_tensors_hooks). For other hooks, or if compiled autograd is enabled, we keep the existing logic.
Recompilations:
Hooks are guarded with a lambda guard matching the function id, so that changing the hooks triggers recompilation when the user reruns the compiled function.
Aot_autograd:
After the partitioner has prepared the forward and backward modules, we trace the pack and unpack graphs prepared by Dynamo and inline them into the epilogue of the forward and the prologue of the backward. Forward outputs and backward inputs change, transparently for the user.
We do not try to place the hooks close to the last usage etc., relying on Inductor to do this optimization.
```
INFO: TRACED GRAPH
===== Forward graph pre saved_tensors_hooks inlining 3 =====
/data/users/ivankobzarev/a/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "Sym(s0)", primals_2: "Sym(s1)", primals_3: "f32[s0, s1][s1, 1]cuda:0"):
# File: /data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py:6660 in simple_fn, code: x = x + 1
add: "f32[s0, s1][s1, 1]cuda:0" = torch.ops.aten.add.Tensor(primals_3, 1); primals_3 = None
# File: /data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py:6661 in simple_fn, code: x = SAF.apply(x)
view: "f32[s0, s1][s1, 1]cuda:0" = torch.ops.aten.view.default(add, [primals_1, primals_2])
return (view, add, primals_1, primals_2)
INFO: TRACED GRAPH
===== Backward graph pre saved_tensors_hooks inlining 3 =====
/data/users/ivankobzarev/a/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "Sym(s0)", primals_2: "Sym(s1)", primals_3: "f32[s0, s1][s1, 1]cuda:0"):
# File: /data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py:6660 in simple_fn, code: x = x + 1
add: "f32[s0, s1][s1, 1]cuda:0" = torch.ops.aten.add.Tensor(primals_3, 1); primals_3 = None
# File: /data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py:6661 in simple_fn, code: x = SAF.apply(x)
view: "f32[s0, s1][s1, 1]cuda:0" = torch.ops.aten.view.default(add, [primals_1, primals_2])
return (view, add, primals_1, primals_2)
INFO: TRACED GRAPH
===== saved_tensors_pack_hook add 3 =====
/data/users/ivankobzarev/a/pytorch/torch/fx/_lazy_graph_module.py class pack_float8(torch.nn.Module):
def forward(self, x_1: "f32[s0, s1][s1, 1]cuda:0"):
# No stacktrace found for following nodes
_to_copy: "f8e4m3fn[s0, s1][s1, 1]cuda:0" = torch.ops.aten._to_copy.default(x_1, dtype = torch.float8_e4m3fn); x_1 = None
return (torch.float32, _to_copy)
INFO: TRACED GRAPH
===== saved_tensors_unpack_hook add 3 =====
<eval_with_key>.22 from /data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py:1225 in wrapped class pack_float8(torch.nn.Module):
def forward(self, x_1: "f32[s0, s1][s1, 1]cuda:0"):
# No stacktrace found for following nodes
_to_copy: "f8e4m3fn[s0, s1][s1, 1]cuda:0" = torch.ops.aten._to_copy.default(x_1, dtype = torch.float8_e4m3fn); x_1 = None
return (torch.float32, _to_copy)
INFO: TRACED GRAPH
===== Forward graph 3 =====
/data/users/ivankobzarev/a/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "Sym(s0)", primals_2: "Sym(s1)", primals_3: "f32[s0, s1][s1, 1]cuda:0"):
# File: /data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py:6660 in simple_fn, code: x = x + 1
add: "f32[s0, s1][s1, 1]cuda:0" = torch.ops.aten.add.Tensor(primals_3, 1); primals_3 = None
# No stacktrace found for following nodes
_to_copy: "f8e4m3fn[s0, s1][s1, 1]cuda:0" = torch.ops.aten._to_copy.default(add, dtype = torch.float8_e4m3fn)
# File: /data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py:6661 in simple_fn, code: x = SAF.apply(x)
view: "f32[s0, s1][s1, 1]cuda:0" = torch.ops.aten.view.default(add, [primals_1, primals_2]); add = None
return (view, _to_copy, primals_1, primals_2)
INFO: TRACED GRAPH
===== Backward graph 3 =====
<eval_with_key>.21 class GraphModule(torch.nn.Module):
def forward(self, primals_1: "Sym(s0)", primals_2: "Sym(s1)", add_packed_2: "f8e4m3fn[s0, s1][s1, 1]cuda:0", tangents_1: "f32[s0, s1][s1, 1]cuda:0"):
# No stacktrace found for following nodes
_to_copy: "f32[s0, s1][s1, 1]cuda:0" = torch.ops.aten._to_copy.default(add_packed_2, dtype = torch.float32); add_packed_2 = None
# File: /data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py:6661 in simple_fn, code: x = SAF.apply(x)
add_7: "f32[s0, s1][s1, 1]cuda:0" = torch.ops.aten.add.Tensor(tangents_1, _to_copy); tangents_1 = _to_copy = None
return (None, None, add_7)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72187044](https://our.internmc.facebook.com/intern/diff/D72187044)
| true
|
2,950,234,906
|
Fixes detection of ArmPL on Linux platform
|
milpuz01
|
closed
|
[
"triaged",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 16
|
CONTRIBUTOR
|
On Linux, detection failed to find the bin directory because it wasn't looking for armpl-info, which is the only file in that directory on Linux. This change also adds a link to the math library, since it is required to link against when checking for LAPACK functions.
Fixes #149610
cc @malfet @snadampal @aditew01 @nikhil-arm @fadara01
| true
|
2,950,164,384
|
[export] Save unflattened gm
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary: Reland of D71082652
Test Plan:
https://www.internalfb.com/intern/testinfra/testrun/8444249558423545
https://www.internalfb.com/intern/testinfra/testrun/7318349652864293
https://www.internalfb.com/intern/testinfra/testrun/13229323980143778
https://www.internalfb.com/intern/testinfra/testrun/11540474119884081
Differential Revision: D71902033
| true
|
2,950,149,537
|
[DRAFT] PR to regenerate docker images for nccl release
|
atalman
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
This PR should regenerate docker images
| true
|
2,950,144,369
|
[TD] Enable TD on distributed cpu
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Enable TD on distributed cpu; I think the only reason it isn't enabled already is that I forgot to enable it.
Get rid of some of the statements that are no-ops:
* asan uses default shard
* nogpu got moved to periodic
* no windows cuda testing anymore
The only thing on pull and trunk that doesn't use TD is dynamo_wrapped, but I think it's fast enough to be OK for now; we can take another look after this.
| true
|
2,950,135,729
|
torch.nextafter(0, 1) returns 0 on MPS device
|
ogrisel
|
open
|
[
"module: printing",
"triaged",
"module: NaNs and Infs",
"module: third_party",
"module: mps"
] | 5
|
NONE
|
### 🐛 Describe the bug
The `torch.nextafter` function seems to return invalid results on the "mps" device using an Apple M1 processor on macOS 15.2 (24C101).
```python
>>> import torch
>>> torch.nextafter(torch.zeros(1, device="cpu"), torch.ones(1, device="cpu"))
tensor([1.4013e-45])
>>> torch.nextafter(torch.zeros(1, device="mps"), torch.ones(1, device="mps"))
tensor([0.], device='mps:0')
```
I also tried with CUDA on another machine, and it returns the same as the CPU device (and NumPy). So this seems to be an MPS-specific bug.
### Versions
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:19:53) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] mypy_extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] numpydoc==1.8.0
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[pip3] torchtext==0.18.0
[conda] libtorch 2.6.0 cpu_generic_h6100933_3 conda-forge
[conda] nomkl 1.0 h5ca1d4c_0 conda-forge
[conda] numpy 2.1.3 py312h94ee1e1_0 conda-forge
[conda] numpydoc 1.8.0 pyhd8ed1ab_1 conda-forge
[conda] optree 0.14.1 py312hb23fbb9_1 conda-forge
[conda] pytorch 2.6.0 cpu_generic_py312_heeb16a7_3 conda-forge
[conda] torchtext 0.18.0 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,950,069,930
|
DISABLED test_foreach_check_stride_ignore_dims_of_one_cuda_float32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_check_stride_ignore_dims_of_one_cuda_float32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39431608801).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_check_stride_ignore_dims_of_one_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 566, in test_foreach_check_stride_ignore_dims_of_one
out = foreach_add_check_(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_add', keys=('aten::_foreach_add', 'Unrecognized', 'aten::result_type', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_foreach_check_stride_ignore_dims_of_one_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,950,056,965
|
Add a param for save format in Storage Writer
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (checkpoint)"
] | 13
|
CONTRIBUTOR
|
Summary: Add a param to specify to the storage writer how to save tensors. Right now the only options are safetensors and torch.save.
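As a rough illustration of the new knob (all names here are hypothetical, not the API added by this diff), the writer would branch on a format parameter roughly like this:
```python
from enum import Enum

import torch
from safetensors.torch import save_file  # assumes safetensors is available


class SaveFormat(Enum):  # hypothetical name, for illustration only
    SAFETENSORS = "safetensors"
    TORCH_SAVE = "torch_save"


def write_tensors(tensors: dict, path: str, fmt: SaveFormat) -> None:
    # Sketch of a storage writer honoring a save-format parameter.
    if fmt is SaveFormat.SAFETENSORS:
        save_file(tensors, path)   # safetensors serialization
    else:
        torch.save(tensors, path)  # torch.save serialization
```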
Test Plan:
(lintrunner) [ankitageorge@devgpu003.cco3 /data/users/ankitageorge/fbsource/fbcode/caffe2 (1d57cb27b)]$ buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/distributed/checkpoint:test_hf_storage
File changed: fbcode//caffe2/torch/distributed/checkpoint/filesystem.py
Buck UI: https://www.internalfb.com/buck2/e80cc963-e34a-4876-b6f4-7ce2794e48dd
Test UI: https://www.internalfb.com/intern/testinfra/testrun/3659174965882569
Network: Up: 32KiB Down: 1.9KiB (reSessionID-ef9fa764-a40a-451b-ab58-08eabe7a9422)
Executing actions. Remaining 0/4 3.4s exec time total
Command: test. Finished 2 local
Time elapsed: 19.6s
Tests finished: Pass 4. Fail 0. Fatal 0. Skip 0. Build failure 0
Reviewed By: saumishr
Differential Revision: D70271943
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,949,904,279
|
[MPS] Preserve in/out dtypes in binary_op name
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150024
To be consistent with unary ops and to avoid silent correctness problems if someone tries to invoke the op with an unexpected out dtype.
| true
|
2,949,748,340
|
Fix sparse CUTLASS-based kernels
|
alexsamardzic
|
closed
|
[
"module: sparse",
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150023
* #149978
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @ptrblck @msaroufim @eqy
| true
|
2,949,678,966
|
Dynamic Shapes with **kwargs
|
xadupre
|
open
|
[
"module: regression",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
This used to work two weeks ago.
```python
import torch
class Model(torch.nn.Module):
def forward(self, **kwargs):
return kwargs["x"] + kwargs["y"]
x, y = torch.randn(2, 3), torch.randn(2, 3)
Model()(x=x, y=y)
ds = {
"kwargs": {
"x": {0: torch.export.Dim("batch")},
"y": {0: torch.export.Dim("batch")},
}
}
ep = torch.export.export(Model(), tuple(), kwargs={"x": x, "y": y}, dynamic_shapes=ds)
```
Error:
```
File "torch/_export/non_strict_utils.py", line 383, in make_constraints
flat_dynamic_shapes = _flatten_dynamic_shapes(combined_args, dynamic_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_export/non_strict_utils.py", line 268, in _flatten_dynamic_shapes
_tree_map_with_path(_tree_map_helper, combined_args, dynamic_shapes)
File "torch/export/dynamic_shapes.py", line 534, in _tree_map_with_path
assert tree_name, "Must provide a tree_name when there might be a mismatch"
^^^^^^^^^
AssertionError: Must provide a tree_name when there might be a mismatch
```
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250324+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] model-explorer-onnx==0.3.4
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-array-api==0.3.0
[pip3] onnx-diagnostic==0.2.1
[pip3] onnx-extended==0.4.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-genai-cuda==0.6.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] onnxscript==0.3.0.dev20250301
[pip3] optree==0.14.1
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250324+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250324+cu126
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250324+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,949,571,244
|
[ued] I want to see the full sizes/strides in TORCH_LOGS=recompiles
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"empathy-day"
] | 0
|
CONTRIBUTOR
|
Sometimes I use the shapes of the Tensors to identify them. E.g.
```
DEBUG:torch._dynamo.guards.__recompiles:Recompiling function forward in /home/rzou/dev/kokoro/kokoro/kokoro/istftnet.py:378
triggered by the following guard failure(s):
- 10/1: tensor 'x' size mismatch at index 1. expected 512, actual 256
- 10/0: tensor 'x' size mismatch at index 1. expected 512, actual 256
```
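For reference, a minimal toy repro (my stand-in, not the original model) that produces this kind of recompile message when run with `TORCH_LOGS=recompiles`:
```python
# Run as: TORCH_LOGS=recompiles python repro.py
import torch

@torch.compile(dynamic=False)
def forward(x):
    return x.relu()

forward(torch.randn(4, 512))  # first compilation
forward(torch.randn(4, 256))  # size change at index 1 triggers the recompile log
```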
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,949,361,493
|
windows linker receives wrong non existent path
|
loscrossos
|
closed
|
[
"open source",
"release notes: python_frontend",
"topic: bug fixes"
] | 9
|
NONE
|
Fix for issue #149889, where the Windows linker receives a wrong path.
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,949,334,597
|
[Inductor] Inconsistency results after compilation with the inductor
|
Cookiee235
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
model = torch.nn.Sequential(
torch.nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
torch.nn.ReLU(),
torch.nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
).cuda()
inputs = torch.randn(1, 3, 32, 32, device='cuda')
res = model(inputs)
compiled_model = torch.compile(model, backend='inductor')
with torch.no_grad():
compiled_out = compiled_model(inputs)
torch.testing.assert_close(res, compiled_out)
```
### Error logs
```
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/torch_tests/03-26_18-09/38.py", line 14, in <module>
torch.testing.assert_close(res, compiled_out)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 477 / 65536 (0.7%)
Greatest absolute difference: 2.0518898963928223e-05 at index (0, 9, 31, 14) (up to 1e-05 allowed)
Greatest relative difference: 0.010188400745391846 at index (0, 23, 31, 15) (up to 1.3e-06 allowed)
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,949,295,748
|
Allow TritonTemplate subclasses to override kernel type
|
ahmadsarvmeily
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
Allows subclasses of `TritonTemplate` to override the kernel type, e.g.
```
class MyTritonTemplate(TritonTemplate):
kernel_type = MyTritonTemplateKernel
```
This means that all of the logic in the `TritonTemplate` class doesn't need to be duplicated in subclasses if the only required change is the kernel type.
Note that there is precedent for doing this - see `SIMDScheduling` in `torch/_inductor/codegen/simd.py`:
```
class SIMDScheduling(BaseScheduling):
kernel_type: type[Any] = SIMDKernel # override in subclass
...
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,949,264,564
|
`__setitem__` with bool mask and dtype mismatch fails
|
crusaderky
|
open
|
[
"triaged",
"module: type promotion",
"module: advanced indexing"
] | 1
|
NONE
|
### 🐛 Describe the bug
`x[idx] = v` fails when
- `x` has ndim=0, and
- `idx` is a boolean mask (also with ndim=0), and
- `v` requires type promotion.
```python
import torch as xp
x = xp.asarray([0], dtype=xp.float64)
x[x==0] = xp.asarray(1, dtype=xp.float32) # OK
x = xp.asarray(0, dtype=xp.float64)
x[()] = xp.asarray(1, dtype=xp.float32) # OK
x[x==0] = xp.asarray(1, dtype=xp.float64) # OK
x[x==0] = xp.asarray(1, dtype=xp.float32) # error
# RuntimeError: Index put requires the source and destination dtypes match, got Double for the destination and Float for the source.
```
### Versions
torch 2.6.0 cpu
cc @nairbv @mruberry
| true
|
2,949,227,009
|
[ROCm] RuntimeError: HIPBLAS_STATUS_NOT_SUPPORTED in torch 2.6.0+rocm6.2.4
|
zhaohm14
|
closed
|
[
"high priority",
"module: nn",
"module: rocm",
"triaged"
] | 14
|
NONE
|
I encountered a RuntimeError when performing a specific operation with PyTorch 2.6.0+rocm6.2.4 (and also 2.8.0.dev20250325+rocm6.3), while the same code works fine on PyTorch 2.5.1+rocm6.2. This appears to be a regression introduced in the newer version.
Here is the error message:
```log
File "/root/miniconda3/envs/torch2.6/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/torch2.6/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/envs/torch2.6/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward
input = module(input)
File "/root/miniconda3/envs/torch2.6/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/torch2.6/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/envs/torch2.6/lib/python3.13/site-packages/torch/nn/modules/linear.py", line 125, in forward
return F.linear(input, self.weight, self.bias)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: HIPBLAS_STATUS_NOT_SUPPORTED when calling `HIPBLAS_STATUS_NOT_SUPPORTED`
```
Here is the test code that triggers the error:
```python
import torch
import torch.nn as nn
torch.set_default_device('cuda:0')
def test(layer, x):
try:
layer(x)
return True, None
except RuntimeError as e:
return False, e
layer = nn.Linear(2048, 2048)
x = torch.randn((1, 32768, 2048))[:, 0]
print(test(layer, x))
```
- torch 2.6.0+rocm6.2.4 (or 2.8.0.dev20250325+rocm6.3):
``(False, RuntimeError('CUDA error: HIPBLAS_STATUS_NOT_SUPPORTED when calling `HIPBLAS_STATUS_NOT_SUPPORTED`'))``
- torch 2.5.1+rocm6.2:
``(True, None)``
<details open><summary>I ran some further tests:</summary>
```python
# test1 ok
print(1, test(nn.Linear(2048, 2048), torch.randn((1, 32767, 2048))[:, 0]))
# test2 ok
print(2, test(nn.Linear(2047, 2047), torch.randn((1, 32784, 2047))[:, 0]))
# test3 fail
print(3, test(nn.Linear(2047, 2047), torch.randn((1, 32785, 2047))[:, 0]))
# test4 ok
print(4, test(nn.Linear(1024, 1024), torch.randn((1, 65535, 1024))[:, 0]))
# test5 fail
print(5, test(nn.Linear(1024, 1024), torch.randn((1, 65536, 1024))[:, 0]))
# test6 ok
print(6, test(nn.Linear(2048, 2048), torch.randn((16, 32767, 2048))[:, 0]))
# test7 fail
print(7, test(nn.Linear(2048, 2048), torch.randn((16, 32768, 2048))[:, 0]))
# test8 fail
layer = nn.Sequential(nn.Linear(2048, 2048), nn.ReLU())
x = torch.randn((1, 32768, 2048))[:, 0]
print(8, test(layer, x))
# test9 ok
layer = nn.Sequential(nn.ReLU(), nn.Linear(2048, 2048))
x = torch.randn((1, 32768, 2048))[:, 0]
print(9, test(layer, x))
```
</details>
And here are some conclusions:
- This error only occurs when `x` is directly fed into `nn.Linear`. If `x` passes through a `ReLU` layer first, it will work fine.
- This error only occurs when `x` is sliced from a tensor with `shape[1] * shape[2] >= 2**26`, and has nothing to do with `shape[0]`.
<details><summary>My environment:</summary>
```log
PyTorch version: 2.6.0+rocm6.2.4
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41134-65d174c3e
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.13.1 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:29:23) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41134
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
BIOS Vendor ID: Advanced Micro Devices, Inc.
Model name: AMD EPYC 9534 64-Core Processor
BIOS Model name: AMD EPYC 9534 64-Core Processor Unknown CPU @ 2.4GHz
BIOS CPU family: 107
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 45%
CPU max MHz: 3718.0659
CPU min MHz: 1500.0000
BogoMIPS: 4892.86
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] pytorch-triton-rocm==3.2.0
[pip3] torch==2.6.0+rocm6.2.4
[pip3] torchvision==0.21.0+rocm6.2.4
[conda] numpy 2.2.2 pypi_0 pypi
[conda] pytorch-triton-rocm 3.2.0 pypi_0 pypi
[conda] torch 2.6.0+rocm6.2.4 pypi_0 pypi
[conda] torchvision 0.21.0+rocm6.2.4 pypi_0 pypi
```
</details>
I hope this report helps in identifying and resolving the issue. Thank you for your attention!
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,949,187,514
|
Refactor cudnn version check in smoke test for Windows
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
After https://github.com/pytorch/pytorch/pull/149885
I see failures in the Windows smoke test:
https://github.com/pytorch/test-infra/actions/runs/14069923716/job/39401550854
This is due to the fact that PyPI packages such as cudnn and nccl are installed only on Linux; on Windows, cuDNN is shipped with PyTorch as opposed to being installed dynamically. Hence this change should resolve the issue on the Windows platform.
| true
|
2,949,169,514
|
Test Github Runner behaviors
|
iremyux
|
closed
|
[
"open source",
"ciflow/binaries",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Fake PR to test Github Runner behaviors
| true
|
2,949,074,618
|
Add path used by pip's build isolation procedure to DLL search
|
Luthaf
|
open
|
[
"triaged",
"open source"
] | 2
|
CONTRIBUTOR
|
Re-opening a new PR after #131340 and #140535, which were closed for being stale without a review.
---
Without this, trying to `import torch` in a downstream `setup.py` file would result in
```
The specified module could not be found. Error loading "C:\...\pip-build-env-himl3xh3\normal\Lib\site-packages\torch\lib\shm.dll" or one of its dependencies."
```
This seems to be because pip does not use a full virtualenv for build isolation, instead creating directories and manually adding them to `sys.path`. The same issue does not seem to apply when using `python -m build`.
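For illustration, a simplified sketch (not the exact patch in this PR) of the kind of fix: on Windows, register the `Library\bin` directory that pip's isolated build environment creates next to `Lib\site-packages`, so that dependent DLLs can be resolved:
```python
import os
import sys

# Simplified sketch, assuming pip's layout of ...\normal\Lib\site-packages
# for packages and ...\normal\Library\bin for the DLLs (e.g. MKL).
if sys.platform == "win32":
    for entry in sys.path:
        base = os.path.normpath(os.path.join(entry, "..", ".."))
        candidate = os.path.join(base, "Library", "bin")
        if os.path.isdir(candidate):
            os.add_dll_directory(candidate)  # extend the DLL search path
```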
---
To reproduce, you can create a directory with two files:
```toml
# pyproject.toml
[project]
name = "windows-torch-mkl-pip"
version = "0.0.0"
[build-system]
requires = [
"setuptools",
"torch"
]
```
```py
# setup.py
from setuptools import setup
import torch
setup()
```
Then, trying to build a wheel with `pip install .` will give some output similar to:
```
Installing collected packages: tbb, mpmath, intel-openmp, typing-extensions, sympy, numpy, networkx, mkl, MarkupSafe, fsspec, filelock, jinja2, torch
Creating C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-himl3xh3\normal\Scripts
Successfully installed MarkupSafe-2.1.5 filelock-3.14.0 fsspec-2024.6.0 intel-openmp-2021.4.0 jinja2-3.1.4 mkl-2021.4.0 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 sympy-1.12.1 tbb-2021.12.0 torch-2.3.1+cpu typing-extensions-4.12.2
Created temporary directory: C:\Users\runneradmin\AppData\Local\Temp\pip-modern-metadata-ascqww5w
Preparing metadata (pyproject.toml): started
Running command Preparing metadata (pyproject.toml)
Traceback (most recent call last):
File "C:\Users\runneradmin\AppData\Local\Temp\cibw-run-7yztij8w\cp312-win_amd64\build\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\runneradmin\AppData\Local\Temp\cibw-run-7yztij8w\cp312-win_amd64\build\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\runneradmin\AppData\Local\Temp\cibw-run-7yztij8w\cp312-win_amd64\build\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 149, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-himl3xh3\overlay\Lib\site-packages\setuptools\build_meta.py", line 366, in prepare_metadata_for_build_wheel
self.run_setup()
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-himl3xh3\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 295, in <module>
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-himl3xh3\normal\Lib\site-packages\torch\__init__.py", line 143, in <module>
raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-himl3xh3\normal\Lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
error: subprocess-exited-with-error
Preparing metadata (pyproject.toml) did not run successfully.
exit code: 1
See above for output.
```
Torch is properly installed in `C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-himl3xh3\normal\Lib\site-packages\torch\` and all the mkl libraries are in `C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-himl3xh3\normal\Library\bin`, but this directory is not covered by existing DLL paths.
---
This is similar to #125109, and the fix is similar to #125684. Ping @atalman and @malfet since you fixed & reviewed the previous similar fix.
| true
|
2,948,895,924
|
Inconsistent behavior of topk function when processing identical maximum tensor values
|
Topoer-seu
|
closed
|
[
"triaged",
"module: sorting and selection"
] | 1
|
NONE
|
### 🐛 Describe the bug
# Inconsistent behavior of topk function when processing identical maximum values in single vs batch mode
## Environment Information
- PyTorch version: 2.2.0
- CUDA version: 11.8
- cuDNN version: 8.7.0
- Python version: 3.8
## Problem Description
I've identified an inconsistency in the `topk` function's behavior when processing tensors with identical maximum values in single sample mode versus batch processing mode.
When a tensor has multiple elements with the same maximum value, the `topk` function should theoretically return the index of the first occurrence consistently. However, I've observed that the results differ depending on whether the tensor is processed individually or as part of a batch.
## Reproducible Example
Consider this sample tensor with identical maximum values at indices 0 and 1:
```python
sample_165 = torch.tensor([[18.3906, 18.3906, 17.5938, 17.9844, 15.1172, 18.3594, 18.3438, 15.7812, 17.8438, 17.6719]], device='cuda:0', dtype=torch.float16)
```
When I process this single sample, `topk` returns index 0 (as expected since it's the first occurrence of the maximum value):
```python
import torch
def cls_acc(output, target, topk=1):
pred = output.topk(topk, 1, True, True)[1].t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
acc = float(correct[:topk].reshape(-1).float().sum(0, keepdim=True).cpu().numpy())
acc = 100 * acc / target.shape[0]
return acc, pred
# Single sample processing
sample_165 = torch.tensor([[18.3906, 18.3906, 17.5938, 17.9844, 15.1172, 18.3594, 18.3438, 15.7812, 17.8438, 17.6719]], device='cuda:0', dtype=torch.float16)
_, single_pred = cls_acc(sample_165, torch.tensor([1], device='cuda:0'))
print(f"Single sample prediction index: {single_pred.item()}") # Returns 0
```
However, when the same sample is processed as part of a batch (e.g., index 165 in a larger tensor), topk sometimes returns index 1 instead:
```python
# Batch processing
tot_logits = torch.load('tot_logits.pt') # A tensor containing sample_165 at index 165
tot_targets = torch.load('tot_targets.pt')
_, pred = cls_acc(tot_logits, tot_targets)
print(f"Batch processing prediction index for sample 165: {pred[0, 165].item()}") # Returns 1
```
## Expected Behavior
The topk function should consistently return the same index (in this case, index 0) for identical maximum values, regardless of whether the tensor is processed individually or as part of a batch.
## Actual Behavior
- Single sample processing: returns index 0
- Batch processing: returns index 1 for the same sample
## Impact
This inconsistency causes problems when calculating model accuracy, as the predictions change depending on the evaluation method. It's especially problematic in cases where we need accurate and reproducible evaluation metrics.
## Additional Notes
This behavior appears to be related to CUDA optimization and parallel processing strategies. When running on CPU, the results might be more consistent, but on CUDA, the behavior varies.
I've observed that this issue specifically occurs with float16 data type, which may be more susceptible to numerical precision issues.
### Versions
Collecting environment information...
PyTorch version: 2.2.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.20 (default, Oct 3 2024, 15:24:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 7
CPU max MHz: 3900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.2.0
[pip3] torchaudio==2.2.0
[pip3] torchvision==0.17.0
[pip3] triton==2.2.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.5.147 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.24.3 py38h14f4228_0
[conda] numpy-base 1.24.3 py38h31eccc5_0
[conda] pytorch 2.2.0 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.2.0 py38_cu118 pytorch
[conda] torchtriton 2.2.0 py38 pytorch
[conda] torchvision 0.17.0 py38_cu118 pytorch
| true
|
2,948,810,336
|
[Windows][CPU] 51 UT of inductor/test_torchinductor_opinfo.py failed with CPP wrapper
|
LifengWang
|
open
|
[
"module: windows",
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
51 UTs in inductor/test_torchinductor_opinfo.py hit a C++ compile error with the CPP wrapper. The PyTorch version used is the 0324 nightly whl. Please take a look at the attached log for the details:
[cpp_test_torchinductor_opinfo.log](https://github.com/user-attachments/files/19463359/cpp_test_torchinductor_opinfo.log)
PowerShell reproduce script:
```
$env:TORCHINDUCTOR_WINDOWS_TESTS = 1
$env:TORCHINDUCTOR_CPP_WRAPPER = 1
pytest -v test/inductor/test_torchinductor_opinfo.py -k 'linalg or to_sparse'
```
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows Server 2022 Datacenter Evaluation (10.0.20348 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:07:43) [MSC v.1942 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.20348-SP0
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
----------------------
Name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2594
MaxClockSpeed: 2594
L2CacheSize: 40960
L2CacheSpeed: None
Revision: 27142
----------------------
Name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU1
CurrentClockSpeed: 2594
MaxClockSpeed: 2594
L2CacheSize: 40960
L2CacheSpeed: None
Revision: 27142
Versions of relevant libraries:
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.13.0
[pip3] torch==2.8.0.dev20250324+cpu
[pip3] torchaudio==2.6.0.dev20250324+cpu
[pip3] torchvision==0.22.0.dev20250324+cpu
[conda] mkl-include 2025.1.0 pypi_0 pypi
[conda] mkl-static 2025.1.0 pypi_0 pypi
[conda] numpy 1.22.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.8.0.dev20250324+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250324+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250324+cpu pypi_0 pypi
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu
| true
|
2,948,760,576
|
[ROCm] Update CUDAPluggableAllocator.h (#1984)
|
amd-sriram
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: rocm",
"ciflow/periodic",
"ciflow/rocm",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300"
] | 11
|
CONTRIBUTOR
|
Alters the flag used to select the correct streamType in the CUDAPluggableAllocator class for ROCm GPUs. The TORCH_HIP_VERSION flag does not work for ROCm as intended, so it is replaced with USE_ROCM. This was impacting Distributed Fused Adam in ROCm/APEX when using the nccl_ub feature. It has been tested with rocm/apex.
See PR https://github.com/ROCm/apex/pull/184
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,948,528,314
|
Missing detail explanation for `torch.Tensor.fill_`
|
zeshengzong
|
closed
|
[
"module: docs",
"triaged",
"module: python frontend"
] | 3
|
CONTRIBUTOR
|
### 📚 The doc issue
Usually a tensor op has a link to a detailed explanation; for example, [torch.Tensor.floor](https://pytorch.org/docs/stable/generated/torch.Tensor.floor.html) has `See torch.floor()`.
But for [torch.Tensor.fill_](https://pytorch.org/docs/stable/generated/torch.Tensor.fill_.html) there seems to be no counterpart with a detailed document.
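For reference, the in-place behavior that currently has no detailed counterpart page is simply:
```python
import torch

t = torch.empty(2, 3)
t.fill_(3.5)  # fills every element of t in place with the scalar 3.5
print(t)
# tensor([[3.5000, 3.5000, 3.5000],
#         [3.5000, 3.5000, 3.5000]])
```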
### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
2,948,485,809
|
Mutating a non-functional tensor with a functional tensor is not allowed.
|
liye0626
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 1
|
NONE
|
### 🐛 Describe the bug
```python
from torch.export import export_for_training
...
exported_module = export_for_training(
self.model,
self.example_inputs,
kwargs=self.example_kwarg_inputs,
dynamic_shapes=dynamic_shape,
)
# The line of related code in model forward
dim = -3
tgt = past_key_values_data.index_select(dim, select_indices.to(past_key_values_data.device))
dst = past_key_values_data.narrow(dim, prev_input_len, tgt.shape[dim])
dst.copy_(tgt, non_blocking=True)
```
When I run the above code, I get an error "RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/build/aten/src/ATen/RegisterFunctionalization_0.cpp":3941, please report a bug to PyTorch. mutating a non-functional tensor with a functional tensor is not allowed. Please ensure that all of your inputs are wrapped inside of a functionalize() call."
related code: dst.copy_(tgt, non_blocking=True)
Full Log
```
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/user/workspace/project/2025/executorch/examples/models/llama/export_llama.py", line 32, in <module>
main() # pragma: no cover
File "/home/user/workspace/project/2025/executorch/examples/models/llama/export_llama.py", line 28, in main
export_llama(args)
File "/home/user/workspace/project/2025/executorch/examples/models/llama/export_llama_lib.py", line 520, in export_llama
builder = _export_llama(args)
File "/home/user/workspace/project/2025/executorch/examples/models/llama/export_llama_lib.py", line 662, in _export_llama
).export_to_edge()
File "/usr/local/lib/python3.10/dist-packages/executorch/extension/llm/export/builder.py", line 357, in export_to_edge
self.edge_manager = export_to_edge(
File "/usr/local/lib/python3.10/dist-packages/executorch/extension/export_util/utils.py", line 94, in export_to_edge
return _core_aten_to_edge(
File "/usr/local/lib/python3.10/dist-packages/executorch/extension/export_util/utils.py", line 65, in _core_aten_to_edge
edge_manager: EdgeProgramManager = to_edge(
File "/usr/local/lib/python3.10/dist-packages/executorch/exir/program/_program.py", line 1143, in to_edge
program = program.run_decompositions(_default_decomposition_table())
File "/usr/local/lib/python3.10/dist-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/export/exported_program.py", line 1310, in run_decompositions
return _decompose_exported_program(
File "/usr/local/lib/python3.10/dist-packages/torch/export/exported_program.py", line 784, in _decompose_exported_program
) = _decompose_and_get_gm_with_new_signature_constants(
File "/usr/local/lib/python3.10/dist-packages/torch/export/exported_program.py", line 472, in _decompose_and_get_gm_with_new_signature_constants
aten_export_artifact = _export_to_aten_ir(
File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 736, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1357, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1596, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 582, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 683, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
flat_f_outs = f(*flat_f_args)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
out = PropagateUnbackedSymInts(mod).run(
File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/symbolic_shapes.py", line 6826, in run_node
result = super().run_node(n)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 310, in call_function
return target(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 722, in __call__
return self._op(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/functional_tensor.py", line 527, in __torch_dispatch__
outs_unwrapped = func._op_dk(
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/build/aten/src/ATen/RegisterFunctionalization_0.cpp":3941, please report a bug to PyTorch. mutating a non-functional tensor with a functional tensor is not allowed. Please ensure that all of your inputs are wrapped inside of a functionalize() call.
While executing %copy_ : [num_users=0] = call_function[target=torch.ops.aten.copy_.default](args = (%narrow_3, %index_select_6, True), kwargs = {})
Original traceback:
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/external_utils.py", line 45, in inner
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/executorch/examples/models/llama/ea_model.py", line 137, in forward
input_ids, draft_tokens, retrieve_indices,tree_mask,tree_position_ids, new_token, hidden_state, sample_token = update_inference_inputs(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/executorch/examples/models/llama/utils.py", line 452, in update_inference_inputs
dst.copy_(tgt, non_blocking=True)
```
### Versions
torch: 2.6.0.dev20241224+cpu
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,948,462,845
|
torch.set_flush_denormal does not support float16
|
sunjiabin17
|
closed
|
[
"module: docs",
"module: cpu",
"triaged",
"module: correctness (silent)"
] | 3
|
NONE
|
On the x86_64 architecture, running Ubuntu 22.04 with PyTorch v2.6.0, I conducted tests to evaluate the impact of set_flush_denormal on the float16 data type and observed no discernible effect.
```python
>>> import torch
>>> torch.tensor(3e-5, dtype=torch.half)
tensor(2.9981e-05, dtype=torch.float16)
>>> torch.set_flush_denormal(True)
True
>>> torch.tensor(3e-5, dtype=torch.half)
tensor(2.9981e-05, dtype=torch.float16)
>>> torch.set_flush_denormal(False)
True
>>> torch.tensor(3e-5, dtype=torch.half)
tensor(2.9981e-05, dtype=torch.float16)
```
Is this a bug, or is the float16 data type unsupported for some reason?
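For comparison, a minimal sketch of the documented float64 behavior on the same setup (assuming an x86 CPU with FTZ/DAZ support, as in the docs' own example). A value that is denormal in float16 is still a normal float32/float64 value, which likely explains why the float16 output above is unaffected:
```python
import torch

# float64 denormals are flushed to zero once the flag is set...
torch.set_flush_denormal(True)
print(torch.tensor(1e-323, dtype=torch.float64))   # expected: tensor(0.)

torch.set_flush_denormal(False)
print(torch.tensor(1e-323, dtype=torch.float64))   # expected: tensor(9.8813e-324)

# ...while the float16 denormal shown above keeps its value either way.
print(torch.tensor(3e-5, dtype=torch.float16))     # tensor(2.9981e-05, dtype=torch.float16)
```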
cc @svekars @sekyondaMeta @AlannaBurke @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,948,359,403
|
[S502778 ] Back out "[custom_ops][perf] Move expensive pytree traversals of tensors to C++ (#148555)"
|
yeqcharlotte
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
NONE
|
Summary:
Original commit changeset: ce4bc74eaca5
Original Phabricator Diff: D71498751
Test Plan: Coming in S502778
Reviewed By: Jialin, jianyuh, ChenheliHua
Differential Revision: D71864977
| true
|
2,948,353,436
|
`torch.compile` errors on converting tensor subclass object into a sequence
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```python
import torch
class Foo(torch.Tensor):
    pass
torch._dynamo.config.traceable_tensor_subclasses.add(Foo)
#@torch.compile(fullgraph=True, backend="eager")
def f(x):
    res = list(x)
    return res
x = torch.ones(2).as_subclass(Foo)
res = f(x)
print(res)
# Eager prints: [Foo(1.), Foo(1.)]
# Compiled errors.
```
We need to pass `torch_function_fn` attribute along when constructing new tensor variables for the sequence result:
https://github.com/pytorch/pytorch/blob/f12969421e33fc91a9fa2f2b505d619bf66055b3/torch/_dynamo/variables/tensor.py#L520-L553
### Error logs
```
Traceback (most recent call last):
File "/home/ryanguo99/pt/scratch/tensor-subclass-mutation.py", line 14, in <module>
res = f(x)
^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 1131, in _compile
raise InternalTorchDynamoError(
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 2933, in CALL
self._call(inst)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 2927, in _call
self.call_function(fn, args, kwargs)
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builtin.py", line 1111, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builtin.py", line 789, in <lambda>
return lambda tx, args, kwargs: obj.call_function(
^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builtin.py", line 1111, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builtin.py", line 936, in builtin_dispatch
rv = handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builtin.py", line 850, in call_self_handler
result = self_handler(tx, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builtin.py", line 1499, in _call_tuple_list
return self._call_iter_tuple_list(tx, obj, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builtin.py", line 1478, in _call_iter_tuple_list
list(obj.unpack_var_sequence(tx)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/tensor.py", line 551, in unpack_var_sequence
wrap_fx_proxy_cls(target_cls=type(self), tx=tx, proxy=self.as_proxy()[i])
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builder.py", line 2372, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builder.py", line 2470, in _wrap_fx_proxy
return handle_traced_output(
^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/builder.py", line 2511, in handle_traced_output
return target_cls(proxy, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/variables/torch_function.py", line 582, in __init__
self.torch_function_fn = kwargs.pop("torch_function_fn")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: KeyError: torch_function_fn
from user code:
File "/home/ryanguo99/pt/scratch/tensor-subclass-mutation.py", line 10, in f
res = list(x)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
main f1296942, Python 3.12
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,948,334,268
|
[DONT MERGE] customize win aot
|
chuanqi129
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,948,298,433
|
Add min/max support in export
|
tugsbayasgalan
|
open
|
[
"fb-exported",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Summary: Title
Test Plan: CI
I tried my best to replicate what Dynamo does for min/max but had to omit the handling of some Dynamo variables because they are not applicable to non-strict export. I think this is OK because:
1. The original Dynamo logic for handling min/max hasn't changed in the past two years, so it is relatively stable and new code probably won't be added there.
2. Compared to Dynamo, non-strict export can still fall back to the original builtin min/max.
This shows up as an error in Optimum benchmarking:
```
pytest tests/exporters/onnx/test_onnx_export.py -k test_pytorch_export_on_cpu_plain_export_227_ibert_feature_extraction
```
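For reference, a hedged sketch of the kind of pattern being exercised (builtin min/max inside a model exported in non-strict mode); with static shapes this already works, and the change is about handling the symbolic-shape cases the way Dynamo does:
```python
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, x, y):
        # builtin min over sizes; with symbolic shapes this needs the new handling
        n = min(x.shape[0], y.shape[0])
        return x[:n] + y[:n]

ep = export(M(), (torch.randn(4, 3), torch.randn(5, 3)), strict=False)
print(ep.graph)
```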
Differential Revision: D71869601
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,948,215,021
|
Support fp8 dtypes in assert_close
|
exclamaforte
|
closed
|
[
"Merged",
"module: testing",
"ciflow/trunk",
"release notes: python_frontend",
"topic: bug fixes",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Fixes #135998
Adds support for fp8 dtypes. These are compared bitwise, without atol and rtol: the implementation uses the same comparison functions, just with atol and rtol forced to zero. The error message differs from the default case in that it only reports the first mismatch; this is to avoid triggering the error from #135998.
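A minimal sketch of the resulting behavior (assumes a build that includes this change; the sample values are chosen to be exactly representable in e4m3):
```python
import torch

a = torch.tensor([0.5, 1.0]).to(torch.float8_e4m3fn)
b = torch.tensor([0.5, 1.0]).to(torch.float8_e4m3fn)
torch.testing.assert_close(a, b)  # identical bit patterns -> passes

c = torch.tensor([0.25, 1.0]).to(torch.float8_e4m3fn)
try:
    torch.testing.assert_close(a, c)
except AssertionError as e:
    # reports the first mismatch rather than aggregate atol/rtol statistics
    print(e)
```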
Test Plan:
New unit test covers new code paths.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,948,136,654
|
[XPU] XPU build has been broken
|
chuanqi129
|
closed
|
[
"needs reproduction",
"module: build",
"triaged",
"module: xpu"
] | 11
|
COLLABORATOR
|
### 🐛 Describe the bug
The XPU build crashed with the error message below; it appears to have been introduced by PR https://github.com/pytorch/pytorch/pull/149888
```
[5652/7734] Building SYCL (Device) object test_sycl_build_standalone_gen_simple_kernel.cpp.o
FAILED: test_sycl/CMakeFiles/test_sycl_build_standalone.dir/test_sycl_build_standalone_gen_simple_kernel.cpp.o /var/lib/jenkins/workspace/build/test_sycl/CMakeFiles/test_sycl_build_standalone.dir/test_sycl_build_standalone_gen_simple_kernel.cpp.o
cd /var/lib/jenkins/workspace/build/test_sycl/CMakeFiles/test_sycl_build_standalone.dir && /opt/conda/envs/py_3.9/bin/cmake -E make_directory /var/lib/jenkins/workspace/build/test_sycl/CMakeFiles/test_sycl_build_standalone.dir//. && /opt/conda/envs/py_3.9/bin/cmake -D verbose:BOOL=OFF -D generated_file:STRING=/var/lib/jenkins/workspace/build/test_sycl/CMakeFiles/test_sycl_build_standalone.dir//./test_sycl_build_standalone_gen_simple_kernel.cpp.o -P /var/lib/jenkins/workspace/build/test_sycl/CMakeFiles/test_sycl_build_standalone.dir//test_sycl_build_standalone_gen_simple_kernel.cpp.o.Release.cmake
In file included from /var/lib/jenkins/workspace/third_party/torch-xpu-ops/test/sycl/simple_kernel.cpp:1:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/sycl.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/core.hpp:21:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/accessor.hpp:11:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/access/access.hpp:14:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/CL/__spirv/spirv_ops.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/CL/__spirv/spirv_types.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/defines.hpp:14:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/climits:41:
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/x86_64-linux-gnu/c++/11/bits/c++config.h:299:27: error: expected value in expression
299 | #if _GLIBCXX_USE_CXX11_ABI
| ^
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/x86_64-linux-gnu/c++/11/bits/c++config.h:467:27: error: expected value in expression
467 | #if _GLIBCXX_USE_CXX11_ABI
| ^
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/x86_64-linux-gnu/c++/11/bits/c++config.h:616:3: error: invalid token at start of a preprocessor expression
616 | && _GLIBCXX_USE_DUAL_ABI && __cpp_transactional_memory >= 201500L \
| ^
In file included from /var/lib/jenkins/workspace/third_party/torch-xpu-ops/test/sycl/simple_kernel.cpp:1:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/sycl.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/core.hpp:21:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/accessor.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/aliases.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/half_type.hpp:13:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/iostream_proxy.hpp:11:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/istream:38:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/ios:38:
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/iosfwd:211:53: error: expected value in expression
211 | #if __cplusplus >= 202002L && _GLIBCXX_USE_CXX11_ABI
| ^
In file included from /var/lib/jenkins/workspace/third_party/torch-xpu-ops/test/sycl/simple_kernel.cpp:1:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/sycl.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/core.hpp:21:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/accessor.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/aliases.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/half_type.hpp:13:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/iostream_proxy.hpp:11:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/istream:38:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/ios:40:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/char_traits.h:39:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/stl_algobase.h:66:
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/stl_iterator_base_funcs.h:107:27: error: expected value in expression
107 | #if _GLIBCXX_USE_CXX11_ABI
| ^
In file included from /var/lib/jenkins/workspace/third_party/torch-xpu-ops/test/sycl/simple_kernel.cpp:1:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/sycl.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/core.hpp:21:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/accessor.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/aliases.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/half_type.hpp:13:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/iostream_proxy.hpp:11:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/istream:38:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/ios:42:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/ios_base.h:41:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/locale_classes.h:40:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/string:55:
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/basic_string.h:64:27: error: expected value in expression
64 | #if _GLIBCXX_USE_CXX11_ABI
| ^
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/basic_string.h:6175:27: error: expected value in expression
6175 | #if _GLIBCXX_USE_CXX11_ABI
| ^
In file included from /var/lib/jenkins/workspace/third_party/torch-xpu-ops/test/sycl/simple_kernel.cpp:1:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/sycl.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/core.hpp:21:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/accessor.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/aliases.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/half_type.hpp:13:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/iostream_proxy.hpp:11:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/istream:38:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/ios:42:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/ios_base.h:41:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/locale_classes.h:40:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/string:56:
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/basic_string.tcc:50:27: error: expected value in expression
50 | #if _GLIBCXX_USE_CXX11_ABI
| ^
In file included from /var/lib/jenkins/workspace/third_party/torch-xpu-ops/test/sycl/simple_kernel.cpp:1:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/sycl.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/core.hpp:21:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/accessor.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/aliases.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/half_type.hpp:13:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/iostream_proxy.hpp:11:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/istream:38:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/ios:42:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/ios_base.h:41:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/locale_classes.h:40:
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/string:58:53: error: expected value in expression
58 | #if __cplusplus >= 201703L && _GLIBCXX_USE_CXX11_ABI
| ^
In file included from /var/lib/jenkins/workspace/third_party/torch-xpu-ops/test/sycl/simple_kernel.cpp:1:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/sycl.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/core.hpp:21:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/accessor.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/aliases.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/half_type.hpp:13:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/iostream_proxy.hpp:11:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/istream:38:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/ios:42:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/ios_base.h:41:
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/locale_classes.h:356:27: error: expected value in expression
356 | #if _GLIBCXX_USE_CXX11_ABI
| ^
In file included from /var/lib/jenkins/workspace/third_party/torch-xpu-ops/test/sycl/simple_kernel.cpp:1:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/sycl.hpp:25:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/core.hpp:21:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/accessor.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/aliases.hpp:12:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/half_type.hpp:13:
In file included from /opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/iostream_proxy.hpp:11:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/istream:38:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/ios:42:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/ios_base.h:46:
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/system_error:41:
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/stdexcept:46:27: error: expected value in expression
46 | #if _GLIBCXX_USE_CXX11_ABI
| ^
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/stdexcept:130:28: error: invalid token at start of a preprocessor expression
130 | #if _GLIBCXX_USE_CXX11_ABI || _GLIBCXX_DEFINE_STDEXCEPT_COPY_OPS
| ^
/usr/lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/stdexcept:236:28: error: invalid token at start of a preprocessor expression
236 | #if _GLIBCXX_USE_CXX11_ABI || _GLIBCXX_DEFINE_STDEXCEPT_COPY_OPS
| ^
```
### Versions
python commit: 86dcdf9c8bb8f69c5d28184b31ee6d7f19127d67
cc @malfet @seemethere @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,948,120,854
|
[BE] Use `auto` in MPS codebase more
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
Non-trivial (but still no-op) changes:
- Replace `[mpsGraph broadcastTensor:[mpsGraph constantWithScalar:1 dataType:MPSDataTypeInt32] toShape:inputTensor.shape name:nil]` with `[mpsGraph constantWithScalar:1 dataType:MPSDataTypeInt32 shape:inputTensor.shape]`
| true
|
2,948,115,896
|
[CI] MacOS15-M2 runners are unstable
|
malfet
|
open
|
[
"module: ci",
"triaged",
"module: flaky-tests",
"unstable"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Not sure whether the tests are exposing some underlying problem or we are doing something weird with the infra, but since https://github.com/pytorch/pytorch/pull/149900 landed, about 5% of the runs finish prematurely with "runner lost communication with the server".
Examples on-trunk ([HUD link](https://hud.pytorch.org/hud/pytorch/pytorch/a8d0c5c92818186119d4a94d98999acc3f549a7e/1?per_page=50&name_filter=macos-m2&mergeLF=true)):
- https://github.com/pytorch/pytorch/actions/runs/14064901753/job/39385353215
- https://github.com/pytorch/pytorch/actions/runs/14066415857/job/39390456448
- https://github.com/pytorch/pytorch/actions/runs/14067898476/job/39395221541
- https://github.com/pytorch/pytorch/actions/runs/14069998244/job/39402121320
- https://github.com/pytorch/pytorch/actions/runs/14070540699/job/39404345215
- https://github.com/pytorch/pytorch/actions/runs/14071756306/job/39407581634
- https://github.com/pytorch/pytorch/actions/runs/14072560903/job/39409942487
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra @clee2000
| true
|
2,948,115,404
|
[WIP] rewrite pad_nd with guard_or_false
|
pianpwk
|
open
|
[
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,948,108,677
|
torch.compile does not support lambda error message arguments for torch._check
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
Repro:
```python
import torch

@torch.compile(fullgraph=True)
def f(a, b):
    torch._check(True, lambda: f"hi")
    return b * 10

a = torch.ones(10, 10, device="cuda", dtype=torch.float64)
b = torch.ones(10, 10, device="cuda", dtype=torch.float64)
f(a, b)
```
Output:
```
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/lsakka/pytorch/example8.py", line 56, in <module>
f(a, b)
File "/home/lsakka/pytorch/torch/_dynamo/eval_frame.py", line 667, in _fn
raise e.with_traceback(None) from e.__cause__
torch._dynamo.exc.Unsupported: Failed to convert args/kwargs to proxy
Explanation: Missing `as_proxy()` implementation for some arg/kwarg.
Developer debug context: call_function args: ConstantVariable(bool: True) NestedUserFunctionVariable()
from user code:
File "/home/lsakka/pytorch/example8.py", line 51, in f
torch._check(True, lambda:f"hi")
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this esp
```
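A minimal sketch of the current workaround (the lambda message is the part Dynamo cannot proxy, so dropping it lets the same check trace; `backend="eager"` is used here just for illustration):
```python
import torch

@torch.compile(fullgraph=True, backend="eager")
def f(b):
    torch._check(b.shape[0] == 10)  # no lambda message -> traces fine
    return b * 10

print(f(torch.ones(10, 10)))
```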
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,948,089,653
|
[TEST]
|
muchulee8
|
closed
|
[
"fb-exported",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D71857899
| true
|
2,948,085,511
|
[XPU] Linux CI/CD has been broken by the intel-deep-learning-essentials-2025.0 online installation
|
chuanqi129
|
closed
|
[
"module: ci",
"triaged",
"module: regression",
"module: xpu"
] | 3
|
COLLABORATOR
|
Recently, intel-deep-learning-essentials released some new package versions that broke the clean 2025.0 installation, which led the XPU build job to build CPU-only PyTorch for the past two days. Refer to https://github.com/pytorch/pytorch/actions/runs/14053994965/job/39364027978#step:14:1789. This blocks all XPU-related PR testing and landing, for example PR #149696.
As a solution, we decided to use the offline installation mode to get a more stable CI/CD build environment for XPU.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,948,070,345
|
[inductor] make non-trivial tile ranges unbacked symint aware
|
ColinPeppler
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
# New PR that skips this when unbacked symints are present: https://github.com/pytorch/pytorch/pull/150225
### Stacktrace
```
# If no fallback is provided then we'd see this.
File "/data/users/colinpeppler/pytorch/torch/_inductor/codegen/simd.py", line 1746, in tile_ranges
if V.graph.sizevars.atomically_apply_size_hint(
File "/data/users/colinpeppler/pytorch/torch/_inductor/sizevars.py", line 653, in atomically_apply_size_hint
size_dict = {
File "/data/users/colinpeppler/pytorch/torch/_inductor/sizevars.py", line 654, in <dictcomp>
symbol: V.graph.sizevars.size_hint(symbol, fallback=fallback)
File "/data/users/colinpeppler/pytorch/torch/_inductor/sizevars.py", line 556, in size_hint
return int(out)
File "/home/colinpeppler/local/miniconda3/envs/pytorch/lib/python3.10/site-packages/sympy/core/expr.py", line 307, in __int__
raise TypeError("Cannot convert symbols to int")
torch._inductor.exc.InductorError: TypeError: Cannot convert symbols to int
```
### Context
When an unbacked symint is in the ranges, Inductor will choke because either:
1. no size hint exists or
~~2. it can't statically evaluate an expression.~~
### Approach
- For (1), use the unbacked symint fallback; this isn't optimal, and we could do better at choosing a fallback (see the conceptual sketch below)
~~- For (2), use `statically_known*` mechanisms~~
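A conceptual sketch of approach (1), not Inductor's actual implementation: when a range expression contains an unbacked symbol, there is no recorded hint for it, so evaluation substitutes a fixed fallback instead of failing in `int(expr)`.
```python
import sympy

def size_hint(expr, hints, fallback=8192):
    # substitute recorded hints where available, otherwise the fallback guess
    subs = {s: hints.get(s, fallback) for s in expr.free_symbols}
    return int(expr.subs(subs))

u0 = sympy.Symbol("u0", positive=True, integer=True)
print(size_hint(2 * u0 + 1, hints={}))        # unbacked: uses the fallback
print(size_hint(2 * u0 + 1, hints={u0: 16}))  # backed: uses the recorded hint
```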
xref: https://fb.workplace.com/groups/1028545332188949/permalink/1178385333871614/
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149994
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,948,043,588
|
[inductor][triton 3.3] Fix cpp_wrapper w/ TMA in triton 3.3
|
pytorchbot
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149973
Fixes #148938
Context:
In triton 3.3, triton kernels expect a global scratch space arg to be passed in. This is fixed in #148051, which fixed most of the AOTI/cpp_wrapper failures; the fix is to pass a (null) global scratch space arg to all kernels.
But in the case of TMA, we need to call a non-triton-generated function - init1DTMADescriptor. The same `generate_args_decl` function used for calling triton kernels (and modified in #148051 to insert a global scratch space) is used to prepare the arguments to init1DTMADescriptor, and so it had an extra global scratch space arg. Then we'd get a null pointer passed into init1DTMADescriptor, resulting in an IMA (illegal memory access) later on when the kernel using the TMA descriptor runs.
This PR: adds an option to `generate_args_decl` to specify whether this is a triton kernel (in which case we should add the global scratch space arg) or not (when we shouldn't add the extra arg).
Note: this doesn't appear in CI because we don't run these tests with Hopper machines in CI.
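A conceptual sketch of the distinction this PR introduces (plain Python, not the actual Inductor codegen): only true Triton kernels get the extra global scratch argument.
```python
def generate_args_decl_sketch(args, is_triton_kernel=True):
    # hypothetical helper mirroring the idea: append the scratch arg only for
    # triton-generated kernels, never for calls like init1DTMADescriptor
    decls = list(args)
    if is_triton_kernel:
        decls.append("global_scratch")
    return ", ".join(decls)

print(generate_args_decl_sketch(["buf0", "buf1"]))                          # triton kernel call
print(generate_args_decl_sketch(["desc", "buf0"], is_triton_kernel=False))  # TMA descriptor init
```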
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @desertfire @yushangdi @benjaminglass1
| true
|
2,948,036,274
|
[inductor][triton 3.3] Fix cpp_wrapper w/ TMA in triton 3.3 (#149973)
|
davidberard98
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #148938
Context:
In triton 3.3, triton kernels expect a global scratch space arg to be passed in. This is fixed in #148051, which fixed most of the AOTI/cpp_wrapper failures; the fix is to pass a (null) global scratch space arg to all kernels.
But in the case of TMA, we need to call a non-triton-generated function - init1DTMADescriptor. The same `generate_args_decl` function used for calling triton kernels (and modified in #148051 to insert a global scratch space) is used to prepare the arguments to init1DTMADescriptor, and so it had an extra global scratch space arg. Then we'd get a null pointer passed into init1DTMADescriptor, resulting in an IMA (illegal memory access) later on when the kernel using the TMA descriptor runs.
This PR: adds an option to `generate_args_decl` to specify whether this is a triton kernel (in which case we should add the global scratch space arg) or not (when we shouldn't add the extra arg).
Note: this doesn't appear in CI because we don't run these tests with Hopper machines in CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149973
Approved by: https://github.com/drisspg
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,995,185
|
Dynamo has limited support for `__instancecheck__` in meta types
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```python
import torch
from torch.nn.parameter import Buffer
@torch.compile(fullgraph=True, backend="eager")
def f(buf):
    return isinstance(buf, torch.nn.Buffer)
buf = Buffer(torch.ones(5))
res = f(buf)
# Eager: True
# Compiled: False
print(res)
```
I think we have all we need to trace through meta type `__instancecheck__`; they are mostly written in Python anyway. Right now we do it to some extent, but it could break pretty easily:
https://github.com/pytorch/pytorch/blob/1b08aaeafe93393a7bd34f91381ad40cb463bf8f/torch/_dynamo/variables/builtin.py#L1703-L1710
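For context, a plain-Python illustration of the pattern Dynamo would need to trace (hypothetical names, similar in spirit to but not copied from `torch.nn.Buffer`'s metaclass):
```python
import torch

class _FlagMeta(type):
    def __instancecheck__(cls, obj):
        # python-level logic that isinstance() dispatches to
        return isinstance(obj, torch.Tensor) and getattr(obj, "_is_flagged", False)

class Flagged(metaclass=_FlagMeta):
    pass

t = torch.ones(3)
t._is_flagged = True
print(isinstance(t, Flagged))  # True in eager; the issue is about compile parity
```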
### Error logs
_No response_
### Versions
main 1b08aaea, Python 3.12
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,989,015
|
[Async TP] all-gather-matmuls not fusing properly when rowwise scales are used
|
danielvegamyhre
|
open
|
[
"oncall: distributed",
"triaged"
] | 17
|
CONTRIBUTOR
|
### 🐛 Describe the bug
## Summary
I recently implemented async TP support fusing scaled-matmul-reduce-scatter patterns with rowwise scales (https://github.com/pytorch/pytorch/pull/149247) as well as support for various AC settings which had become broken (no AC, per layer SAC, per op SAC with reduce_scatter saved) (https://github.com/pytorch/pytorch/pull/149946).
When testing the performance of various configurations to ensure stability of the changes, I found that while float8 rowwise training with async TP had correct numerical accuracy, the performance was non-optimal (see benchmarks below).
After looking at the traces, I found the matmul-reduce-scatters were being fused properly, so my change was working as intended - however, the all-gather-matmul patterns were NOT being fused properly. This seems to (at least in part) explain the poor performance for async TP with rowwise scales.
Looking at the benchmarks below we ALSO see vanilla TP perf with rowwise scales is unexpectedly low. I will create a separate issue for this, though.
### Performance Benchmarks
Llama3 70b training runs on 128 H100s with full AC, using FSDP=16, TP=8
- **bf16 (vanilla TP):** 598 TPS, peak memory 71.51 GB
- **bf16 (async TP):** TPS 673, peak memory 71.08 (+12.54% TPS vs vanilla TP)
- **float8 tensorwise (vanilla TP):** 820 TPS, peak memory 55.26 GB
- **float8 tensorwise (async TP):** 950 TPS, peak memory 55.91 GB (+15.85% TPS vs vanilla TP)
- **float8 rowwise (vanilla TP):** TPS: 540 TPS, peak memory 71.46 GB
- **float8 rowwise (async TP):** 560 TPS, peak memory 70.65 GB (+3.7% TPS vs vanilla TP but still unexpectedly lower than bf16)
As you can see, float8 rowwise is working but performance needs to be improved further.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @vkuzo @lessw2020
### Versions
pytorch @ HEAD
| true
|
2,947,987,803
|
[dynamo][higher order ops] Make support_aliasing and support_input_mutation default to False
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This PR https://github.com/pytorch/pytorch/pull/148953/files adds support for checking for input mutation and aliasing for HOPs. Currently, the default is that we expect all HOPs to support input mutation and aliasing, and then we set it to False just for invoke_subgraph.
But we should do it the other way around: default the support to False, and then set it to True only for the few HOPs that do support mutation/aliasing. This issue tracks that work.
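A conceptual sketch of the proposed default flip (hypothetical class names, not the actual HOP base class): higher-order ops would opt in to mutation/aliasing support instead of opting out.
```python
class HigherOrderOpSketch:
    supports_input_mutation: bool = False  # safe default
    supports_aliasing: bool = False

class InvokeSubgraphSketch(HigherOrderOpSketch):
    pass  # keeps the safe defaults

class MutatingHopSketch(HigherOrderOpSketch):
    supports_input_mutation = True  # explicitly opted in

print(InvokeSubgraphSketch.supports_input_mutation)  # False
print(MutatingHopSketch.supports_input_mutation)     # True
```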
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,941,641
|
[associative_scan] Fixes for assoc_scan testcases
|
bohnstingl
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
COLLABORATOR
|
This PR fixes some issues with the testcases of `associative_scan`, in particular the problem where the compile_mode is inadvertently always set to `none`.
cc @ydwu4
| true
|
2,947,901,476
|
[ca] introduce RuntimeState to support c++ hooks via graph breaks
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150073
* #150074
* __->__ #149987
* #149897
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,842,991
|
[ROCm] use magma-rocm tarball for CI/CD
|
jeffdaily
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 8
|
COLLABORATOR
|
Follow-up to #149902.
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,947,822,823
|
gloo: update to latest version
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
MEMBER
|
This updates submodule Gloo to the latest version and brings a number of benefits:
* connection retries https://github.com/facebookincubator/gloo/commit/d2609ab5e8e06015c9184f6f29d702324709ef1c
* better error messages https://github.com/facebookincubator/gloo/commit/5ca057d6cc57f8b88db1adf56c63829ffe6f0558
* multi_get support for larger scale jobs https://github.com/facebookincubator/gloo/commit/4ff6edf45ff1d314916db34d2df9bd4371417d74
* metadata exchange optimizations https://github.com/facebookincubator/gloo/commit/20dc202dd8e6d6073ac2b42e4a9f17c2abe161f9
* miscellaneous other fixes
Old commit: https://github.com/facebookincubator/gloo/commit/5354032ea08eadd7fc4456477f7f7c6308818509
Test plan:
This is already being used in production environments at scale.
PyTorch CI
```
pytest -v test/distributed/test_c10d_gloo.py
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
| true
|
2,947,814,348
|
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error
|
atalman
|
open
|
[
"oncall: distributed",
"triaged",
"module: nccl"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Found the following NCCL error when validating the release 2.7 cherry-pick:
https://github.com/pytorch/pytorch/pull/149874
This is the repro script:
https://gist.github.com/d4l3k/16a19b475952bc40ddd7f2febcc297b7
Running on a g5.12xlarge machine.
Repro with release 2.7 RC1 and NCCL 2.25.1:
Normally getting:
```
torchrun --nproc-per-node=2 test.py
starting
starting
connected
connected
synchronizing
aborting
aborting
synchronizing
synchronized
completed
synchronized
completed
```
However, after a few tries I received the following failure:
2.7 RC1 CUDA 12.8
```
Collective WorkNCCL(SeqNum=2, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1000000) raised the following async exception: NCCL error: unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.25.1
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error:
Exception raised from checkForNCCLErrorsInternal at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2501 (most recent call first):
C++ CapturedTraceback:
#4 std::_Function_handler<std::shared_ptr<c10::LazyValue<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const> (), c10::SetStackTraceFetcher(std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) from Logging.cpp:0
#5 c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) from ??:0
#6 c10::DistBackendError::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) from :0
#7 c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::shared_ptr<c10d::NCCLComm>&) from ??:0
#8 c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() from ??:0
#9 c10d::ProcessGroupNCCL::WorkNCCL::isStarted() from ??:0
#10 c10d::ProcessGroupNCCL::watchdogHandler() from ??:0
#11 c10d::ProcessGroupNCCL::ncclCommWatchdog() from ??:0
#12 execute_native_thread_routine from /opt/conda/conda-bld/gcc-compiler_1654084175708/work/gcc/libstdc++-v3/src/c++11/thread.cc:82
#13 start_thread from ??:0
#14 __GI___clone from :0
```
Same issue with nightly CUDA 12.8 and nccl 2.26.2:
```
NCCL_DEBUG=WARN torchrun --nproc-per-node=2 test.py
/opt/conda/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py:276: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
W0325 20:46:43.725000 94 site-packages/torch/distributed/run.py:766]
W0325 20:46:43.725000 94 site-packages/torch/distributed/run.py:766] *****************************************
W0325 20:46:43.725000 94 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0325 20:46:43.725000 94 site-packages/torch/distributed/run.py:766] *****************************************
/opt/conda/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py:276: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
/opt/conda/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py:276: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
[W325 20:46:45.052437495 TCPStore.cpp:297] [c10d] Starting store with 1000000 workers but somaxconn is 4096.This might cause instability during bootstrap, consider increasing it.
starting
starting
NCCL version 2.26.2+cuda12.2
connected
connected
synchronizing
aborting
aborting
synchronizing
synchronized
completed
synchronized
completed
[W325 20:46:51.073905063 Module.cpp:186] symbolizing C++ stack trace for exception; if this hangs, rerun with TORCH_DISABLE_ADDR2LINE=1...
[E325 20:46:51.228036986 ProcessGroupNCCL.cpp:555] [Rank 1] Collective WorkNCCL(SeqNum=2, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1000000) raised the following async exception: NCCL error: unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.26.2
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error:
Exception raised from checkForNCCLErrorsInternal at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2474 (most recent call first):
C++ CapturedTraceback:
#4 std::_Function_handler<std::shared_ptr<c10::LazyValue<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const> (), c10::SetStackTraceFetcher(std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) from Logging.cpp:0
#5 c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) from ??:0
#6 c10::DistBackendError::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) from :0
#7 c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::shared_ptr<c10d::NCCLComm>&) from ??:0
#8 c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() from ??:0
#9 c10d::ProcessGroupNCCL::WorkNCCL::isCompleted() from ??:0
#10 c10d::ProcessGroupNCCL::watchdogHandler() from ??:0
#11 c10d::ProcessGroupNCCL::ncclCommWatchdog() from ??:0
#12 execute_native_thread_routine from /opt/conda/conda-bld/gcc-compiler_1654084175708/work/gcc/libstdc++-v3/src/c++11/thread.cc:82
#13 start_thread from ??:0
#14 __GI___clone from :0
```
Release 2.6, CUDA 12.4, and nccl 2.21.5:
```
root@7be1ecef64c0 /]# NCCL_DEBUG=WARN torchrun --nproc-per-node=2 test.py
/opt/conda/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
W0325 21:16:42.796000 578 site-packages/torch/distributed/run.py:792]
W0325 21:16:42.796000 578 site-packages/torch/distributed/run.py:792] *****************************************
W0325 21:16:42.796000 578 site-packages/torch/distributed/run.py:792] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0325 21:16:42.796000 578 site-packages/torch/distributed/run.py:792] *****************************************
/opt/conda/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
/opt/conda/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
[W325 21:16:44.104257883 TCPStore.cpp:280] [c10d] Starting store with 1000000 workers but somaxconn is 4096.This might cause instability during bootstrap, consider increasing it.
starting
starting
NCCL version 2.21.5+cuda12.4
connected
connected
synchronizing
aborting
aborting
synchronizing
synchronized
completed
synchronized
completed
[W325 21:16:50.611596175 Module.cpp:182] symbolizing C++ stack trace for exception; if this hangs, rerun with TORCH_DISABLE_ADDR2LINE=1...
[E325 21:16:50.765246490 ProcessGroupNCCL.cpp:552] [Rank 0] Collective WorkNCCL(SeqNum=2, OpType=ALLREDUCE, NumelIn=1000000, NumelOut=1000000, Timeout(ms)=1000000) raised the following async exception: NCCL error: unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error:
Exception raised from checkForNCCLErrorsInternal at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2363 (most recent call first):
C++ CapturedTraceback:
#4 std::_Function_handler<std::shared_ptr<c10::LazyValue<std::string> const> (), c10::SetStackTraceFetcher(std::function<std::string ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) from Logging.cpp:0
#5 c10::Error::Error(c10::SourceLocation, std::string) from ??:0
#6 c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::shared_ptr<c10d::NCCLComm>&) from ??:0
#7 c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() from ??:0
#8 c10d::ProcessGroupNCCL::WorkNCCL::isStarted() from ??:0
#9 c10d::ProcessGroupNCCL::watchdogHandler() from ??:0
#10 c10d::ProcessGroupNCCL::ncclCommWatchdog() from ??:0
#11 execute_native_thread_routine from thread48.o:0
#12 start_thread from ??:0
#13 __GI___clone from :0
```
### Versions
nightly 2.8
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @malfet @d4l3k
| true
|
2,947,809,998
|
[ROCm] Change LoadHIP to use find_file for rocm_version.h
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
COLLABORATOR
|
Fixes #149805
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,947,808,448
|
request for faster inductor kernels for blockwise reduction across dim1 -> write
|
vkuzo
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We should make the following kernel fast with compile + inductor. This is important for being able to generate the dim1 cast to MX formats.
```python
from typing import Tuple

import torch

def scale_dim1_reference(x_hp: torch.Tensor, block_size) -> Tuple[torch.Tensor, torch.Tensor]:
    # normalize across dim1
    x_hp_d1 = x_hp.t().contiguous()
    x_hp_d1_block = x_hp_d1.reshape(-1, block_size)
    x_hp_d1_block_abs = x_hp_d1_block.abs()
    amax_dim1 = torch.amax(x_hp_d1_block_abs, dim=1).unsqueeze(1)
    x_hp_d1_block_normalized = x_hp_d1_block / amax_dim1
    x_hp_d1_normalized = x_hp_d1_block_normalized.reshape(x_hp_d1.shape)
    return x_hp_d1_normalized.t(), amax_dim1
```
Currently, I am only hitting 0.6 to 0.7 TB/s on NVIDIA H100. If the reduction and write are across dim0 instead of dim1, I see 2.0-2.2 TB/s. From discussions with @eellison, this is due to uncoalesced reads, and we can fix this.
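For comparison, a hedged sketch of what a dim0 variant of the same normalization might look like (assumed to correspond to the 2.0-2.2 TB/s case, since no transpose is involved and reads/writes stay coalesced):
```python
from typing import Tuple

import torch

def scale_dim0_reference(x_hp: torch.Tensor, block_size) -> Tuple[torch.Tensor, torch.Tensor]:
    # blocks are contiguous along the last dim, so no transpose is needed
    x_hp_block = x_hp.reshape(-1, block_size)
    amax_dim0 = torch.amax(x_hp_block.abs(), dim=1).unsqueeze(1)
    x_hp_block_normalized = x_hp_block / amax_dim0
    return x_hp_block_normalized.reshape(x_hp.shape), amax_dim0
```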
Repro script: https://gist.github.com/vkuzo/9eff0d27691be483e45bb10edf66d82c
Repro results on NVIDIA H100:
```bash
(pytorch) [vasiliy@devgpu006.vll6 ~/local/pytorch_scripts/mx_cast_poc (20250325_dim1_cast)]$ python 20250325_dim1_cast.py --M 4096 --K 4096
M 4096 K 4096 BLOCK_SIZE 32
GPU: NVIDIA H100
torch version: 2.8.0a0+gitdd94e94
triton version: 3.2.0
time_reference_compile_us 107.69072608695663
mem_bw_gbps 632.8998092645895
(pytorch) [vasiliy@devgpu006.vll6 ~/local/pytorch_scripts/mx_cast_poc (20250325_dim1_cast)]$ python 20250325_dim1_cast.py --M 16384 --K 16384
M 16384 K 16384 BLOCK_SIZE 32
GPU: NVIDIA H100
torch version: 2.8.0a0+gitdd94e94
triton version: 3.2.0
time_reference_compile_us 1612.7510689655173
mem_bw_gbps 676.1855942836252
```
TORCH_LOGS=output_code results: https://gist.github.com/vkuzo/4420c5b508ddd560e5d4620758b5936a
### Versions
main branch
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,947,773,508
|
Automate stable CUDA update and linter using min Python version
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
1. Fixes https://github.com/pytorch/pytorch/issues/145571. CUDA stable is the same CUDA version that is published to PyPI; it is also used to set the Metadata section in the rest of the wheel scripts and to tag the Docker releases with the latest tag.
2. Updates the min Python version used in the linter.
| true
|
2,947,759,697
|
Add inductor test for torchbind symint
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: add test
Test Plan:
```
buck run //caffe2/test:test_export -- -r test_compile_custom_obj_unbacked_symint
```
Differential Revision: D71843179
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,660,286
|
UNSTABLE inductor / linux-jammy-cpu-py3.9-gcc11-inductor / test (dynamic_cpu_inductor_torchbench)
|
yangw-dev
|
closed
|
[
"module: ci",
"oncall: pt2",
"unstable"
] | 3
|
CONTRIBUTOR
|
> For example, DISABLED pull / win-vs2022-cpu-py3 / test (default). Once
> created, the job will be disabled within 15 minutes. You can check the
> list of disabled jobs at https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json
> If you need to get this out ASAP instead of waiting for 15 minutes,
> you can manually trigger the workflow at https://github.com/pytorch/test-infra/actions/workflows/update_disabled_tests.yml
> once the issue is created to update the above JSON list right away.
> Noted: you need to have write access to PyTorch repo to disable CI
> jobs. The issue will be rejected otherwise.
## Reason
The test keeps failing in HUD and is flaky.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @chauhang @penguinwu
| true
|
2,947,657,102
|
Refactor row-wise scaled MM
|
alexsamardzic
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: improvements",
"topic: not user facing",
"module: float8"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150023
* __->__ #149978
1. Add config selection for SM89.
2. Only build kernels if compiling for given arch.
3. Factor out CMake code to enforce compiling for needed archs for individual files into a function.
cc @ptrblck @msaroufim @eqy @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
2,947,653,227
|
UNSTABLE inductor / linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_torchbench)
|
yangw-dev
|
closed
|
[
"module: ci",
"oncall: pt2",
"unstable"
] | 3
|
CONTRIBUTOR
|
> For example, DISABLED pull / win-vs2022-cpu-py3 / test (default). Once
> created, the job will be disabled within 15 minutes. You can check the
> list of disabled jobs at https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json
> If you need to get this out ASAP instead of waiting for 15 minutes,
> you can manually trigger the workflow at https://github.com/pytorch/test-infra/actions/workflows/update_disabled_tests.yml
> once the issue is created to update the above JSON list right away.
> Noted: you need to have write access to PyTorch repo to disable CI
> jobs. The issue will be rejected otherwise.
## Reason
the test keeps failing in HUD
cc @seemethere @malfet @pytorch/pytorch-dev-infra @chauhang @penguinwu
| true
|
2,947,492,821
|
[cuDNN][SDPA] abide by `enable_gqa` convention in cuDNN
|
eqy
|
closed
|
[
"module: cudnn",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa"
] | 18
|
COLLABORATOR
|
long overdue
cc @csarofeen @ptrblck @xwang233
| true
|
2,947,471,155
|
Tensor subclass type not preserved across tensor ops on intermediate tensors under compile
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
## Repro
```python
import torch
class Foo(torch.Tensor):
pass
torch._dynamo.config.traceable_tensor_subclasses.add(Foo)
@torch.compile(fullgraph=True, backend="eager")
def f():
x = torch.ones(10).as_subclass(Foo)
y = x.new_ones((1, 2))
return y
# Eager: Foo([[1., 1.]])
# Compile: tensor([[1., 1.]])
print(f())
```
I think the root cause is that Dynamo's `getattr` handling try-catches `NotImplementedError` and falls back to `GetAttrVariable`:
https://github.com/pytorch/pytorch/blob/1b08aaeafe93393a7bd34f91381ad40cb463bf8f/torch/_dynamo/variables/builtin.py#L1866-L1869
That creates a problem if the `var_getattr` impl ends up calling another function (e.g., due to a custom `__getattr__` or `__torch_function__`, which fires on `__get__`) -- the `NotImplementedError` just propagates upwards and we never restore side effects, e.g., the torch function state that we enter here:
https://github.com/pytorch/pytorch/blob/1b08aaeafe93393a7bd34f91381ad40cb463bf8f/torch/_tensor.py#L1667-L1672
### Error logs
_No response_
### Versions
main 1b08aaea, python 3.12
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,463,002
|
[MPS] Fix metal ops with different dtypes
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149974
By implementing `_cast_` flavors of both dense and strided ops. Add regression tests that test `fmax`/`fmin` for mixed dtypes.
I've been dreading writing this PR for a while, as it ended up being pretty bulky:
- Adds `C10_METAL_ALL_TYPES_FUNCTOR` and `c10::metal::ScalarType` to `c10/metal/common.h` and tests that its values always match `c10::ScalarType`
- Add `c10::metal::cast_to` to `c10/metal/utils.h` which could be used to cast any scalar metal dtype to any other one, including complex values
- Implement `val_at_offs<T>(constant void *, long offs, ScalarType dtype)` that is used to dynamically cast types
- Add `binary_strided_cast` and `binary_dense_cast` that are invoked for output dtype and cast both inputs to that output before performing the op
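For reference, a minimal mixed-dtype check in the spirit of the new regression tests might look like this (a sketch under the assumption that an MPS device is available; not the actual test code):
```python
# Sketch only: compare mixed-dtype fmax on MPS against the CPU reference.
import torch

a = torch.rand(3, device="mps")
b = torch.arange(3.0, device="mps", dtype=torch.half)
res = torch.fmax(a, b)              # mixed float32/float16 inputs
ref = torch.fmax(a.cpu(), b.cpu())  # CPU promotes and computes the reference
torch.testing.assert_close(res.cpu(), ref)
```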
Benchmark collected on M2Pro that runs fmax for 1 mln element tensors (Times are in microseconds.)
| | dense-dense | transp-transp | dense-transp | transp-dense | dense-scalar | dense-bcast |
|-------------------------|---------------|----------------|----------------|----------------|---------------|--------------- |
| fmax (torch.float16, torch.float16) | 160.9 | 159.9 | 270.5 | 270.9 | 236.6 | 293.0
| fmax (torch.float32, torch.float32) | 176.9 | 171.0 | 273.7 | 293.5 | 242.6 | 294.2
| fmax (torch.float32, torch.float16) | 171.4 | 170.9 | 283.6 | 303.0 | 253.7 | 302.3
| add (torch.float16, torch.float16) | 218.0 | 223.6 | 221.0 | 222.0 | 214.9 | 218.3
| add (torch.float32, torch.float32) | 227.4 | 233.9 | 228.8 | 231.9 | 218.9 | 221.4
| add (torch.float32, torch.float16) | 226.1 | 227.5 | 227.5 | 226.9 | 177.0 | 190.8
TODOS:
- Include input and output dtype in non-cast kernel name
- Make TensorFactory.h use `C10_METAL_ALL_TYPES_FUNCTOR`
- Extend mixed-dtypes testing via OpInfo
Fixes https://github.com/pytorch/pytorch/issues/149951
| true
|
2,947,461,232
|
[inductor][triton 3.3] Fix cpp_wrapper w/ TMA in triton 3.3
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"module: aotinductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149973
Fixes #148938
Context:
In triton 3.3, triton kernels expect a global scratch space arg to be passed in. This is fixed in #148051, which fixed most of the AOTI/cpp_wrapper failures; the fix is to inject a (null) global scratch space arg passed as an argument to all kernels.
But in the case of TMA, we need to call a non-triton-generated function - init1DTMADescriptor. The same `generate_args_decl` function used for calling triton kernels (and modified in #148051 to insert a global scratch space) is used to prepare the arguments to init1DTMADescriptor, and so it had an extra global scratch space arg. Then we'd get a null pointer passed into init1DTMADescriptor, resulting in an IMA later on when the kernel using the TMA descriptor runs.
This PR: adds an option to `generate_args_decl` to specify whether this is a triton kernel (in which case we should add the global scratch space arg) or not (when we shouldn't add the extra arg).
Note: this doesn't appear in CI because we don't run these tests with Hopper machines in CI.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @desertfire @yushangdi @benjaminglass1
| true
|
2,947,447,795
|
sync fork
|
alexanderlerner
|
closed
|
[
"module: rocm",
"release notes: releng"
] | 2
|
NONE
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,947,437,669
|
[ued][deepseek-vl2] builtin operator hash on type GenerationConfig
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Doc - https://docs.google.com/document/d/1Zm9TkApcQFpZ5CjwO6-8PEAGXAUwrE0MzKZQgXScHhQ/edit?tab=t.0
```
torch._dynamo.exc.Unsupported: Failed to trace builtin operator
Explanation: Dynamo does not know how to trace builtin operator `hash` with argument types ['GenerationConfig'] (has_kwargs False)
Hint: Avoid calling builtin `hash` with argument types ['GenerationConfig']. Consider using an equivalent alternative function/method to `hash`.
Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.
Hint: Please report an issue to PyTorch.
Developer debug context: builtin hash [<class 'torch._dynamo.variables.user_defined.UserDefinedObjectVariable'>] False
from user code:
File "/data/users/pianpwk/pytorch/torch/_dynamo/external_utils.py", line 70, in inner
return fn(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/pianpwk/.conda/envs/deepseek_vl2/lib/python3.10/site-packages/transformers/generation/utils.py", line 1334, in generate
and self.generation_config._original_object_hash == hash(self.generation_config)
```
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,436,091
|
[ued][deepseek-vl2] Graph break on copy.deepcopy
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Doc - https://docs.google.com/document/d/1Zm9TkApcQFpZ5CjwO6-8PEAGXAUwrE0MzKZQgXScHhQ/edit?tab=t.0
```
torch._dynamo.exc.Unsupported: copy.deepcopy UserDefinedObjectVariable(GenerationConfig)
from user code:
File "/data/users/pianpwk/pytorch/torch/_dynamo/external_utils.py", line 70, in inner
return fn(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/pianpwk/.conda/envs/deepseek_vl2/lib/python3.10/site-packages/transformers/generation/utils.py", line 1348, in generate
generation_config = copy.deepcopy(generation_config)
# tried to non-strict trace
torch.utils._pytree.register_constant(GenerationConfig)
@torch._dynamo.nonstrict_trace
def non_strict_deepcopy(x: GenerationConfig) -> GenerationConfig:
return copy.deepcopy(x)
# error:
File "/data/users/pianpwk/pytorch/torch/_higher_order_ops/flat_apply.py", line 121, in impl
assert is_valid_output(out)
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function flat_apply(*(TreeSpec(_ConstantFunction, ConstantNode(value=_ConstantFunction(func=<function TorchInGraphFunctionVariable.call_function.<locals>.patched_fn at 0x7f234b1080d0>)), []), TreeSpec(tuple, None, [TreeSpec(list, None, [TreeSpec(GenerationConfig, ConstantNode(value=GenerationConfig {
"bos_token_id": 0,
"eos_token_id": 1
}
), [])]),
TreeSpec(dict, [], [])])), **{}): got AssertionError()
from user code:
File "/data/users/pianpwk/pytorch/torch/_dynamo/external_utils.py", line 70, in inner
return fn(*args, **kwargs)
File "/data/users/pianpwk/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/pianpwk/.conda/envs/deepseek_vl2/lib/python3.10/site-packages/transformers/generation/utils.py", line 1356, in generate
generation_config = non_strict_deepcopy(generation_config)
```
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,413,322
|
[ued][qwen] Recompilations because of ID_MATCH on mod.forward
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Doc - https://docs.google.com/document/d/18DOOgTJRrDUb34G6yHcSl5NCM377CfFz2bnv-NaW-2M/edit?tab=t.0

### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,400,869
|
[ued][qwen][dynamo] inspect.signature with non-constant function
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Graph break - https://www.internalfb.com/phabricator/paste/view/P1760735285
Doc - https://docs.google.com/document/d/18DOOgTJRrDUb34G6yHcSl5NCM377CfFz2bnv-NaW-2M/edit?tab=t.0
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,396,054
|
[inductor] Fix mm logging for `torch._scaled_mm`
|
YUNQIUGUO
|
open
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"not4land",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
Summary:
This pr is just for recreation of the original pr: https://github.com/pytorch/pytorch/pull/149769
Fix mm logging for the `torch._scaled_mm` op, whose name breaks the original brittle underscore-parsing assumptions.
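For illustration (this is not the actual inductor parsing code), the leading underscore in the op name is what trips up naive underscore-based splitting:
```python
# Hypothetical illustration of the brittle parsing assumption.
name = "aten._scaled_mm"
print(name.split("_"))  # ['aten.', 'scaled', 'mm'] -- the op name gets split apart
```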
Test Plan: CI
Differential Revision: D71828732
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,395,016
|
test_pointwise_xlog1py/test_pointwise_zeta regressed for MPS inductor
|
dcci
|
closed
|
[
"module: mps",
"oncall: pt2",
"module: inductor"
] | 2
|
MEMBER
|
### 🐛 Describe the bug
```
_____________________________________________________________________________ MPSBasicTests.test_pointwise_zeta __________________
____________________________________________________________
Traceback (most recent call last):
File "/opt/homebrew/anaconda3/lib/python3.12/unittest/case.py", line 58, in testPartExecutor
yield
File "/opt/homebrew/anaconda3/lib/python3.12/unittest/case.py", line 634, in run
self._callTestMethod(testMethod)
File "/opt/homebrew/anaconda3/lib/python3.12/unittest/case.py", line 589, in _callTestMethod
if method() is not None:
^^^^^^^^
File "/Users/davidino/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/Users/davidino/pytorch/pytorch/test/inductor/test_mps_basic.py", line 127, in test_pointwise_zeta
self.common(
File "/opt/homebrew/anaconda3/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/Users/davidino/pytorch/pytorch/test/inductor/test_torchinductor.py", line 633, in check_model_gpu
check_model(
File "/Users/davidino/pytorch/pytorch/test/inductor/test_torchinductor.py", line 515, in check_model
self.assertEqual(
File "/Users/davidino/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 4094, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 16384 / 16384 (100.0%)
Greatest absolute difference: nan at index (0, 0) (up to 1e-05 allowed)
Greatest relative difference: nan at index (0, 0) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
python test/inductor/test_mps_basic.py MPSBasicTests.test_pointwise_zeta
```
### Versions
```
torch 2.8.0a0+git2096b60
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,339,395
|
[aoti] Better error message when torchbind object is used as a graph input in AOTI
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary: Given an explicit error when torchbind object is used as input to AoTI
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r test_torchbind_input
```
Differential Revision: D69490915
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,337,634
|
[ued][wan][Dynamo] Graph break on tensor slicing
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Doc - [docs.google.com/document/d/1Mx90_1BSc_t12vIhw4eApMMiwvmzZKNShPGXUcjMU-o/edit?tab=t.0#heading=h.143elfb7ki36](https://docs.google.com/document/d/1Mx90_1BSc_t12vIhw4eApMMiwvmzZKNShPGXUcjMU-o/edit?tab=t.0#heading=h.143elfb7ki36)

### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,329,717
|
[ued][wan] nonstrict_trace does not support None inputs
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Doc - [docs.google.com/document/d/1Mx90_1BSc_t12vIhw4eApMMiwvmzZKNShPGXUcjMU-o/edit?tab=t.0#heading=h.143elfb7ki36](https://docs.google.com/document/d/1Mx90_1BSc_t12vIhw4eApMMiwvmzZKNShPGXUcjMU-o/edit?tab=t.0#heading=h.143elfb7ki36)

### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,947,326,744
|
[ued][wan] Dynamic shape issue with tolist
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Doc - https://docs.google.com/document/d/1Mx90_1BSc_t12vIhw4eApMMiwvmzZKNShPGXUcjMU-o/edit?tab=t.0#heading=h.143elfb7ki36

I tried this manual - https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?tab=t.0 - but I was confused about what to do.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,255,042
|
AOTI freezing: fix test issues and enable by default
|
benjaminglass1
|
open
|
[
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149961
* #148773
* #144293
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,229,679
|
attn_implementation="eager" Buggy on Blackwell
|
Oseltamivir
|
closed
|
[
"high priority",
"triage review",
"needs reproduction",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 4
|
NONE
|
### 🐛 Describe the bug
```
tokenizer, model, image_processor, max_length = load_pretrained_model(
pretrained_model, None, model_name, device_map=device_map, attn_implementation="eager",
)
```
Causes the model (LLaVA-Video-7B-Qwen2) to output "!!!!!!!" tokens.
Fixed with `attn_implementation="sdpa"`.
### Versions
PyTorch version: 2.8.0.dev20250325+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA B200
GPU 1: NVIDIA B200
GPU 2: NVIDIA B200
GPU 3: NVIDIA B200
GPU 4: NVIDIA B200
GPU 5: NVIDIA B200
GPU 6: NVIDIA B200
GPU 7: NVIDIA B200
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8570
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 2
CPU(s) scaling MHz: 29%
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 600 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] open_clip_torch==2.31.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250325+cu128
[pip3] torchvision==0.16.2
[pip3] triton==3.2.0
[conda] numpy 1.26.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.8.0.87 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] open-clip-torch 2.31.0 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250325+cu128 pypi_0 pypi
[conda] torchvision 0.16.2 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,947,217,636
|
[inductor] Add more typing to _inductor/ir.py
|
rec
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149959
* #149958
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,217,398
|
[inductor] Add typing to _inductor/ir.py
|
rec
|
open
|
[
"oncall: distributed",
"module: rocm",
"module: cpu",
"open source",
"NNC",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)",
"module: compiled autograd",
"release notes: inductor (aoti)",
"skip-url-lint"
] | 25
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149958
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @EikanWang @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan
| true
|
2,947,186,512
|
[ued] Prohibitive warm start latency
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"module: compile-time"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We observed many models with large warm-compile-time latency. This is bad for users who are not using inference servers (e.g., for LLM serving) and instead run queries in a new process every time; a rough way to measure this is sketched after the list below.
* Flux - https://docs.google.com/document/d/1tKg4JQQchFfAStjvV9EpZq-VEiUUmf5GvS4VHh9pDOI/edit?tab=t.0
* Seen in https://github.com/huggingface/diffusers/issues/10795
* Gemma3 model - https://docs.google.com/document/d/1QIrkKedwnneNPTq5O7bpxvhatWxLTgESXObNRs62Q2M/edit?tab=t.0#heading=h.qkmcu7d03rly
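A rough way to see the warm-start overhead on a toy function (a sketch; real models show much larger first-call latency, and the numbers depend on whether compile caches are populated):
```python
import time
import torch

@torch.compile
def f(x):
    return torch.relu(x) + 1

x = torch.randn(1024, 1024)
t0 = time.perf_counter()
f(x)  # first call in this process: triggers (possibly cache-assisted) compilation
print("first-call latency:", time.perf_counter() - t0, "s")
t0 = time.perf_counter()
f(x)  # steady-state call
print("steady-state latency:", time.perf_counter() - t0, "s")
```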
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @oulgen @jamesjwu @aorenste @laithsakka
| true
|
2,947,179,705
|
[ued][flux][dynamo] Wrong error message with dynamo.disable
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug

Even when the user decorates a function with `dynamo.disable`, Dynamo gives the wrong error message. Should be easy to repro.
UED Model doc - https://docs.google.com/document/d/1tKg4JQQchFfAStjvV9EpZq-VEiUUmf5GvS4VHh9pDOI/edit?tab=t.0
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,947,123,596
|
[MPS][BE] Add `c10/metal/common.h`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149955
That could be shared between host and Metal code.
So far it contains only one constant, which is the maximum number of tensor dimensions.
| true
|
2,947,122,254
|
Skip cxxabi check for s390x
|
AlekseiNikiforovIBM
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 3
|
COLLABORATOR
|
On s390x, gcc 14 is used because it contains a fix for the interaction between precompiled headers and vectorization builtins. This fix is not available in earlier gcc versions. gcc 14 uses ABI19, but the check still fails, so skip it for now.
| true
|
2,947,112,001
|
`RuntimeError: UR error` with XPU
|
idkSeth
|
open
|
[
"module: binaries",
"triaged",
"module: xpu"
] | 29
|
NONE
|
### 🐛 Describe the bug
Tried with the current stable version `torch 2.6.0+xpu` and latest nightly `torch 2.8.0.dev20250321+xpu`.
I get `RuntimeError: UR error` whenever I try to use XPU for various tasks.
Minimal code to reproduce:
```
import torch
t = torch.tensor([0], device="xpu")
t.to(torch.float16)
```
Error:
```
Traceback (most recent call last):
File "<python-input-3>", line 1, in <module>
t.to(torch.float16)
~~~~^^^^^^^^^^^^^^^
RuntimeError: UR error
```
Setting oneAPI env variables before results in the following error when importing torch:
```
ImportError: /opt/intel/oneapi/compiler/2025.1/lib/libur_loader.so.0: version `LIBUR_LOADER_0.10' not found (required by /media/seth/Main/conda_pytorch/lib/python3.12/site-packages/torch/lib/../../../../libsycl.so.8)
```
`torch.xpu.is_available()` returns `True`
I have tried the solutions suggested in [#147226](https://github.com/pytorch/pytorch/issues/147226) and [#144143](https://github.com/pytorch/pytorch/issues/144143) by updating conda, removing `libstdc++.so.6` and updating `libstdcxx`.
Output of `SYCL_UR_TRACE=1 python -c "import torch; print(torch.xpu.is_available())"` with the latest nightly version:
```
<LOADER>[INFO]: loaded adapter 0x0x41a2cd60 (libur_adapter_level_zero.so.0)
<LOADER>[INFO]: loaded adapter 0x0x42206580 (libur_adapter_opencl.so.0)
<LOADER>[INFO]: failed to load adapter 'libur_adapter_cuda.so.0' with error: libur_adapter_cuda.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/media/seth/Main/conda_pytorch/lib/python3.12/site-packages/torch/lib/../../../../../lib/libur_adapter_cuda.so.0' with error: /media/seth/Main/conda_pytorch/lib/python3.12/site-packages/torch/lib/../../../../../lib/libur_adapter_cuda.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter 'libur_adapter_hip.so.0' with error: libur_adapter_hip.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/media/seth/Main/conda_pytorch/lib/python3.12/site-packages/torch/lib/../../../../../lib/libur_adapter_hip.so.0' with error: /media/seth/Main/conda_pytorch/lib/python3.12/site-packages/torch/lib/../../../../../lib/libur_adapter_hip.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter 'libur_adapter_native_cpu.so.0' with error: libur_adapter_native_cpu.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/media/seth/Main/conda_pytorch/lib/python3.12/site-packages/torch/lib/../../../../../lib/libur_adapter_native_cpu.so.0' with error: /media/seth/Main/conda_pytorch/lib/python3.12/site-packages/torch/lib/../../../../../lib/libur_adapter_native_cpu.so.0: cannot open shared object file: No such file or directory
True
```
With `torch 2.6.0+xpu`
```
[W326 00:19:55.428225956 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
Overriding a previously registered kernel for the same operator and the same dispatch key
operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
dispatch key: XPU
previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
<LOADER>[INFO]: loaded adapter 0x0x43f2940 (libur_adapter_level_zero.so.0)
<LOADER>[INFO]: loaded adapter 0x0x12190180 (libur_adapter_opencl.so.0)
<LOADER>[INFO]: failed to load adapter 'libur_adapter_cuda.so.0' with error: libur_adapter_cuda.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/media/seth/Second/conda_intel-pytorch/lib/python3.13/site-packages/torch/lib/../../../../../lib/libur_adapter_cuda.so.0' with error: /media/seth/Second/conda_intel-pytorch/lib/python3.13/site-packages/torch/lib/../../../../../lib/libur_adapter_cuda.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter 'libur_adapter_hip.so.0' with error: libur_adapter_hip.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/media/seth/Second/conda_intel-pytorch/lib/python3.13/site-packages/torch/lib/../../../../../lib/libur_adapter_hip.so.0' with error: /media/seth/Second/conda_intel-pytorch/lib/python3.13/site-packages/torch/lib/../../../../../lib/libur_adapter_hip.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter 'libur_adapter_native_cpu.so.0' with error: libur_adapter_native_cpu.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/media/seth/Second/conda_intel-pytorch/lib/python3.13/site-packages/torch/lib/../../../../../lib/libur_adapter_native_cpu.so.0' with error: /media/seth/Second/conda_intel-pytorch/lib/python3.13/site-packages/torch/lib/../../../../../lib/libur_adapter_native_cpu.so.0: cannot open shared object file: No such file or directory
True
```
### Versions
Attached are the outputs from both versions of torch installed without setting oneAPI env variables.
[latest_nightly.txt](https://github.com/user-attachments/files/19452065/latest_nightly.txt)
[stable.txt](https://github.com/user-attachments/files/19452066/stable.txt)
cc @seemethere @malfet @osalpekar @atalman @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,947,063,623
|
Removing doc references to PRE_CXX11_ABI.
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Fixes #149550
cc @svekars @sekyondaMeta
| true
|
2,947,063,502
|
[MPS] `torch.fmax`/`torch.fmin` produce garbage for mixed dtypes
|
malfet
|
closed
|
[
"triaged",
"module: correctness (silent)",
"module: mps"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
For example see
```
%python -c "import torch;print(torch.rand(3, device='mps').fmax(torch.arange(3., device='mps', dtype=torch.half)))"
tensor([0.8821, 0.9714, 0.2126], device='mps:0')
% python -c "import torch;print(torch.rand(3, device='mps').fmax(torch.arange(3., device='mps', dtype=torch.float)))"
tensor([0.3768, 1.0000, 2.0000], device='mps:0')
```
2nd and 3rd elements should be 1 and 2, regardless of dtypes, but they are not
### Versions
2.6.0, nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,947,035,476
|
Use statically known true in should_decompose_mm
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149950
This meta function is causing recompiles for large ads runs due to overguarding: https://www.internalfb.com/ai_infra/job_inspector/guided/pt2_compile?jobName=aps-ig_fm_v4_pt2_on-6e0a734dcc&jobVersion=0&jobAttempt=0
If we look at the reasons, it's because of this function adding guards: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/aps-ig_fm_v4_pt2_on-6e0a734dcc/attempt_0/version_0/rank_0/-_18_8_0/recompile_reasons_1971.json?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
This PR moves to statically_known_true so we don't overly guard for dynamic shapes.
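For illustration, the difference between guarding and `statically_known_true` looks roughly like this (a sketch under assumed names, not the actual `should_decompose_mm` code):
```python
# Sketch: statically_known_true returns False instead of installing a guard
# when the condition cannot be proven for symbolic shapes.
from torch.fx.experimental.symbolic_shapes import statically_known_true

def should_decompose(m, k, threshold=32):
    # Evaluating `m <= threshold` directly on SymInts installs a guard and can
    # trigger recompiles when the dynamic shape changes; statically_known_true does not.
    return statically_known_true(m <= threshold) and statically_known_true(k <= threshold)
```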
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,947,029,419
|
[TESTING] triton WS version ee6a03d19db0de2148c2604994e0256eeaefc5bc
|
davidberard98
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149949
| true
|
2,947,011,516
|
Test whether origin/main CI is broken
|
ahmadsharif1
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,946,957,897
|
Dont exclude constant_pad_nd in prologue fusion
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149947
Originally, I excluded constant_pad_nd from fusing to be conservative on compilation time. But, on benchmarking, you do occasionally get speedups by fusing it. Also includes a fix for making a single, contiguous dep for prologues.
For instance, the following benchmark gets a 7% speedup by fusing in the constant_pad_nd.
```
import torch
import torch.nn.functional as F
torch._inductor.config.force_disable_caches = True
padded_N = 2048
n_pad_rows = 100
K, N = 2048, 4096
tensor1 = torch.randn(padded_N - n_pad_rows, 4096, device="cuda").to(torch.bfloat16)
tensor2 = torch.randn(4096, 4096, device="cuda").to(torch.bfloat16)
@torch.compile(mode='max-autotune-no-cudagraphs')
def masked_linear(input, weight, n_pad_input_rows):
"""
Linear layer with input padded by `n_pad_input_rows` rows
"""
# Use constant_pad_nd to pad with zeros for the invalid rows
padded_input = F.pad(tensor1, (0, 0, 0, n_pad_input_rows), "constant", 0)
return F.linear(padded_input, weight)
# Invoke the function
masked_linear(tensor1, tensor2, n_pad_rows)
```
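To measure the speedup mentioned above, one could time the call with `torch.utils.benchmark` (a sketch; toggling the fusion itself would require the relevant inductor config, which is not shown here):
```python
# Sketch: time the compiled call from the benchmark above.
from torch.utils import benchmark

t = benchmark.Timer(
    stmt="masked_linear(tensor1, tensor2, n_pad_rows)",
    globals={"masked_linear": masked_linear, "tensor1": tensor1,
             "tensor2": tensor2, "n_pad_rows": n_pad_rows},
)
print(t.timeit(100))
```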
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,946,891,679
|
[Async TP] Fuse matmul-reduce-scatters when reduce scatters have multiple users, and save fused node for backward instead of reduce_scatter node
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"Merged",
"release notes: distributed (pipeline)",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 4
|
CONTRIBUTOR
|
Fixes #149876
## Stack
- [previous PR in stack] https://github.com/pytorch/pytorch/pull/149247
## TL;DR
This PR implements support in async TP for saving the reduce-scatter result for backward, which previously would break the torchtitan AC policies: no AC, per op SAC, and per layer SAC.
## Context
In torchtitan's LLama3 per op SAC policy, we want to save the output of `reduce_scatter` ops for backward, which is useful for TP. The reduce_scatter op is also saved for No AC (since all activations are saved) and per layer SAC (since we save the activations for N full layers, which do contain reduce-scatters for TP).
However, doing this causes incompatibility with Async TP for the AC policies above, for 2 reasons:
1) The graph pattern matching specifically only matches on reduce scatter nodes with 1 user, but reduce_scatter nodes saved for backwards will have 2 users (the 2nd one being the return/output node, which saves it for backward).
2) The subgraph replacement logic which replaces the users of the `wait_tensor` after the reduce-scatter with the new fused node has no mechanism to save the fused_node for backward instead of the reduce-scatter node. This means we cannot directly replace the subgraph, since we can't delete nodes which still have users (in this case, the output node is still using the reduce-scatter node).
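To illustrate point 1 above, the user count that the matcher keys on is just the FX node's user count and can be inspected directly (a sketch; `gm` and the substring check on the op target are assumptions, not the real matcher code):
```python
# Sketch: print how many users each reduce-scatter node has in the traced graph.
for node in gm.graph.nodes:
    if "reduce_scatter" in str(node.target):
        print(node.name, "users:", len(node.users))
```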
To fix this, we do 2 things:
1) Add additional pattern matching logic to also match reduce-scatter nodes with 2 users, so we also perform fusion when reduce-scatter is saved for backward.
2) When replacing the subgraph with the fused node, detect if the reduce-scatter was saved for backward, and if so, save the result of the fused node for backward instead. This enables us to properly erase the subgraph and prevent the memory leak which occurred in #149876
## Other changes
- Continue to throw an error if we don't find any candidate all-gathers or reduce-scatters for fusion (since TP should have both) but DON'T throw an error if we don't fuse any matmul-reduce-scatters. This is because I've found there are actually valid graphs where we do fuse reduce scatters in the forward graph but not the backward graph (in the backward pass there are reduce-scatters but the producer op is an "add" not a mm/scaled_mm).
## Test plan
1. All unit tests are passing
2. Visualized the graphs and verified the fusion is occurring properly.
3. Verified via manual torchtitan runs there is no memory leak / OOM occurring anymore.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan
| true
|
2,946,725,309
|
Add triton as dependency to CUDA aarch64 build
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Aarch64 Triton build was added by: https://github.com/pytorch/pytorch/pull/148705
Hence, add the proper constraint to the CUDA 12.8 aarch64 build.
Please note we still want to use:
```platform_system == 'Linux' and platform_machine == 'x86_64'```
for all other builds, since these are prototype binaries only used by the CUDA 12.8 Linux aarch64 build, which we would like to serve from download.pytorch.org.
| true
|
2,946,698,348
|
include cudagraph skip reasons in tlparse output
|
bdhirsh
|
open
|
[
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
here's a repro:
```
import torch
torch._dynamo.config.capture_dynamic_output_shape_ops = True
@torch.compile(mode='reduce-overhead')
def f(x):
y = x.nonzero()
return y
x = torch.randn(16, device='cuda')
out = f(x)
```
and the corresponding tlparse: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/hirsheybar/4ece4384-915f-4df9-9f69-e27424cf574b/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
If you run locally with `TORCH_LOGS="+cudagraphs"`, you'll get nice info about the fact that cudagraphs were skipped. From the tlparse alone, though, this information is not available - it would be nice if we could log it for folks that rely mainly on tlparse for information.
cc @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,945,929,668
|
[BE] Replace XPU support packages installation to offline mode in Linux CI/CD
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
To ensure the build environment is stable
| true
|
2,945,783,704
|
Add `load_state_dict` doc hint about invocation order with lr_scheduler
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 3
|
CONTRIBUTOR
|
Fixes #119168
## Test Result

| true
|
2,945,675,646
|
SEGV in static destructors when Vulkan is enabled
|
yurivict
|
open
|
[
"needs reproduction",
"module: crash",
"triaged",
"module: vulkan"
] | 2
|
NONE
|
### 🐛 Describe the bug
SEGV at exit when torch.is_vulkan_available() is called.
The crash is in static destructors:
```
(gdb) bt
#0 0x000000008f16daa0 in ?? ()
#1 0x000000002c678124 in __cxa_finalize (dso=dso@entry=0x0) at /disk-samsung/freebsd-src/lib/libc/stdlib/atexit.c:237
#2 0x000000002c6786bc in exit (status=0) at /disk-samsung/freebsd-src/lib/libc/stdlib/exit.c:92
#3 0x000000002c59811b in __libc_start1 (argc=1, argv=0x7fffffffe648, env=0x7fffffffe658, cleanup=<optimized out>, mainX=0x201770) at /disk-samsung/freebsd-src/lib/libc/csu/libc_start1.c:157
#4 0x00000000002016d0 in _start ()
```
### Versions
PyTorch-2.6.0
Python-3.11
clang-19
FreeBSD 14.2
| true
|
2,945,577,481
|
[ONNX] Export generates unnecessary _aten_layer_norm_onnx wrapper for LayerNorm operation
|
novikov-alexander
|
closed
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
When attempting to convert a PyTorch model containing the `torch.nn.LayerNorm` operation to ONNX format, I observed different graph representations depending on the value of the `dynamo` parameter in `torch.onnx.export`.
**Expected behavior:**
- When `dynamo=False`, the exported ONNX graph correctly represents the `LayerNorm` operation without any additional wrappers.
<img width="176" alt="Image" src="https://github.com/user-attachments/assets/7b142b5d-1a97-439f-a130-fed69edca57c" />
**Actual behavior:**
- When `dynamo=True`, the `LayerNorm` operation is wrapped by `_aten_layer_norm_onnx`, which contains `LayerNormalization` inside.
<img width="203" alt="Image" src="https://github.com/user-attachments/assets/03db8e84-710f-45b8-9b78-eed55f33ec3f" />
**Steps to reproduce:**
1. Create a simple model with `torch.nn.LayerNorm`.
2. Export the model to ONNX using `torch.onnx.export` with `dynamo=True` and `dynamo=False`.
3. Compare the resulting ONNX graphs.
**Code snippet:**
```python
import torch
norm_model = torch.nn.LayerNorm(256)
norm_x = torch.rand(950, 1, 256)
torch.onnx.export(
norm_model,
norm_x,
'norm_test_dynamo_false.onnx',
input_names=['input'],
output_names=['output'],
opset_version=17,
dynamo=False
)
torch.onnx.export(
norm_model,
norm_x,
'norm_test_dynamo_true.onnx',
input_names=['input'],
output_names=['output'],
opset_version=17,
dynamo=True
)
```
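To see the wrapper, one can inspect the `dynamo=True` export with the `onnx` package (a sketch; the expectation that the wrapper shows up as a local function is an assumption based on the graphs above):
```python
# Sketch: list the top-level nodes and local functions of the exported model.
import onnx

m = onnx.load("norm_test_dynamo_true.onnx")
print([n.op_type for n in m.graph.node])  # top-level nodes
print([f.name for f in m.functions])      # local functions, e.g. the _aten_layer_norm_onnx wrapper
```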
**Additional information:**
- The issue occurs when using any `opset_version`.
Please let me know if there is a way to correctly export the `LayerNorm` operation when `dynamo=True`.
### Versions
PyTorch version: 2.6.0+cu118
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.21.0
[pip3] onnxscript==0.2.2
[pip3] torch==2.6.0
[pip3] torch_cluster==1.6.3
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
| true
|
2,945,548,056
|
Make `Adam`, `AdamW` work with nonzero-dim Tensor betas
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 1
|
CONTRIBUTOR
|
Fixes #147921
## Changes
- Convert tensor `betas` using `_to_scalar`
- Change annotation of `betas` param
- Change param type in docs
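For context, a minimal usage sketch of what this change is meant to support (the accepted shapes are an assumption based on the linked issue):
```python
# Sketch: nonzero-dim tensor betas passed to Adam.
import torch

model = torch.nn.Linear(4, 4)
betas = (torch.tensor([0.9]), torch.tensor([0.999]))  # 1-D tensor betas
opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=betas)
model(torch.randn(2, 4)).sum().backward()
opt.step()
```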
## Test Result
```bash
pytest -s test/test_optim.py -k test_tensor_lr -vv
```


| true
|
2,945,542,651
|
There is some confusion about when the parameters of a module have grad: before the function registered with register_full_backward_hook executes, or after it
|
l1351868270
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
I wrote some code to test register_full_backward_hook. According to my understanding, the function registered via register_full_backward_hook executes after backward. But when I wrote some test code, the result conflicted with my understanding.
```
import torch
import torch.nn as nn
batch = 1
in_features = 4
out_features = 8
device = f'cpu'
factory_kwargs = {'device': device, 'dtype': torch.float32}
def hook_fn(module, grad_input, grad_output):
params = list(module.parameters())
for param in params:
print(f"hook grad: {param.grad}")
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.fc = torch.nn.Linear(in_features, out_features, bias=False, **factory_kwargs)
def forward(self, x):
x = self.fc(x)
return x
model = Model()
model.register_full_backward_hook(hook_fn)
input = torch.randn(batch, in_features, **factory_kwargs)
output = model(input)
output.backward(torch.randn_like(output))
for param in model.parameters():
print(f"grad: {param.grad}")
```
The result is below; this means that before hook_fn runs, the parameters have no grad:
```
hook grad: None
grad: tensor([[-0.1531, -0.0411, -0.1595, 0.2487],
[-1.1162, -0.2994, -1.1628, 1.8129],
[-0.0636, -0.0171, -0.0663, 0.1033],
[-0.6744, -0.1809, -0.7026, 1.0954],
[ 0.5797, 0.1555, 0.6039, -0.9415],
[ 0.1139, 0.0305, 0.1186, -0.1849],
[ 0.3896, 0.1045, 0.4059, -0.6328],
[-0.0853, -0.0229, -0.0888, 0.1385]])
```
But when I change the input to requires_grad=True:
```
import torch
import torch.nn as nn
batch = 1
in_features = 4
out_features = 8
device = f'cpu'
factory_kwargs = {'device': device, 'dtype': torch.float32}
def hook_fn(module, grad_input, grad_output):
params = list(module.parameters())
for param in params:
print(f"hook grad: {param.grad}")
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.fc = torch.nn.Linear(in_features, out_features, bias=False, **factory_kwargs)
def forward(self, x):
x = self.fc(x)
return x
model = Model()
model.register_full_backward_hook(hook_fn)
input = torch.randn(batch, in_features, **factory_kwargs, requires_grad=True)
output = model(input)
output.backward(torch.randn_like(output))
for param in model.parameters():
print(f"grad: {param.grad}")
```
The result is below; this means that before hook_fn runs, the parameters have grad.
The only difference is input = torch.randn(batch, in_features, **factory_kwargs, requires_grad=True).
```
hook grad: tensor([[-0.8558, 1.0175, 0.6584, 0.1840],
[ 0.8886, -1.0565, -0.6837, -0.1911],
[ 0.4815, -0.5724, -0.3704, -0.1035],
[-0.3408, 0.4052, 0.2622, 0.0733],
[-0.4538, 0.5396, 0.3491, 0.0976],
[-0.4484, 0.5331, 0.3450, 0.0964],
[ 0.0482, -0.0574, -0.0371, -0.0104],
[-1.1709, 1.3922, 0.9009, 0.2518]])
grad: tensor([[-0.8558, 1.0175, 0.6584, 0.1840],
[ 0.8886, -1.0565, -0.6837, -0.1911],
[ 0.4815, -0.5724, -0.3704, -0.1035],
[-0.3408, 0.4052, 0.2622, 0.0733],
[-0.4538, 0.5396, 0.3491, 0.0976],
[-0.4484, 0.5331, 0.3450, 0.0964],
[ 0.0482, -0.0574, -0.0371, -0.0104],
[-1.1709, 1.3922, 0.9009, 0.2518]])
```
Is there a more detailed explanation of register_full_backward_hook?
In my understanding, the parameters already have requires_grad=True, so whether they have grad should be independent of the input.
### Versions
master
| true
|
2,945,540,112
|
torch.inverse can't be exported to ONNX with opset 16
|
Lucky-BenXie
|
open
|
[
"needs reproduction",
"module: onnx",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
torch.inverse can't be exported to ONNX when opset_version == 16
### Versions
111
| true
|
2,945,488,946
|
refresh results of benchmarks
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149936
While the test was disabled, I put up a fix, but another win change landed before the test was restored, so it stayed disabled.
<img width="698" alt="Screenshot 2025-03-24 at 6 26 36 PM" src="https://github.com/user-attachments/assets/2713c685-aee2-4dea-9a6c-cad01ef575cd" />
caused by
https://github.com/pytorch/pytorch/pull/149295
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,945,451,498
|
update aotinductor doc for XPU support
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
As titled. Since the AOTInductor feature works on Intel GPU starting from 2.7, add the related content to its doc.
| true
|
2,945,443,564
|
Inconsistent behavior passing tensor/tensor_list from torch.distributed module to cpp API
|
sanshang-nv
|
open
|
[
"oncall: distributed",
"triaged"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Inconsistent behavior when passing a tensor/tensor_list from the `torch.distributed` module to the cpp implementation. Some wrap the tensor/tensor list in one more list, but some don't. This causes an inconsistent format in the dumped Execution Trace.
For example:
https://github.com/pytorch/pytorch/blob/5a7588f1832a840285ed29b039f01b9031570e5c/torch/distributed/distributed_c10d.py#L3786
https://github.com/pytorch/pytorch/blob/5a7588f1832a840285ed29b039f01b9031570e5c/torch/distributed/distributed_c10d.py#L3890
https://github.com/pytorch/pytorch/blob/5a7588f1832a840285ed29b039f01b9031570e5c/torch/distributed/distributed_c10d.py#L4237
https://github.com/pytorch/pytorch/blob/5a7588f1832a840285ed29b039f01b9031570e5c/torch/distributed/distributed_c10d.py#L4626
Is this a bug?
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+ecf3bae
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU(s) scaling MHz: 144%
CPU max MHz: 2801.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.10.0
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] optree==0.14.0
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.2.0+git0d4682f0b.nvinternal
[pip3] torch==2.7.0a0+ecf3bae
[pip3] torch_geometric==2.5.3
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0a0+8ea4772
[pip3] torchvision==0.22.0a0+8ea4772
[pip3] triton==3.2.0
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,945,442,115
|
Add XPU and SYCL Merge Patterns
|
EikanWang
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149933
As the title
| true
|