| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,983,218,493
|
Introduce test skip markers for Sandcastle
|
Flamefire
|
open
|
[
"oncall: distributed",
"oncall: jit",
"triaged",
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Simplify the markers a bit to make them more expressive.
It also makes it easier to skip those tests "manually" by changing the single definition of the skip marker.
This is important to reduce potential false positives (of failed tests) in some environments, such as HPC clusters.
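As a rough illustration only (the names, decorator, and environment variable below are assumptions, not what the PR actually adds), a single reusable skip marker could look like this:
```python
# Illustrative sketch: one shared skip-marker definition that many tests reuse,
# so a single edit can disable the whole group in a given environment.
# `RUN_ENVIRONMENT` and `skip_in_sandcastle` are hypothetical names.
import os
import unittest

IS_SANDCASTLE_LIKE = os.environ.get("RUN_ENVIRONMENT") == "sandcastle"

skip_in_sandcastle = unittest.skipIf(
    IS_SANDCASTLE_LIKE,
    "Skipped in Sandcastle/HPC-like environments to avoid false positives",
)


class ExampleDistributedTest(unittest.TestCase):
    @skip_in_sandcastle
    def test_something_environment_sensitive(self):
        self.assertTrue(True)
```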
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,983,218,454
|
DISABLED test_parity__foreach_acos_fastpath_inplace_cuda_complex128 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_inplace_cuda_complex128&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40237819332).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_inplace_cuda_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_acos_', keys=('aten::_foreach_acos_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.complex128], Tensor[size=(19, 19), device="cuda:0", dtype=torch.complex128], Tensor[size=(18, 18), device="cuda:0", dtype=torch.complex128], Tensor[size=(17, 17), device="cuda:0", dtype=torch.complex128], Tensor[size=(16, 16), device="cuda:0", dtype=torch.complex128], Tensor[size=(15, 15), device="cuda:0", dtype=torch.complex128], Tensor[size=(14, 14), device="cuda:0", dtype=torch.complex128], Tensor[size=(13, 13), device="cuda:0", dtype=torch.complex128], Tensor[size=(12, 12), device="cuda:0", dtype=torch.complex128], Tensor[size=(11, 11), device="cuda:0", dtype=torch.complex128], Tensor[size=(10, 10), device="cuda:0", dtype=torch.complex128], Tensor[size=(9, 9), device="cuda:0", dtype=torch.complex128], Tensor[size=(8, 8), device="cuda:0", dtype=torch.complex128], Tensor[size=(7, 7), device="cuda:0", dtype=torch.complex128], Tensor[size=(6, 6), device="cuda:0", dtype=torch.complex128], Tensor[size=(5, 5), device="cuda:0", dtype=torch.complex128], Tensor[size=(4, 4), device="cuda:0", dtype=torch.complex128], Tensor[size=(3, 3), device="cuda:0", dtype=torch.complex128], Tensor[size=(2, 2), device="cuda:0", dtype=torch.complex128], Tensor[size=(1, 1), device="cuda:0", dtype=torch.complex128]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_acos_fastpath_inplace_cuda_complex128
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,983,218,447
|
DISABLED test_foreach_reduce_large_input__foreach_max_w_empty_False_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 2
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_reduce_large_input__foreach_max_w_empty_False_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40239501580).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_reduce_large_input__foreach_max_w_empty_False_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,982,691,025
|
Update triton wheel build, setuptools pin
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Observing failure in release workflow:
https://github.com/pytorch/pytorch/actions/runs/14346340202/job/40216804374
```
Traceback (most recent call last):
File "/opt/python/cp311-cp311/lib/python3.11/site-packages/wheel/bdist_wheel.py", line 11, in <module>
from setuptools.command.bdist_wheel import bdist_wheel as bdist_wheel
ModuleNotFoundError: No module named 'setuptools.command.bdist_wheel'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/tmppwpqef_x/triton/python/setup.py", line 27, in <module>
from wheel.bdist_wheel import bdist_wheel
File "/opt/python/cp311-cp311/lib/python3.11/site-packages/wheel/bdist_wheel.py", line 13, in <module>
raise ImportError(ERROR) from exc
ImportError: The 'wheel.bdist_wheel' module has been removed.
Please update your setuptools to v70.1 or later.
If you're explicitly importing 'wheel.bdist_wheel', please update your import to point to 'setuptools.command.bdist_wheel' instead.
```
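The error message itself suggests the compatible import; a minimal sketch of that fallback (whether the actual fix updates the import or pins setuptools is not shown here):
```python
# Minimal sketch of the import fallback the error message recommends.
# On setuptools >= 70.1 the command ships with setuptools itself; older
# environments still expose it through the standalone `wheel` package.
try:
    from setuptools.command.bdist_wheel import bdist_wheel
except ImportError:
    from wheel.bdist_wheel import bdist_wheel
```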
| true
|
2,982,589,347
|
[1/N] Use internal linkage in torch/csrc C++ files
|
cyyever
|
closed
|
[
"oncall: distributed",
"module: cpu",
"module: mkldnn",
"open source",
"Merged",
"NNC",
"ciflow/trunk",
"release notes: quantization",
"release notes: linalg_frontend",
"ciflow/periodic",
"module: dynamo",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 11
|
COLLABORATOR
|
Make more functions and variables static if they are not used outside their .cpp files. Unused functions are removed.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @voznesenskym @penguinwu @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,982,518,956
|
Docs: Add missing whitespace in the cmake warning message
|
koyuki7w
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
A trailing whitespace is needed so that the message is concatenated with the following string correctly.
| true
|
2,982,518,800
|
ShardTensor gather will encounter an error when a local tensor on certain ranks has zero elements
|
tiankongdeguiji
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When the local tensor on certain ranks has zero elements, `ShardedTensor.gather` will raise an error. We can reproduce this using the following command: `torchrun --master_addr=localhost --master_port=49941 --nnodes=1 --nproc-per-node=8 test_shard_gather.py`
test_shard_gather.py
```python
import os
import torch
import numpy as np
import torch.distributed as dist
from torch.distributed._shard.sharded_tensor import ShardedTensor, Shard, ShardMetadata
from torch.distributed._shard.sharding_spec import ChunkShardingSpec

rank = int(os.environ["RANK"])
dist.init_process_group(backend="gloo")
world_size = dist.get_world_size()
assert world_size == 8


def init_tensor(local_tensor_sizes):
    cumsum_tensor_sizes = np.cumsum([0] + local_tensor_sizes)
    if rank == 0:
        dst_tensor = torch.empty((sum(local_tensor_sizes),), dtype=torch.float32)
    else:
        dst_tensor = None
    src_tensor = ShardedTensor._init_from_local_shards(
        [
            Shard(
                tensor=torch.tensor(
                    list(range(local_tensor_sizes[rank])),
                    dtype=torch.float32,
                ),
                metadata=ShardMetadata(
                    shard_offsets=[cumsum_tensor_sizes[rank]],
                    shard_sizes=[local_tensor_sizes[rank]],
                    placement=f"rank:{rank}/cpu"
                ),
            )
        ],
        (sum(local_tensor_sizes),)
    )
    return src_tensor, dst_tensor


src_tensor1, dst_tensor1 = init_tensor([3, 3, 3, 3, 3, 3, 2, 0])
src_tensor2, dst_tensor2 = init_tensor([3, 3, 3, 3, 3, 3, 3, 3])
src_tensor1.gather(out=dst_tensor1, dtype=torch.float32)
src_tensor2.gather(out=dst_tensor2, dtype=torch.float32)
```
error info
```
lib/python3.11/site-packages/torch/distributed/_shard/sharded_tensor/api.py:457: UserWarning: Gathering a tensor with zero elements on rank 7
warnings.warn(
[rank0]: Traceback (most recent call last):
[rank0]: File "test_shard_gather.py", line 47, in <module>
[rank0]: src_tensor2.gather(out=dst_tensor2, dtype=torch.float32)
[rank0]: File "lib/python3.11/site-packages/torch/distributed/_shard/sharded_tensor/api.py", line 464, in gather
[rank0]: dist.gather(
[rank0]: File "lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 4011, in gather
[rank0]: work.wait()
[rank0]: RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [11.159.115.41]:5854
E0409 19:23:38.904000 98234 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 98475) of binary: /home/hongsheng.jhs/Library/anaconda2/envs/trec110/bin/python
Traceback (most recent call last):
File "bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==2.6.0+cu124', 'console_scripts', 'torchrun')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/torch/distributed/run.py", line 918, in main
run(args)
File "lib/python3.11/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```
### Versions
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,982,328,511
|
[CI] Enable XCCL in XPU CI build
|
chuanqi129
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"keep-going",
"ciflow/xpu"
] | 10
|
COLLABORATOR
|
As XCCL has been enabled for torch xpu, enable it in the CI build.
| true
|
2,982,257,076
|
Cannot export once a nn.Module is compiled
|
GdoongMathew
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Once an `nn.Module` is compiled through `module.compile()`, it fails during `torch.export.export`.
## Example
```python
import torch


class Custom(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.module = torch.nn.Conv2d(3, 3, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ret = self.module(x)
        return ret


if __name__ == "__main__":
    image = torch.zeros((2, 3, 128, 128), device="cuda")
    custom_module = Custom()
    custom_module.cuda().eval()
    custom_module.compile()
    torch.export.export(
        custom_module,
        (image,),
    )
```
Using `torch.compile(custom_module)` wouldn't cause this problem, though.
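For contrast, a minimal sketch of the working variant under that reading: `torch.compile()` returns a new `OptimizedModule` wrapper and leaves `custom_module` itself un-patched, so the original module can still be exported (whether the reporter exported the wrapper or the original is not stated).
```python
# Hedged sketch: wrap with torch.compile() instead of compiling in place,
# then export the un-patched original module.
compiled_module = torch.compile(custom_module)  # wrapper; custom_module unchanged
exported = torch.export.export(custom_module, (image,))
```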
### Error logs
I0409 17:46:27.915000 112626 torch/_dynamo/eval_frame.py:1755] Failed to capture a graph during tracing as no tensor operations were found.:
I0409 17:46:27.915000 112626 torch/_dynamo/eval_frame.py:1755]
I0409 17:46:27.915000 112626 torch/_dynamo/eval_frame.py:1755] class GraphModule(torch.nn.Module):
I0409 17:46:27.915000 112626 torch/_dynamo/eval_frame.py:1755] def forward(self, x: "f32[2, 3, 128, 128]"):
I0409 17:46:27.915000 112626 torch/_dynamo/eval_frame.py:1755] return ()
I0409 17:46:27.915000 112626 torch/_dynamo/eval_frame.py:1755]
AssertionError: Unexpectedly found a <class 'torch.Tensor'> in the outputs.
### Versions
<details>
<summary>Environment Info</summary>
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU
Nvidia driver version: 555.99
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 6900HS with Radeon Graphics
CPU family: 25
Model: 68
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
BogoMIPS: 6587.62
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable non
stop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs
ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
</details>
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,982,143,457
|
Test new Windows Arm64 runner image
|
iremyux
|
closed
|
[
"open source",
"ciflow/binaries",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Draft PR to see if the new Windows Arm64 runner image works as expected.
| true
|
2,982,101,598
|
Fix StrictMinMaxConstraint issue
|
FlintWangacc
|
open
|
[
"open source",
"release notes: fx",
"fx"
] | 2
|
NONE
|
Fixes #150922
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,982,100,650
|
Add more check for torch.cuda.nccl
|
FFFrog
|
open
|
[
"oncall: distributed",
"open source",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151221
* __->__ #150923
Changes:
- add op check for nccl operations
- add related tests for op check
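A purely illustrative sketch of the kind of early argument check described above (the helper name and messages are assumptions, not the actual patch):
```python
from collections.abc import Sequence

import torch


def _check_nccl_inputs(inputs) -> None:
    # Reject obviously invalid arguments before they reach the NCCL bindings,
    # so callers get a clear Python error instead of an opaque backend failure.
    if not isinstance(inputs, Sequence) or len(inputs) == 0:
        raise TypeError("expected a non-empty sequence of tensors")
    if not all(isinstance(t, torch.Tensor) and t.is_cuda for t in inputs):
        raise TypeError("all inputs must be CUDA tensors")
```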
| true
|
2,982,099,310
|
StrictMinMaxConstraint issue in pytorch 2.4.0
|
FlintWangacc
|
open
|
[
"triaged",
"module: fx"
] | 0
|
NONE
|
### 🐛 Describe the bug
```python
class StrictMinMaxConstraint(Constraint):
    """
    For clients: the size at this dimension must be within 'vr' (which
    specifies a lower and upper bound, inclusive-inclusive) AND it
    must be non-negative and should not be 0 or 1 (but see NB below).

    For backends: there must not be any guards on this dimension which
    are not implied by the given lower and upper bound. Regardless of
    the lower bound, the backend can assume the size is non-negative
    and that it is not 0 or 1.

    An unbounded StrictMinMaxConstraint can be thought of as a strict version
    of "RelaxedUnspecConstraint".

    NB: Export will often unsoundly assume that a graph works for 0/1, even
    though at trace time we assumed size is not 0 or 1. The idea is that
    if we produce a graph that works for a range of values, it will be OK
    for N=0/1 too.
    """
```
But at the usage site, it looks like this.
```python
if isinstance(constraint, StrictMinMaxConstraint):
    if not (s == constraint.vr.lower == constraint.vr.upper):  # allow static constraints
        constraint_violated = True
```
I think it should look like this.
```python
if isinstance(constraint, StrictMinMaxConstraint):
    if not (s >= constraint.vr.lower and s <= constraint.vr.upper):  # allow static constraints
        constraint_violated = True
```
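A tiny worked example (using a hypothetical `VR` stand-in for the real value-range object) shows where the two checks diverge: for bounds `[2, 8]` and a traced size `s = 4`, the current equality check reports a violation while the proposed range check does not.
```python
# Hypothetical stand-in for the value-range object; not PyTorch's actual class.
from collections import namedtuple

VR = namedtuple("VR", ["lower", "upper"])

vr = VR(lower=2, upper=8)  # assumed dynamic-dimension bounds for illustration
s = 4                      # traced size

equality_check_violated = not (s == vr.lower == vr.upper)  # True
range_check_violated = not (vr.lower <= s <= vr.upper)     # False
print(equality_check_violated, range_check_violated)
```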
### Versions
```shell
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: 19.1.7 (https://github.com/llvm/llvm-project.git cd708029e0b2869e80abe31ddb175f7c35361f90)
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,982,054,195
|
`M(*[torch.from_numpy(v).to('cpu') for v in inp])` hangs when started with `multiprocessing.Process`
|
syheliel
|
open
|
[
"needs reproduction",
"module: multiprocessing",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
When run under `mp.Process(target=run_model)`, the program hangs until the timeout:
```python
import numpy as np
import torch
import multiprocessing as mp
import hanging_threads  # for debugging long time hang thread info


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.v19_0 = torch.nn.Parameter(torch.empty([1, 22, 1, 2, 2], dtype=torch.float32), requires_grad=True)
        self.m3 = torch.nn.PReLU(num_parameters=1)
        self.m16 = torch.nn.Conv2d(2, 1, kernel_size=(1, 2), stride=(1, 1))
        self.m18 = torch.nn.Linear(in_features=2, out_features=1, bias=True)

    def forward(self, *args):
        v19_0 = self.v19_0
        getitem = args[0]
        atan = torch.atan(getitem)
        m3 = self.m3(v19_0)
        sub = torch.sub(m3, atan)
        sum_1 = torch.rand_like(getitem).sum(1)
        div = torch.div(sum_1, torch.rand_like(getitem))
        sin = torch.sin(div)
        where = torch.where(torch.rand_like(sub) > 0.5, sub, sin)
        m16 = self.m16(where.sum(1))
        m18 = self.m18(where)
        gelu = torch._C._nn.gelu(m18)
        return (m16, gelu)


m = M()
input_0 = np.random.rand(1, 1, 2, 1, 1).astype(np.float32)
inp = [input_0]
torch_inputs = [torch.from_numpy(v).to('cpu') for v in inp]
m_out = m(*torch_inputs)


def run_model():
    try:
        m_out = m(*[torch.from_numpy(v).to('cpu') for v in inp])
    except Exception as e:
        print(f"Error occurred: {str(e)}")
        return False
    return True


if __name__ == "__main__":
    p = mp.Process(target=run_model)
    hanging_threads.start_monitoring(seconds_frozen=30)
    p.start()
    timeout = 60
    p.join(timeout)
    if p.is_alive():
        print(f"Process timed out after {timeout} seconds")
        p.terminate()
        p.join()
    elif p.exitcode != 0:
        print(f"Process failed with exit code {p.exitcode}")
    else:
        print("Process completed successfully")
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (conda-forge gcc 12.1.0-17) 12.1.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 4.0.0
Libc version: glibc-2.39
Python version: 3.10.16 | packaged by conda-forge | (main, Apr 8 2025, 20:53:32) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 32
Stepping: 4
BogoMIPS: 4190.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti ssbd ibrs ibpb stibp tpr_shadow ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat vnmi pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 704 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.6.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] intel-extension-for-pytorch 2.6.0 pypi_0 pypi
[conda] numpy 2.2.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @VitalyFedyunin @albanD
| true
|
2,982,050,287
|
Assertion Failure: TestMathBitsCPU conj_view tests
|
rahultrada
|
closed
|
[
"module: tests",
"module: complex",
"module: correctness (silent)",
"module: arm"
] | 1
|
NONE
|
### 🐛 Describe the bug
Under test class `TestMathBitsCPU`, the following four tests
```
test_conj_view__refs_dot_cpu_complex64
test_conj_view__refs_vdot_cpu_complex64
test_neg_conj_view__refs_dot_cpu_complex128
test_neg_conj_view__refs_vdot_cpu_complex128
```
are failing with a similar error on `aarch64`
```
AssertionError: Scalars are not close! Expected 0j but got (11.81559195439695-74.26404405406299j). Absolute difference: 75.19811468711579 (up to 1e-07 allowed) Relative difference: inf (up to 1e-07 allowed)
```
The error is not encountered in CI.
Repro steps:
1. Install nightly torch `2.8.0.dev20250408+cpu`
2. Install CI requirements `pip install -r .ci/docker/requirements-ci.txt`
3. Run `python test/test_ops.py TestCommonCPU.test_conj_view__refs_dot_cpu_complex64` (replace test name with any of the 4 mentioned above)
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250408+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1024-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 1 MiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.13.0
[pip3] torch==2.8.0.dev20250408+cpu
[conda] Could not collect
```
cc @mruberry @ZainRizvi @ezyang @anjali411 @dylanbespalko @nikitaved @amjames @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,982,042,450
|
Assertion Failure: TestCommonCPU complex64 tests
|
rahultrada
|
closed
|
[
"module: tests",
"module: arm"
] | 1
|
NONE
|
### 🐛 Describe the bug
Under test class `TestCommonCPU`, the following four tests
```
test_python_ref__refs_linalg_vecdot_cpu_complex64
test_python_ref_torch_fallback__refs_dot_cpu_complex64
test_python_ref_torch_fallback__refs_linalg_vecdot_cpu_complex64
test_python_ref_torch_fallback__refs_vdot_cpu_complex64
```
are failing with a similar error on aarch64
```
AssertionError: tensor(False) is not true : Reference result was farther (45.67470518976121) from the precise computation than the torch result was (0.0)!
```
The error is not encountered in CI.
Repro steps:
1. Install nightly torch `2.8.0.dev20250408+cpu`
2. Install CI requirements `pip install -r .ci/docker/requirements-ci.txt`
3. Run `python test/test_ops.py TestCommonCPU.test_python_ref__refs_linalg_vecdot_cpu_complex64` (replace test name with any of the 4 mentioned above)
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250408+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1024-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 1 MiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.13.0
[pip3] torch==2.8.0.dev20250408+cpu
[conda] Could not collect
```
cc @mruberry @ZainRizvi @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,982,031,937
|
Assertion Failure: TestCommonCPU complex128 tests
|
rahultrada
|
open
|
[
"module: tests",
"triaged",
"module: complex",
"module: third_party",
"module: correctness (silent)",
"module: arm"
] | 4
|
NONE
|
### 🐛 Describe the bug
Under test class `TestCommonCPU`, the following four tests
```
test_python_ref__refs_linalg_vecdot_cpu_complex128
test_python_ref_torch_fallback__refs_dot_cpu_complex128
test_python_ref_torch_fallback__refs_linalg_vecdot_cpu_complex128
test_python_ref_torch_fallback__refs_vdot_cpu_complex128
```
are failing with a similar error on `aarch64`
```
AssertionError: Scalars are not close! Expected 0j but got (-32.10686702180997+10.451870383282637j). Absolute difference: 33.765255877382735 (up to 1e-07 allowed) Relative difference: inf (up to 1e-07 allowed)
```
**Edit**: There are more test failures posted in a comment below
The error is not encountered in CI.
Repro steps:
1. Install nightly torch `2.8.0.dev20250408+cpu`
2. Install CI requirements `pip install -r .ci/docker/requirements-ci.txt`
3. Run `python test/test_ops.py TestCommonCPU.test_python_ref__refs_linalg_vecdot_cpu_complex128` (replace test name with any of the 4 mentioned above)
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250408+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1024-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 1 MiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.13.0
[pip3] torch==2.8.0.dev20250408+cpu
[conda] Could not collect
```
cc @mruberry @ZainRizvi @ezyang @anjali411 @dylanbespalko @nikitaved @amjames @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,982,014,122
|
Document garbage_collection_threshold default
|
neoncube2
|
open
|
[
"module: docs",
"module: cuda",
"triaged"
] | 1
|
NONE
|
### 📚 The doc issue
It'd be nice if https://pytorch.org/docs/stable/notes/cuda.html#memory-management documented the default for `garbage_collection_threshold`
### Suggest a potential alternative/fix
I _think_ the default is `1.0`
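As a hedged example only, the threshold can be set explicitly via the allocator config instead of relying on the undocumented default (the `0.8` here is an arbitrary illustration, not the default being asked about):
```python
# Set the CUDA caching-allocator option before torch initializes the allocator.
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "garbage_collection_threshold:0.8"

import torch  # imported after setting the env var so the allocator picks it up
```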
cc @svekars @sekyondaMeta @AlannaBurke @ptrblck @msaroufim @eqy
| true
|
2,981,985,194
|
Elastic training crashes on killed agent
|
andreacarrara-polimi
|
open
|
[
"oncall: distributed",
"triaged"
] | 7
|
NONE
|
### 🐛 Describe the bug
I'm trying to use Elastic to handle nodes joining or leaving during training. My setup runs two EC2 instances (Ubuntu 24.04, g4dn.xlarge, NVIDIA Tesla T4, driver 550, PyTorch in a venv). My script is minimal, reproducible and attached [here](https://gist.github.com/andreacarrara-polimi/9ee271977693c922a52548a8bab845fe). It's a simplified version of [this example](https://github.com/pytorch/elastic/blob/master/examples/imagenet/main.py). Each node runs:
```bash
torchrun \
--nproc-per-node=1 \
--max-restarts=3 \
--node-rank=0|1 \
--nnodes=1:2 \
--rdzv-id=001 \
--rdzv-backend=c10d \
--rdzv-endpoint=<IP> \
script.py 50 10 --batch_size 32
```
Each node launches one agent, which manages one worker. The node with `--node-rank=0` acts as the rendezvous server. If I kill its process, the training crashes as expected since it's a single point of failure. However, the problem is with the other node. Killing its worker results in correct behavior as the agent restarts it up to the value of `--max-restarts`. But when I kill its agent, the training crashes instead of continuing with the rendezvous node only. The full traceback of the exception is included below.
On the rendezvous node:
```
[rank1]:[E409 07:28:50.917104566 ProcessGroupNCCL.cpp:552] [Rank 1] Collective WorkNCCL(SeqNum=632, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) raised the following async exception: NCCL error: remote process exited or there was a network error, NCCL version 2.21.5
ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
Last error:
socketProgress: Connection closed by remote peer ip-172-31-36-217.eu-north-1.compute.internal<59116>
Exception raised from checkForNCCLErrorsInternal at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2363 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7861000c31b6 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::shared_ptr<c10d::NCCLComm>&) + 0x220 (0x7860ad5f61c0 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7860ad5fe64b in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::watchdogHandler() + 0x650 (0x7860ad600590 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7860ad6016ed in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #5: <unknown function> + 0x145c0 (0x78610022e5c0 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch.so)
frame #6: <unknown function> + 0x9caa4 (0x786100c9caa4 in /lib/x86_64-linux-gnu/libc.so.6)
frame #7: <unknown function> + 0x129c3c (0x786100d29c3c in /lib/x86_64-linux-gnu/libc.so.6)
[rank1]:[E409 07:28:50.919638087 ProcessGroupNCCL.cpp:2168] [PG ID 0 PG GUID 0(default_pg) Rank 1] failure detected by watchdog at work sequence id: 632 PG status: last enqueued work: 632, last completed work: 631
[rank1]:[E409 07:28:50.919656087 ProcessGroupNCCL.cpp:667] Stack trace of the failed collective not found, potentially because FlightRecorder is disabled. You can enable it by setting TORCH_NCCL_TRACE_BUFFER_SIZE to a non-zero value.
Started epoch 19
[rank1]:[E409 07:28:50.740369821 ProcessGroupNCCL.cpp:681] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank1]:[E409 07:28:50.740390368 ProcessGroupNCCL.cpp:695] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[rank1]:[E409 07:28:50.740431045 ProcessGroupNCCL.cpp:1895] [PG ID 0 PG GUID 0(default_pg) Rank 1] Process group watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.21.5
ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
Last error:
socketProgress: Connection closed by remote peer ip-172-31-36-217.eu-north-1.compute.internal<59116>
Exception raised from checkForNCCLErrorsInternal at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2363 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7861000c31b6 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::shared_ptr<c10d::NCCLComm>&) + 0x220 (0x7860ad5f61c0 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7860ad5fe64b in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::watchdogHandler() + 0x650 (0x7860ad600590 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7860ad6016ed in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #5: <unknown function> + 0x145c0 (0x78610022e5c0 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch.so)
frame #6: <unknown function> + 0x9caa4 (0x786100c9caa4 in /lib/x86_64-linux-gnu/libc.so.6)
frame #7: <unknown function> + 0x129c3c (0x786100d29c3c in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 0 PG GUID 0(default_pg) Rank 1] Process group watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.21.5
ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
Last error:
socketProgress: Connection closed by remote peer ip-172-31-36-217.eu-north-1.compute.internal<59116>
Exception raised from checkForNCCLErrorsInternal at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2363 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7861000c31b6 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::shared_ptr<c10d::NCCLComm>&) + 0x220 (0x7860ad5f61c0 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7860ad5fe64b in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::watchdogHandler() + 0x650 (0x7860ad600590 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7860ad6016ed in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #5: <unknown function> + 0x145c0 (0x78610022e5c0 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch.so)
frame #6: <unknown function> + 0x9caa4 (0x786100c9caa4 in /lib/x86_64-linux-gnu/libc.so.6)
frame #7: <unknown function> + 0x129c3c (0x786100d29c3c in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1901 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7861000c31b6 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe5c6fc (0x7860ad25c6fc in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x78610022e5c0 in /home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x9caa4 (0x786100c9caa4 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x129c3c (0x786100d29c3c in /lib/x86_64-linux-gnu/libc.so.6)
E0409 07:28:52.647000 1312 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: -6) local_rank: 0 (pid: 1318) of binary: /home/ubuntu/prototype-venv/bin/python3
Traceback (most recent call last):
File "/home/ubuntu/prototype-venv/bin/torchrun", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/run.py", line 918, in main
run(args)
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 260, in launch_agent
result = agent.run()
^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 137, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
result = self._invoke_run(role)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 899, in _invoke_run
self._restart_workers(self._worker_group)
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 137, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 702, in _restart_workers
self._initialize_workers(worker_group)
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 137, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 683, in _initialize_workers
self._rendezvous(worker_group)
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 137, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 500, in _rendezvous
rdzv_info = spec.rdzv_handler.next_rendezvous()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1162, in next_rendezvous
self._op_executor.run(join_op, deadline, self._get_deadline)
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 676, in run
raise RendezvousClosedError
torch.distributed.elastic.rendezvous.api.RendezvousClosedError
```
On the other node:
```
W0409 07:28:49.530000 1359 torch/distributed/elastic/agent/server/api.py:719] Received 15 death signal, shutting down workers
W0409 07:28:49.531000 1359 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1365 closing signal SIGTERM
Traceback (most recent call last):
File "/home/ubuntu/prototype-venv/bin/torchrun", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/run.py", line 918, in main
run(args)
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 260, in launch_agent
result = agent.run()
^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 137, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
result = self._invoke_run(role)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 870, in _invoke_run
time.sleep(monitor_interval)
File "/home/ubuntu/prototype-venv/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 84, in _terminate_process_handler
raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 1359 got signal: 15
```
This happens regardless of whether I use `CTRL-C` or `kill <PID>`. I’ve reviewed several related issues and discussions, like [67616](https://github.com/pytorch/pytorch/issues/67616), [67742](https://github.com/pytorch/pytorch/issues/67742), [147064](https://github.com/pytorch/pytorch/issues/147064) and [this post](https://discuss.pytorch.org/t/training-process-is-terminated-when-node-fails-for-torch-elastic/135580). None of them address this scenario. Let me know if this behavior is expected or if I’m missing something. I’m happy to provide more details if needed.
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1026-aws-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 7
BogoMIPS: 4999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 2 MiB (2 instances)
L3 cache: 35.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,981,923,995
|
`module.compile()` behaves differently from `torch.compile(module)`
|
GdoongMathew
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
`module.compile(backend=custom_backend)`'s forward method returns a different result from `torch.compile(module)`, especially when using `Conv2d`.
## Example
```python
import torch


def print_fx_graph(graph: torch.fx.GraphModule, example_inputs: list[torch.Tensor]):
    print("Current graph:")
    graph.graph.print_tabular()
    return graph.forward


class Custom(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.module = torch.nn.Conv2d(3, 3, 3)

    def forward(self, x):
        return self.module(x)


if __name__ == "__main__":
    module = torch.nn.Conv2d(3, 3, 3)
    module.cuda().eval()
    image = torch.zeros((2, 3, 768, 768), device="cuda")

    torch._dynamo.reset()
    module.compile(backend=print_fx_graph)
    print(module(image).shape)

    torch._dynamo.reset()
    custom_module = Custom()
    custom_module.cuda().eval()
    custom_module.compile(backend=print_fx_graph)
    print(custom_module(image).shape)
```
>>>
```terminal
torch.Size([2, 3, 766, 766])
Current graph:
opcode name target args kwargs
------------- ---------------------------------------- --------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------- --------
placeholder l_self_modules_module_parameters_weight_ L_self_modules_module_parameters_weight_ () {}
placeholder l_self_modules_module_parameters_bias_ L_self_modules_module_parameters_bias_ () {}
placeholder l_x_ L_x_ () {}
call_function conv2d <built-in method conv2d of type object at 0x7fa3e7cd8f40> (l_x_, l_self_modules_module_parameters_weight_, l_self_modules_module_parameters_bias_, (1, 1), (0, 0), (1, 1), 1) {}
output output output ((conv2d,),) {}
torch.Size([2, 3, 766, 766])
```
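For comparison, a sketch of the same bare `Conv2d` wrapped with `torch.compile(...)` instead of `module.compile(...)` (reusing `print_fx_graph` from the example above); per the title, this is the form I expect to behave identically:

```python
# sketch only: same bare Conv2d, but wrapped via torch.compile(...) rather than module.compile(...)
import torch

module = torch.nn.Conv2d(3, 3, 3).cuda().eval()
image = torch.zeros((2, 3, 768, 768), device="cuda")

torch._dynamo.reset()
opt_module = torch.compile(module, backend=print_fx_graph)  # print_fx_graph defined above
print(opt_module(image).shape)
```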
### Error logs
_No response_
### Versions
<details>
<summary>Environment Info</summary>
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU
Nvidia driver version: 555.99
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 6900HS with Radeon Graphics
CPU family: 25
Model: 68
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
BogoMIPS: 6587.62
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable non
stop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs
ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,981,910,226
|
fix shard tensor gather when a local tensor on certain ranks has zero elements
|
tiankongdeguiji
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (sharded)"
] | 9
|
CONTRIBUTOR
|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,981,906,172
|
[Docs] Clarify behavior when integer dtype is used with requires_grad=True in `tensor.to()`
|
shink
|
closed
|
[
"open source",
"Merged",
"release notes: python_frontend",
"topic: docs"
] | 5
|
CONTRIBUTOR
|
Fixes #150618
Related comment: https://github.com/pytorch/pytorch/issues/3226#issuecomment-489362234
| true
|
2,981,904,285
|
using torch.compile with torchao at the same time cause stack overflow error
|
zhangvia
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
### 🐛 Describe the bug
When using torch.compile and torchao at the same time, there will be a stack overflow error. Related issue: [1775](https://github.com/pytorch/ao/issues/1775)
This issue appears to be related to a specific Python version and CUDA driver version.
The error happens with:
python:3.10.0
torch:2.6.0+cu124
torchao:0.8.0+cu124
cuda driver:535.161.07
The error disappears with:
python:3.11.11
torch:2.6.0+cu124
torchao:0.8.0+cu124
cuda driver:535.161.07
### Versions
```python
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.2.14-050214-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
GPU 4: NVIDIA GeForce RTX 4090
GPU 5: NVIDIA GeForce RTX 4090
GPU 6: NVIDIA GeForce RTX 4090
GPU 7: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.161.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz
Stepping: 7
CPU MHz: 3489.838
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 1.1 MiB
L1i cache: 1.1 MiB
L2 cache: 36 MiB
L3 cache: 49.5 MiB
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] lion-pytorch==0.2.2
[pip3] numpy==1.23.0
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.15.0
[pip3] onnx-tool==0.9.0
[pip3] onnxruntime-gpu==1.18.1
[pip3] onnxsim==0.4.36
[pip3] open-clip-torch==2.24.0
[pip3] pytorch-lightning==2.2.1
[pip3] pytorch-minimize==0.0.2
[pip3] torch==2.5.1+cu124
[pip3] torch-optimi==0.2.1
[pip3] torch_tensorrt==2.3.0+cu118
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.1+cu124
[pip3] torchgeometry==0.1.2
[pip3] torchmetrics==1.3.2
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] lion-pytorch 0.2.2 pypi_0 pypi
[conda] numpy 1.23.0 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] open-clip-torch 2.24.0 pypi_0 pypi
[conda] pytorch-lightning 2.2.1 pypi_0 pypi
[conda] pytorch-minimize 0.0.2 pypi_0 pypi
[conda] torch 2.5.1+cu124 pypi_0 pypi
[conda] torch-optimi 0.2.1 pypi_0 pypi
[conda] torch-tensorrt 2.3.0+cu118 pypi_0 pypi
[conda] torchao 0.9.0 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchgeometry 0.1.2 pypi_0 pypi
[conda] torchmetrics 1.3.2 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,981,764,194
|
[dynamo][invoke_subgraph] Use FxGraphModule comparison instead of hashing
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150717
* #151256
* __->__ #150911
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,981,745,425
|
[DO NOT MERGE] Throwaway changes
|
mlazos
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150910
* #152390
* #150909
* #150907
* #151406
* #150906
more throwaway
| true
|
2,981,745,262
|
[Inductor] Fix cuda_template.py typing
|
mlazos
|
closed
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #152390
* __->__ #150909
* #150907
* #151406
* #150906
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,744,853
|
[Inductor] Fix cuda_kernel typing
|
mlazos
|
closed
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #152390
* #150909
* __->__ #150908
* #150907
* #151406
* #150906
* #151713
* #151405
* #150905
* #152306
* #152305
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,744,709
|
[Cutlass] Changes to gemm template for EVT
|
mlazos
|
open
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152815
* __->__ #150907
* #151406
* #150906
* #152733
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,744,562
|
[Cutlass] Integrate EVT into CUDACPPScheduling
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152815
* #150907
* #151406
* __->__ #150906
* #152733
Previously merged:
* #151713
* #151405
* #150905
* #152306
* #152305
Allow epilogue nodes in cuda combined scheduling
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,744,441
|
[Cutlass] Implement cutlass epilogue visitor python codegen
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9
|
CONTRIBUTOR
|
This PR implements the second codegen task of CUTLASS EVT: translating inductor epilogue nodes into python code that will be traced by the EVT infra.
Details:
The implementation uses a simple ops wrapper which only supports add and mul pointwise ops today (to be extended in the future). This ops wrapper generates python code from inner_fn of the epilogue nodes in the format EVT expects. The main caveat is that one of the outputs needs to be named "D" and the accumulator input needs to be named "acc". Reads/writes are named according to the inductor buffer names otherwise.
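A minimal sketch of the idea described above (illustrative only; class and function names here are hypothetical, not the actual inductor implementation): an ops wrapper that renders add/mul pointwise expressions as python source, using the "acc"/"D" naming convention EVT expects.

```python
# illustrative sketch only: add/mul ops wrapper that emits EVT-style python expressions
class EVTOps:
    @staticmethod
    def add(a: str, b: str) -> str:
        return f"({a} + {b})"

    @staticmethod
    def mul(a: str, b: str) -> str:
        return f"({a} * {b})"


def epilogue_inner_fn(ops, acc: str, buf_bias: str) -> str:
    # a bias-add epilogue expressed through the ops wrapper
    return ops.add(acc, buf_bias)


print("D =", epilogue_inner_fn(EVTOps, "acc", "buf_bias"))  # D = (acc + buf_bias)
```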
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #152390
* #150909
* #150908
* #150907
* #151406
* #150906
* #151713
* #151405
* __->__ #150905
* #152306
* #152305
Previously merged:
* #150904
* #150903
* #150346
* #150345
* #150344
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,744,335
|
[Cutlass] Implement EVT example tensor creation
|
mlazos
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
This PR implements a translation layer from inductor IR to "example tensors", the expected arguments of the EVT tracer. These tensors basically store the name, shape, stride, and dtype of the tensor and allow an AST-based python parser to generate the EVT C++.
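A sketch of what such an "example tensor" carries (hypothetical class for illustration, not the actual cutlass/inductor type): only the metadata the EVT tracer needs, never real data.

```python
# illustrative sketch only: metadata-only "example tensor" record
from dataclasses import dataclass

import torch


@dataclass(frozen=True)
class ExampleTensor:
    name: str
    shape: tuple[int, ...]
    stride: tuple[int, ...]
    dtype: torch.dtype


acc = ExampleTensor("acc", shape=(128, 128), stride=(128, 1), dtype=torch.float16)
print(acc)
```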
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #150909
* #150908
* #150907
* #151406
* #150906
* #151713
* #151405
* #150905
* __->__ #150904
Updates to example tensor creation.
Previously merged:
* https://github.com/pytorch/pytorch/pull/150903
* https://github.com/pytorch/pytorch/pull/150346
* https://github.com/pytorch/pytorch/pull/150345
* https://github.com/pytorch/pytorch/pull/150344
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,744,221
|
[Cutlass] Implement Epilogue Argument emitter
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 20
|
CONTRIBUTOR
|
This implements epilogue visitor tree argument generation (example type [here](https://github.com/NVIDIA/cutlass/blob/3fe62887d8dd75700fdaf57f9c181878701b0802/include/cutlass/epilogue/fusion/sm90_callbacks_tma_warpspecialized.hpp#L332)).
Details:
The codegen task here is to implement a function which can generate a tree of C++ structs and properly extract the correct properties from Inductor buffers and write them to the correct locations in the generated struct. To implement this with the minimum amount of code, I generate the cutlass DAGIR (the EVT internal representation), which specifically has a pass, [pass_argument_type.py](https://github.com/NVIDIA/cutlass/blob/5e497243f7ad13a2aa842143f9b10bbb23d98292/python/cutlass/backend/evt/passes/pass_argument_type.py#L4), which generates a nested tree of custom argument types for each node in the DAGIR. This nested tree of constructors is then passed kwargs to fill in the proper values, where the node's name is used to differentiate between different values in the kwarg dictionary. This, however, is non-customizable; the nested tree of EVT args is a nested tree of ctypes which looks for *actual values* so that this object can be passed directly to the cutlass-python C++ runner. Inductor, on the other hand, needs to fill this struct with string C++ expressions representing the values (or extracting the values from kernel launcher args). So `_render_argument_type` implements this: it iterates over the tree of types created by pass_argument_type.py and generates a string representing the nested structs, filling in C++ expressions representing the different fields.
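To make the shape of that output concrete, here is an illustrative sketch (not `_render_argument_type` itself; all names are hypothetical) of walking a nested tree of argument "types" and emitting a C++ aggregate-initializer string whose leaves are C++ expressions rather than actual values.

```python
# illustrative sketch only: render a nested struct initializer from a tree of field names
def render_args(node, cpp_exprs):
    if isinstance(node, dict):  # nested struct: {field_name: subtree}
        fields = ", ".join(f"/*{k}*/ {render_args(v, cpp_exprs)}" for k, v in node.items())
        return "{" + fields + "}"
    return cpp_exprs[node]      # leaf: C++ expression string for this node name


tree = {"alpha": "alpha", "bias": {"ptr": "buf1", "stride": "buf1_stride"}}
exprs = {"alpha": "1.0f", "buf1": "(float*)(arg2_1.data_ptr())", "buf1_stride": "s0"}
print(render_args(tree, exprs))
# {/*alpha*/ 1.0f, /*bias*/ {/*ptr*/ (float*)(arg2_1.data_ptr()), /*stride*/ s0}}
```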
Long term plan:
Long term, I will ask NVIDIA to provide an overridable [visitor_factory](https://github.com/NVIDIA/cutlass/blob/5e497243f7ad13a2aa842143f9b10bbb23d98292/python/cutlass/backend/evt/passes/pass_argument_type.py#L82) which would allow us to override the behavior of pass_argument_type.py to generate the string we would like during DAGIR generation.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #150909
* #150908
* #150907
* #151406
* #150906
* #151405
* #150905
* #150904
* __->__ #150903
Previously merged:
* #150346
* #150345
* #150344
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,724,861
|
DISABLED test_parity__foreach_acos_fastpath_inplace_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_inplace_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40221902832).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_inplace_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,981,649,287
|
[llvm] strong_type.h: error: is_arithmetic cannot be specialized: Users are not allowed to specialize this standard library entity
|
atupone
|
open
|
[
"needs reproduction",
"module: build",
"triaged"
] | 1
|
CONTRIBUTOR
|
I got a report (https://bugs.gentoo.org/953366) that clang++ is complaining about is_arithmetic usage here.
I'm not able to reproduce it.
cc @malfet @seemethere
| true
|
2,981,480,620
|
Windows Preview (Nightly) does not support Nvidia 5090D
|
monkeycc
|
closed
|
[] | 4
|
NONE
|
Windows 11
cuda 12.8
cudnn 9.8
`pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128`
```
torch 2.8.0.dev20250407+cu128
torchaudio 2.6.0.dev20250408+cu128
torchvision 0.22.0.dev20250408+cu128
```
```
Package Version
------------------ ------------------------
certifi 2025.1.31
charset-normalizer 3.4.1
colorama 0.4.6
coloredlogs 15.0.1
contourpy 1.3.1
cycler 0.12.1
filelock 3.18.0
flatbuffers 25.2.10
fonttools 4.57.0
fsspec 2025.3.2
humanfriendly 10.0
idna 3.10
Jinja2 3.1.6
kiwisolver 1.4.8
MarkupSafe 3.0.2
matplotlib 3.10.1
mpmath 1.3.0
networkx 3.4.2
numpy 2.1.1
onnx-weekly 1.18.0.dev20250407
onnxruntime 1.21.0
onnxruntime-gpu 1.21.0
opencv-python 4.11.0.86
packaging 24.2
pandas 2.2.3
pillow 11.0.0
pip 25.0
protobuf 6.30.2
psutil 7.0.0
py-cpuinfo 9.0.0
pyparsing 3.2.3
pyreadline3 3.5.4
python-dateutil 2.9.0.post0
pytz 2025.2
PyYAML 6.0.2
requests 2.32.3
scipy 1.15.2
seaborn 0.13.2
setuptools 75.8.0
six 1.17.0
sympy 1.13.3
torch 2.8.0.dev20250407+cu128
torchaudio 2.6.0.dev20250408+cu128
torchvision 0.22.0.dev20250408+cu128
tqdm 4.67.1
typing_extensions 4.13.1
tzdata 2025.2
ultralytics 8.3.104
ultralytics-thop 2.0.14
urllib3 2.3.0
wheel 0.45.1
```
```python
import torch

# Check if CUDA (GPU support) is available
print(f"Is CUDA available: {torch.cuda.is_available()}")

# If available, print current GPU device information
if torch.cuda.is_available():
    print(f"Current GPU device: {torch.cuda.current_device()}")
    print(f"Device name: {torch.cuda.get_device_name(0)}")
    print(f"CUDA version: {torch.version.cuda}")
else:
    print("CUDA is not available, PyTorch will run on CPU")

print(torch.cuda.is_available())  # Should return True
print(torch.version.cuda)         # Check CUDA version
```
```
Is CUDA available: False
CUDA is not available, PyTorch will run on CPU
False
12.8
```
| true
|
2,981,467,342
|
[torch.compile] handle a custom __delattr__ method correctly
|
SandishKumarHN
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Fixes #150765
- handle a custom `__delattr__` method correctly
Test:
```python
import torch


class MyObject:
    def __init__(self, val):
        self.val = val
        # Flag to track deletion attempts instead of using print
        self.deletion_attempted = False

    def __delattr__(self, attr):
        if attr == "val":
            # Set flag instead of printing
            self.deletion_attempted = True
        else:
            super().__delattr__(attr)


@torch.compile(fullgraph=True, backend="eager")
def test(input_tensor):
    instance_a = MyObject(1)
    instance_b = MyObject(2)

    del instance_a.val
    del instance_b.val

    exists_a = hasattr(instance_a, 'val')
    exists_b = hasattr(instance_b, 'val')
    deletion_attempted_a = instance_a.deletion_attempted
    deletion_attempted_b = instance_b.deletion_attempted

    return input_tensor + 1, exists_a, exists_b, deletion_attempted_a, deletion_attempted_b


# Run the test
result = test(torch.ones(1))
print(f"Result tensor: {result[0]}")
print(f"val attribute still exists on instance_a: {result[1]}")
print(f"val attribute still exists on instance_b: {result[2]}")
print(f"Deletion was attempted on instance_a: {result[3]}")
print(f"Deletion was attempted on instance_b: {result[4]}")
```
output:
```
(base) sany@sandishs-Laptop pytorch % python3 test_delattr_fix.py
Result tensor: tensor([2.])
val attribute still exists on instance_a: True
val attribute still exists on instance_b: True
Deletion was attempted on instance_a: True
Deletion was attempted on instance_b: True
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,981,418,143
|
[device_mesh] replace dim_group_info with group_name
|
wanchaol
|
open
|
[
"oncall: distributed",
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150897
* __->__ #150898
* #150896
As titled: there's no need to maintain a `dim_group_info` anymore; we can simply maintain a list of `group_name`s instead. This will simplify the logic.
cc @H-Huang @awgu @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,981,418,058
|
[device_mesh] improve device selection logic
|
wanchaol
|
open
|
[
"oncall: distributed",
"module: cpu",
"open source",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150897
* #150898
* #150896
As titled, this PR improves the device selection logic when the user did not set the device before calling the DeviceMesh constructor. As a device manager, DeviceMesh should try to set the device for users in a sensible way.
The behavior of set_device before:
* If the user calls `init_process_group` to init a world process group, we assume the user already called set_device, and we don't set the device for them.
* If the user does not init a world process group themselves, we init a world process group for them and follow a heuristic to set the device.
This is OK, but sometimes the set_device heuristic doesn't work well (i.e. if the user uses `TORCH_CUDA_VISBILE_DEVICES`).
So this PR improves the device selection logic to:
* If the user calls `init_process_group` to init a world process group **and the default cuda context is initialized**, then we assume the user must have called some cuda operation before and therefore must have selected the device themselves.
* If not the above, then we check whether the env vars from the launcher (i.e. torchrun) contain "LOCAL_RANK" and "WORLD_SIZE"; if so, we use "LOCAL_RANK" to set the device for the current process, which is a very standard practice (see the sketch after this list). This solves the `TORCH_CUDA_VISBILE_DEVICES` issue.
* If not the above, then we fall back to the old heuristic.
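A minimal sketch of the LOCAL_RANK-based path (illustrative only, not the exact implementation in this PR):

```python
# sketch: if a torchrun-style launcher set LOCAL_RANK/WORLD_SIZE, use LOCAL_RANK for set_device
import os

import torch

if "LOCAL_RANK" in os.environ and "WORLD_SIZE" in os.environ:
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
```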
cc @H-Huang @awgu @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
2,981,417,968
|
Fix DTensorTestBase to barrier with device ids
|
wanchaol
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150897
* #150898
* __->__ #150896
try to get rid of the below annoying warnings when running the unit tests
cc @H-Huang @awgu @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,981,296,976
|
Revert "[ATen][CUDA] Implement 128 bit vectorization v2 (#145746)"
|
malfet
|
closed
|
[
"ciflow/periodic",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
This reverts commit e84bf88dde509d44175a0a1c00cec13c9926843e.
Fixes #ISSUE_NUMBER
| true
|
2,981,254,784
|
[aotinductor] fix std::{min.max} compilation error for sympy expr with multiple args
|
ColinPeppler
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 20
|
CONTRIBUTOR
|
### Compilation error
The issue is that u0 (an unbacked symint) can come from a smaller int dtype e.g. int16, int32.
```
error: no matching function for call to ‘min(int64_t&, short int&)’
759 | call_add_kernel_with_scaling_0(... std::min(100L, s97, u0) ...);
```
### Diff
The fix is to explicitly specify `int64_t` in the std::min template.
```
int64_t s97 = arg0_1_size[0];
int16_t u0_raw; # not a long
auto u0 = u0_raw;
# Before
std::min({100L, s97, u0})
# After
std::min<int64_t>({100L, s97, u0})
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151032
* __->__ #150894
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72858987](https://our.internmc.facebook.com/intern/diff/D72858987)
| true
|
2,981,207,048
|
Hipify global scrach defintion in AOTI codegen
|
zoranzhao
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
MEMBER
|
Summary: as titled. A refactor is very much needed, I think, or at least we should unify the internal/external AOTI wrapper hipification methods.
Test Plan: P1780296121
Differential Revision: D72683568
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,191,407
|
Fix inplacing with multiple, fused uses
|
pytorchbot
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150845
We had `can_inplace` defined on a single use. When that buffer has multiple uses inside a fused node, we need to check whether the other accesses use the same index; otherwise we may read memory that has already been written to by the inplacing (see the sketch below).
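An illustrative sketch of the hazard (plain eager code, not inductor internals): a "fused" computation that reads the same buffer at two different indices. If the output aliases the input, the wrapped-around read observes a value that was already overwritten.

```python
# sketch: out[i] = buf[i] * 2 + buf[(i + 1) % 4]; inplacing corrupts the shifted read
import torch

buf = torch.arange(4.0)

safe = torch.empty_like(buf)
for i in range(4):
    safe[i] = buf[i] * 2 + buf[(i + 1) % 4]

aliased = buf  # pretend can_inplace allowed reusing buf as the output
for i in range(4):
    aliased[i] = buf[i] * 2 + buf[(i + 1) % 4]

print(safe)     # tensor([1., 4., 7., 6.])
print(aliased)  # tensor([1., 4., 7., 7.]) -- buf[0] was clobbered before the last read
```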
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,189,896
|
[ONNX] How to export Llama4
|
srijanie03
|
open
|
[
"module: onnx",
"triaged"
] | 19
|
NONE
|
### 🐛 Describe the bug
I am trying to do an onnx export for the Llama 4 Scout model but it fails saying:
`RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DynamicCache`
The error traceback:
```
Traceback (most recent call last):
File "/proj/work/sdey/examples/llama4/llama4_scout.py", line 80, in <module>
torch.onnx.export(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/__init__.py", line 375, in export
export(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 502, in export
_export(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 997, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/proj/work/sdey//venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 904, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 1500, in _get_trace_graph
outs = ONNXTracedModule(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 139, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/proj/work/sdey/venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 133, in wrapper
out_vars, _ = _flatten(outs)
RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DynamicCache
```
This occurs for higher versions of `transformers` (> 4.44.2).
Code to reproduce:
```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText, pipeline

processor = AutoProcessor.from_pretrained("meta-llama/Llama-4-Scout-17B-16E")
model = AutoModelForImageTextToText.from_pretrained("meta-llama/Llama-4-Scout-17B-16E", torch_dtype=torch.bfloat16)

url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": url1},
            {"type": "image", "url": url2},
            {"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["pixel_values"], inputs["attention_mask"]),
    "llama4_scout.onnx",
    do_constant_folding=False,
    training=torch.onnx.TrainingMode.EVAL,
    export_params=False,
)
```
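One avenue I have not verified for Llama 4 but that may be worth trying (this is an assumption on my part, not a confirmed fix): the dynamo-based export path, which does not go through the legacy JIT tracer that rejects `DynamicCache` outputs.

```python
# assumption/sketch only: reuse `model` and `inputs` from the snippet above and export via the
# dynamo-based exporter; unverified whether it handles Llama 4's cache outputs
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["pixel_values"], inputs["attention_mask"]),
    "llama4_scout.onnx",
    dynamo=True,
)
```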
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: (GCC) 13.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1019-nvidia-64k-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GH200 480GB
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Vendor ID: ARM
Model name: Neoverse-V2
Model: 0
Thread(s) per core: 1
Core(s) per socket: 72
Socket(s): 1
Stepping: r0p0
Frequency boost: disabled
CPU max MHz: 3375.0000
CPU min MHz: 81.0000
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti
L1d cache: 4.5 MiB (72 instances)
L1i cache: 4.5 MiB (72 instances)
L2 cache: 72 MiB (72 instances)
L3 cache: 114 MiB (1 instance)
NUMA node(s): 9
NUMA node0 CPU(s): 0-71
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):
NUMA node4 CPU(s):
NUMA node5 CPU(s):
NUMA node6 CPU(s):
NUMA node7 CPU(s):
NUMA node8 CPU(s):
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] onnx==1.17.0
[pip3] onnxruntime-training==1.20.0+cpu
[pip3] onnxscript==0.2.3
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
| true
|
2,981,184,960
|
[AO] fix per token block size calculation
|
mcr229
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: AO frontend"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150890
| true
|
2,981,184,669
|
2.7 RC: fails to install the needed `optree` dependency
|
stas00
|
closed
|
[
"high priority",
"triage review",
"module: regression",
"has workaround",
"oncall: pt2"
] | 13
|
CONTRIBUTOR
|
The same issue occurs in nightly and 2.7-RC
```
pip3 install torch --index-url https://download.pytorch.org/whl/test/cu128 -U
```
then when trying to use torch:
```
File "/code/users/stas/github/DeepSpeed/deepspeed/runtime/compiler.py", line 25, in disable
return torch.compiler.disable(func)
File "/home/yak/.local/lib/python3.10/site-packages/torch/compiler/__init__.py", line 241, in disable
import torch._dynamo
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/__init__.py", line 13, in <module>
from . import config, convert_frame, eval_frame, resume_execution
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 52, in <module>
from torch._dynamo.symbolic_convert import TensorifyState
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 57, in <module>
from . import (
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/trace_rules.py", line 32, in <module>
from .variables import (
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/variables/__init__.py", line 19, in <module>
from .base import VariableTracker
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/variables/base.py", line 619, in <module>
from . import builder
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 90, in <module>
from ..side_effects import SideEffects
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/side_effects.py", line 21, in <module>
from .codegen import PyCodegen
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/codegen.py", line 43, in <module>
from .variables.functions import (
File "/home/yak/.local/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 72, in <module>
from torch.distributed.fsdp._fully_shard import _fsdp_param_group
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/fsdp/__init__.py", line 1, in <module>
from ._flat_param import FlatParameter as FlatParameter
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 33, in <module>
from ._fsdp_extensions import (
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/fsdp/_fsdp_extensions.py", line 8, in <module>
from torch.distributed.fsdp._shard_utils import (
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/fsdp/_shard_utils.py", line 18, in <module>
from torch.distributed.tensor import DeviceMesh, DTensor, Replicate, Shard as DShard
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/tensor/__init__.py", line 4, in <module>
import torch.distributed.tensor._ops # force import all built-in dtensor ops
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/tensor/_ops/__init__.py", line 2, in <module>
from ._conv_ops import * # noqa: F403
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/tensor/_ops/_conv_ops.py", line 5, in <module>
from torch.distributed.tensor._dtensor_spec import DTensorSpec, TensorMeta
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/tensor/_dtensor_spec.py", line 6, in <module>
from torch.distributed.tensor.placement_types import (
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/tensor/placement_types.py", line 8, in <module>
import torch.distributed._functional_collectives as funcol
File "/home/yak/.local/lib/python3.10/site-packages/torch/distributed/_functional_collectives.py", line 17, in <module>
from torch.utils._cxx_pytree import tree_map_only
File "/home/yak/.local/lib/python3.10/site-packages/torch/utils/_cxx_pytree.py", line 79, in <module>
__TORCH_DICT_SESSION = optree.dict_insertion_ordered(True, namespace="torch")
AttributeError: module 'optree' has no attribute 'dict_insertion_ordered'
```
If one scrolls up, one finds this warning:
```
/home/yak/.local/lib/python3.10/site-packages/torch/utils/_pytree.py:173: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using `python3 -m pip install --upgrade 'optree>=0.13.0'`.
```
Why not have this as a proper dependency? Or replace the warning with an assert so that the user won't waste time trying to figure out what's going on?
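A sketch of what I mean by a hard check (an assumption about how it could look, not existing torch code): fail fast at import time with an actionable message instead of surfacing a late `AttributeError`.

```python
# sketch only: version guard for optree (assumes optree and packaging are installed)
import optree
from packaging.version import Version

if Version(optree.__version__) < Version("0.13.0"):
    raise ImportError(
        "optree>=0.13.0 is required for C++ pytree support "
        f"(found {optree.__version__}); run: python3 -m pip install --upgrade 'optree>=0.13.0'"
    )
```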
I have tried installing 5 other nightly versions before I found this warning, since I thought I got a lemon nightly.
This issue was filed on request from @seemethere after a slack discussion https://pytorch.slack.com/archives/C3PDTEV8E/p1744132678676049
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,981,175,085
|
[inductor] Change minimum number of SMs to 60 to let Ada use Triton GEMM backend
|
henrylhtsang
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148622
* __->__ #150888
context: https://github.com/pytorch/pytorch/issues/150390#issuecomment-2790272814
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,981,172,089
|
add logs for debugging chunk metadata
|
wconstab
|
closed
|
[
"oncall: distributed",
"release notes: distributed (checkpoint)"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150887
* #150862
* #150650
* #150490
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
2,981,166,871
|
not-for-landing add logs for debugging chunk metadata
|
teja-rao
|
open
|
[
"oncall: distributed",
"release notes: distributed (checkpoint)"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,981,149,305
|
Add basic functionality for installing params/bufs when specified
|
Lucaskabela
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150978
* __->__ #150885
* #151022
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,981,143,911
|
[Codemod][AddExplicitStrictExportForTrainingInferenceArg] caffe2/test/export
|
gmagogsfm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Differential Revision: D72667175
| true
|
2,981,107,079
|
`torch.autograd.backward` fails with single scalar `Tensor` as `inputs`
|
ValerianRey
|
closed
|
[
"high priority",
"module: autograd",
"triaged",
"actionable"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When manually specifying the `inputs` parameter of `torch.autograd.backward` to be a single **scalar** tensor, a `TypeError` is wrongly raised.
The following code works without any problem:
```python
import torch
x = torch.tensor([5.0, 6.0], requires_grad=True)
y = (x * 2).sum()
torch.autograd.backward(tensors=y, inputs=x)
```
However, if `x` is a scalar, the same call to `backward` will fail:
```python
import torch
x = torch.tensor(5.0, requires_grad=True)
y = x * 2
torch.autograd.backward(tensors=y, inputs=x)
```
produces the following error:
```
Traceback (most recent call last):
File "<python-input-0>", line 4, in <module>
torch.autograd.backward(tensors=y, inputs=x)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv-pytorch-2/lib/python3.13/site-packages/torch/autograd/__init__.py", line 328, in backward
if inputs is not None and len(inputs) == 0:
~~~^^^^^^^^
File "/home/valerian/.pyenv/versions/test-venv-pytorch-2/lib/python3.13/site-packages/torch/_tensor.py", line 1163, in __len__
raise TypeError("len() of a 0-d tensor")
TypeError: len() of a 0-d tensor
```
According to the documentation of `backward`, it should be possible to specify a single tensor as `inputs`, and there's no indication that this tensor should not be a scalar. Therefore, I think that the expected behavior would be for this to work (similarly as if we provided `inputs=[x]` instead of `inputs=x`).
I think the condition (coming from autograd's code) `if inputs is not None and len(inputs) == 0:` is not robust enough. FYI I found another issue related to this https://github.com/pytorch/pytorch/issues/70504.
I'm not a particular fan of the fact that inputs is allowed to be a single tensor, but I guess that removing this possibility would be a highly breaking change. So my suggestion is to fix the input validation of `torch.autograd.backward`.
What do you think? Should I make a PR to fix this?
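For reference, a sketch of the kind of normalization I have in mind (the helper name is hypothetical; this is not the actual autograd code): wrap a lone `Tensor` before any `len()` check, so a 0-d tensor no longer raises `TypeError`.

```python
# sketch only: more robust handling of the `inputs` argument to backward()
import torch


def _normalize_backward_inputs(inputs):
    if inputs is None:
        return None
    if isinstance(inputs, torch.Tensor):
        return (inputs,)  # a single (possibly 0-d) tensor is wrapped before len() is used
    inputs = tuple(inputs)
    if len(inputs) == 0:
        raise RuntimeError("'inputs' argument to backward() cannot be empty.")
    return inputs
```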
### Versions
PyTorch version: 2.8.0.dev20250408+cu118
CUDA used to build PyTorch: 11.8
OS: Ubuntu 22.04.5 LTS (x86_64)
Python version: 3.13.1
<details>
<summary>details</summary>
Collecting environment information...
PyTorch version: 2.8.0.dev20250408+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.13.1 (main, Jan 29 2025, 18:48:09) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU max MHz: 4100,0000
CPU min MHz: 800,0000
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1,5 MiB (6 instances)
L3 cache: 9 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250408+cu118
[pip3] torchaudio==2.6.0.dev20250408+cu118
[pip3] torchvision==0.22.0.dev20250408+cu118
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchjd 0.5.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
</details>
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,981,104,947
|
[c10d][fr] Enable FR analysis script for all coalesce op
|
fduwjj
|
closed
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing",
"suppress-api-compatibility-check",
"suppress-bc-linter"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150882
This PR is to enable FR for all coalesce ops. (batch p2p is enabled in the current script, so we will mainly focus on non-P2P ops)
For non-P2P coalesced ops, there are several ways to call them (due to legendary reasons):
1. Directly call a python API like `all_reduce_coalesced` in python; this will be deprecated soon.
2. Directly call an API inside PGNCCL like `allreduce_coalesced`. Case 1 will eventually call into this. This is not deprecated and will not be deprecated, IIUC.
3. Use `_coalescing_manager` in python, like:
>>> with _coalescing_manager():
>>>     for i in range(num_colls):
>>>         dist.all_reduce(tensors[i])
This way has two paths:
3.1 Fast path: when users call all-reduce, all-gather-into-tensor or reduce-scatter, we will only launch one big collective by calling the API from case 1.
3.2 Slow path: we call startCoalescing() in the beginning, then a bunch of collectives (each one will generate a FR entry), and then endCoalescing().
Inside startCoalescing(), groupStart() is called, and inside endCoalescing(), groupEnd() is then called. So although this is going to be one collective, we generate multiple entries, one for each coalesced collective.
4. Uneven all-gather and reduce-scatter follow the pattern mentioned in 3.2.
This script addresses all these cases. If there is only one collective launched, we just do the usual check as for a normal collective, but for cases like 3.2 and 4 we need to look at the collectives one by one. We cannot check the state match for these collectives; we can only check the state match for the last one, which is the work item with the coalesced label.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
Differential Revision: [D72752901](https://our.internmc.facebook.com/intern/diff/D72752901)
| true
|
2,981,104,794
|
[c10d][fr] Refactor analysis script for modularization and reusing for coalesce collectives
|
fduwjj
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150882
* __->__ #150881
Trying to make the code of FR analysis more reusable and modularized. So we split core error analysis logic into separate functions.
This PR mostly shuffles the code around a bit.
Differential Revision: [D72690120](https://our.internmc.facebook.com/intern/diff/D72690120)
| true
|
2,981,067,897
|
Document non-pytorch CUDA memory allocation and how to query it
|
wconstab
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 6
|
CONTRIBUTOR
|
This PR documents the fact that PyTorch does not have visibility into how every CUDA memory allocation happened - it only knows about allocations that went through the PyTorch CUDA allocator.
It also adds a code snippet showing how to use pynvml to query current GPU memory usage.
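A minimal sketch of that kind of query (assuming `pynvml` is installed; device index 0 is illustrative):
```python
# Query driver-level GPU memory usage with pynvml and compare it against
# what the PyTorch CUDA allocator knows about. Device index 0 is illustrative.
import pynvml
import torch

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"driver-reported used/total: {info.used} / {info.total} bytes")
print(f"allocator reserved:  {torch.cuda.memory_reserved(0)} bytes")
print(f"allocator allocated: {torch.cuda.memory_allocated(0)} bytes")
pynvml.nvmlShutdown()
```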
## Preview
Added a note at the top of "Understanding CUDA Memory Usage" doc:
<img width="732" alt="image" src="https://github.com/user-attachments/assets/69e28d2a-841a-4b1b-b886-e96fb5d76582" />
which links to a section below:
<img width="733" alt="image" src="https://github.com/user-attachments/assets/cab4f252-9ac2-4fc6-a45d-fdb958fc7dbc" />
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150880
| true
|
2,981,031,828
|
strict multidimensional slicing
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150879
WIP
Differential Revision: [D72673207](https://our.internmc.facebook.com/intern/diff/D72673207/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,981,014,386
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_uint8 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 7
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_uint8&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40194907361).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_uint8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,981,013,877
|
DISABLED test_multiple_module (__main__.TestInvokeSubgraphExportStrict)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 4
|
NONE
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_multiple_module&suite=TestInvokeSubgraphExportStrict&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40188284672).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 11 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_multiple_module`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/higher_order_ops/test_invoke_subgraph.py", line 1228, in test_multiple_module
self.assertTrue(torch.allclose(ep.module()(x, y), M()(x, y)))
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/exported_program.py", line 1297, in module
module = _unlift_exported_program_lifted_states(self)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/_unlift.py", line 420, in _unlift_exported_program_lifted_states
_register_attrs_to_new_gm(new_gm, ep.graph_signature, ep.state_dict, ep.constants)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/_unlift.py", line 261, in _register_attrs_to_new_gm
_assign_attr(
~~~~~~~~~~~~^
value, new_gm, name, attr_kind=_AttrKind.BUFFER, persistent=persistent
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1452, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1233, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1079, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 779, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile_inner
check_fn = CheckFunctionManager(
code,
...<2 lines>...
hooks.guard_fail_fn if hooks else None,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/guards.py", line 2534, in __init__
guard.create(builder)
~~~~~~~~~~~~^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_guards.py", line 357, in create
return self.create_fn(builder, self)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/guards.py", line 1721, in CONSTANT_MATCH
self.EQUALS_MATCH(guard)
~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/guards.py", line 1669, in EQUALS_MATCH
assert istype(val, ok_types) or pytree.is_constant_class(type(val)), (
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Unexpected type <class 're._ZeroSentinel'>
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/higher_order_ops/test_invoke_subgraph.py TestInvokeSubgraphExportStrict.test_multiple_module
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `higher_order_ops/test_invoke_subgraph.py`
cc @clee2000 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh
| true
|
2,980,984,221
|
[export] Refine draft-export CVE with Dim.AUTO
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Instead of using refine_dynamic_shapes_from_suggested_fixes to fix ConstraintViolationErrors in draft-export, we can just convert the dims to Dim.AUTO, which is less error-prone.
| true
|
2,980,917,451
|
Do not cover up `__dunder`__ method type-hints from `.pyi` file
|
alanhdu
|
open
|
[
"oncall: distributed",
"module: typing",
"triaged",
"open source",
"topic: not user facing",
"release notes: torch.func",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
In the build system, we generate a `torch/_C/__init__.pyi` that contains type hints for the base `TensorBase` class that `torch.Tensor` inherits from, including a bunch of annotations for these dunder methods.
Unfortunately, by defining them here, those annotations are automatically overwritten and "hidden", leading to a bunch of confusing type errors like
```python
def inv(x: torch.Tensor):
# Unsupported operand [58]: `/` is not supported for operand types `int` and `torch._tensor.Tensor`.
1 / x
```
This modifies the code to use the *runtime* behavior of these functions but to fall back on the `.pyi` annotations at type-checking time.
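A toy sketch of the general pattern (not the PR's actual code; the class and method names are illustrative): the runtime definition is hidden from the type checker so the stub's annotation stays authoritative.
```python
# Toy illustration of the pattern, not PyTorch's actual code: the runtime
# definition is skipped at type-checking time, so the richer annotation in
# the generated .pyi stub is the one mypy/pyright see.
from typing import TYPE_CHECKING


class MyTensor:
    if not TYPE_CHECKING:
        def __rtruediv__(self, other):
            # runtime behavior; the stub would declare something like
            #   def __rtruediv__(self, other: int | float) -> "MyTensor": ...
            return NotImplemented
```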
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @ezyang @malfet @xuzhao9 @gramster
| true
|
2,980,917,250
|
Fix aten.div type promotion for FakeTensor
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Summary:
When we divide a FakeTensor by an integer using the fast op implementation, the type promotion should be `ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT` so we get a float when dividing an int FakeTensor by an integer.
```
FAST = get_fast_op_impls()
fast_div = FAST[torch.ops.aten.div.Tensor]
fast_div(fake_tensor, some_int)
```
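For context, a minimal sanity check of the expected promotion under `FakeTensorMode` (this goes through regular fake-tensor dispatch rather than the fast-path table above):
```python
# Dividing an int64 fake tensor by an int should follow
# ELEMENTWISE_TYPE_PROMOTION_KIND.INT_TO_FLOAT and yield float32.
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    t = torch.ones(4, dtype=torch.int64)
    out = t / 2
    assert out.dtype == torch.float32
```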
Test Plan:
```
python test/test_fake_tensor.py -k test_fast_div
```
Differential Revision: D72667430
| true
|
2,980,871,008
|
Add Created On | Last Updated On to the docs
|
svekars
|
open
|
[
"module: build",
"module: docs",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 📚 The doc issue
The "Created On | Last Updated" dates provide immediate context about documentation age and maintenance status, helping users assess content reliability.
Benefits
--------
* **Content freshness indicator**: Users can quickly determine if documentation is current or outdated
* **Maintenance transparency**: Shows active vs. abandoned documentation
* **Trust building**: Regularly updated docs build user confidence
* **Prioritization aid**: Helps maintainers identify outdated content needing review
### Suggest a potential alternative/fix
Implementation
--------------
Can be implemented directly in pytorch_sphinx_theme2 and enabled in `conf.py`. Recommended for all sites by default. We can use a `git log` command to get both dates, or get just the created-on date and then use the `sphinx_last_updated_by_git` extension for efficiency to obtain the last-updated date. Dates would appear under each page's H1 heading:

This is already implemented in the tutorials repo: https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html
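A minimal `conf.py` sketch of what enabling this could look like (assuming the `sphinx-last-updated-by-git` package is used for the last-updated date; the `git log` helper for the created-on date is illustrative):
```python
# conf.py sketch -- assumes the sphinx-last-updated-by-git package is
# installed; the git-log helper for the created-on date is illustrative.
import subprocess

extensions = [
    # ... existing extensions ...
    "sphinx_last_updated_by_git",  # provides per-page last-updated dates
]

# Format the theme uses when rendering the last-updated date.
html_last_updated_fmt = "%b %d, %Y"


def created_on(path: str) -> str:
    """Date of the first commit touching `path` (YYYY-MM-DD), or '' if unknown."""
    out = subprocess.run(
        ["git", "log", "--follow", "--format=%as", "--", path],
        capture_output=True, text=True,
    )
    dates = out.stdout.split()
    return dates[-1] if dates else ""
```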
Discussion: Placement of Date Information
=========================================
Top of Page Placement (Under H1)
--------------------------------
**Pros:**
* Immediately visible to users
* Provides context before reading content
* Consistent with existing implementation in tutorials
**Cons:**
* Could be distracting from main content
* Takes up valuable "above the fold" space
Bottom of Page Placement
------------------------
**Pros:**
* Less distracting from main content
* Doesn't compete with important information
* Still available for users who want it
**Cons:**
* Easily missed on long pages
* Users may read outdated content without realizing it
* Requires scrolling to assess content freshness
cc @malfet @seemethere @sekyondaMeta @AlannaBurke
| true
|
2,980,815,251
|
[Inductor][NCU] Add kernel name filtering, and allow custom metrics
|
yf225
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150872
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,980,741,548
|
register_replacement should also respect arg_kwarg_vals
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
Follow-up to the stack over at https://github.com/pytorch/pytorch/pull/150511. Assigning myself so I don't forget; it will happen soon.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,980,730,515
|
Fix torchscript issues with reference quantized modules
|
Ivan-Dimitrov
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: AO frontend"
] | 15
|
CONTRIBUTOR
|
Summary:
The reference quantized modules for linear / conv / etc. fail to script with TorchScript due to two issues:
(1) The type of torch.qscheme doesn't script.
(2) The "_DTYPE_TO_QVALUE_BOUNDS" values were resolving to Union[float, int] instead of just int. We fix that with a hard cast (sketched below).
See: <internal post> + comments for more context
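A minimal sketch of the hard cast from (2), using a hypothetical stand-in for the real table:
```python
# _DTYPE_TO_QVALUE_BOUNDS here is a hypothetical stand-in for the real
# table, whose values TorchScript infers as Union[float, int].
from typing import Dict, Tuple, Union

import torch

_DTYPE_TO_QVALUE_BOUNDS: Dict[torch.dtype, Tuple[Union[float, int], Union[float, int]]] = {
    torch.quint8: (0, 255),
    torch.qint8: (-128, 127),
}


def qvalue_bounds(dtype: torch.dtype) -> Tuple[int, int]:
    lo, hi = _DTYPE_TO_QVALUE_BOUNDS[dtype]
    # hard cast so downstream (scripted) code sees plain ints
    return int(lo), int(hi)
```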
Test Plan: unit tests + fixing this NB N6923590
Differential Revision: D72652616
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,980,672,676
|
Move prologue_supported_inputs computations to def_kernel
|
laithsakka
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150869
* #151778
* #151773
* #151764
This avoids replaying load_input on a cache hit in the generate_code_cache.
Effect on the current benchmark, from a local run on a dev server:
18549985383 -> 15072230073
25697270062 -> 20738613297
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,980,672,162
|
DISABLED test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_True (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40189601067).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 160, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4097, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 14 but got 28.
Absolute difference: 14
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,980,658,621
|
Expand allowed_getattr_types_for_subgm to torch.Tensor
|
SherlockNoMad
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary:
As titled.
A regular weight has the type torch.nn.parameter.Parameter, while buffers and tensor constants have the type torch.Tensor.
Both types are valid.
Test Plan: CI
Differential Revision: D72657275
| true
|
2,980,654,925
|
add test for import cutlass
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150866
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,980,650,824
|
Add dynamic version for mm_loop benchmark
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150869
* #149267
* __->__ #150865
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,980,650,463
|
[dynamo] `torch.compile` graph breaks on `setattr` of type objects
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Observed in https://github.com/pytorch/pytorch/issues/150848#issuecomment-2787279312
Minimal repro:
```python
import torch
class Foo():
pass
@torch.compile(backend="eager", fullgraph=True)
def f(x):
Foo.bar = 42
return x + 1
f(torch.ones(2))
```
### Error logs
```verbatim
Traceback (most recent call last):
File "/home/ryanguo99/pt/scratch/setattr.py", line 11, in <module>
f(torch.ones(2))
File "/home/ryanguo99/pt/pytorch/torch/_dynamo/eval_frame.py", line 667, in _fn
raise e.with_traceback(None) from e.__cause__
torch._dynamo.exc.Unsupported: Failed to trace builtin operator
Explanation: Dynamo does not know how to trace builtin operator `setattr` with argument types ['type', 'str', 'int'] (has_kwargs False)
Hint: Avoid calling builtin `setattr` with argument types ['type', 'str', 'int']. Consider using an equivalent alternative function/method to `setattr`.
Hint: If you are attempting to call a logging function (e.g. `print`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.
Hint: Please report an issue to PyTorch.
Developer debug context: builtin setattr [<class 'torch._dynamo.variables.user_defined.UserDefinedClassVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False
from user code:
File "/home/ryanguo99/pt/scratch/setattr.py", line 8, in f
Foo.bar = 42
```
### Versions
main a8f6b40e, python 3.12
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,980,636,743
|
[ez][c10d] Disable start event recording for coalesced col and improve profile title
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150863
While looking at enabling FR analysis for coalesced collectives, I found that for the slow-path coalescing (collectives which are not all-gather, all-reduce or reduce-scatter), we still record the start event for them. This is wrong, and we should do the same thing as the end-event recording.
I also made the profiler title more visible when we pass in the opType for coalesced all-gather and reduce-scatter.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,980,620,931
|
[DTensor] Fix empty shard global-offset calculation
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150862
`compute_local_shape_and_global_offset` util computes the local shape of
a particular shard of a DTensor, and the global offset (which describes
how the shard fits into the global tensor).
When the tensor dim does not evenly divide into the mesh dim, uneven
sharding occurs. In some cases, uneven sharding results in an empty
shard.
e.g.
tensor dim size: 4096
mesh dim size: 30
ranks 0..27 have local size 18
rank 28 has local size 8
rank 29 has local size 0 <--- empty shard
The global offset for an empty shard was previously undefined and
returned values that were computed based on logic that assumes no empty
shards. This caused DCP to fail to save a checkpoint, because
deduplication logic could 'throw away' real (non-empty) shards thinking
they were duplicates of zero-sized shards with the same offset.
Now, we define the global offset of an empty shard to be the dim-size,
which is out of bounds of the tensor and can't overlap with any
non-empty shards.
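A small illustrative sketch of the convention (not the actual DTensor util), assuming torch.chunk-style (ceil-sized) sharding of a single dim:
```python
# torch.chunk-style sharding of one tensor dim over one mesh dim, with the
# convention that an empty shard's global offset is the dim size (out of bounds).
def local_size_and_offset(dim_size: int, num_ranks: int, rank: int):
    chunk = -(-dim_size // num_ranks)            # ceil division
    start = min(rank * chunk, dim_size)
    end = min(start + chunk, dim_size)
    local = end - start
    offset = start if local > 0 else dim_size    # empty shard -> dim_size
    return local, offset

# dim of length 4 over 3 ranks: ranks 0/1 get 2 elements, rank 2 is empty
assert [local_size_and_offset(4, 3, r) for r in range(3)] == [(2, 0), (2, 2), (0, 4)]
```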
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
2,980,613,704
|
Allow non-Tensor subclass in `torch.Tensor._make_wrapper_subclass`
|
ZhiyuanChen
|
open
|
[
"triaged",
"tensor subclass"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
I have a NestedTensor class like this:
```python
from collections.abc import Callable, Mapping, Sequence
from typing import Any, Iterable, SupportsFloat, Tuple
import torch
from torch import Tensor
from ..utils import method_cache
from .functions import NestedTensorFuncRegistry, NestedTensorFuncWrapper
from .utils import mask_tensor, pad_tensor, tensor_mask
try:
from typing import Self # type: ignore[attr-defined]
except ImportError:
from typing_extensions import Self
try:
from torch import nested
except ImportError:
nested = None
class NestedTensor:
__storage: Tuple[Tensor, ...]
dtype: torch.dtype | None = None
device: torch.device | None = None
requires_grad: bool | None = None
_pin_memory: bool = False
batch_first: bool = True
padding_value: SupportsFloat = 0.0
mask_value: bool = False
def __init__(
self,
*tensors: Iterable[Tensor],
dtype: torch.dtype | None = None,
device: torch.device | None = None,
requires_grad: bool | None = None,
pin_memory: bool = False,
batch_first: bool = True,
padding_value: SupportsFloat = 0.0,
mask_value: bool = False,
) -> None:
self.dtype = dtype
self.device = device
self.requires_grad = requires_grad
self._pin_memory = pin_memory
if len(tensors) == 1 and isinstance(tensors, Sequence):
tensors = tensors[0] # type: ignore
self._storage = tensors
self.batch_first = batch_first
self.padding_value = padding_value
self.mask_value = mask_value
@classmethod
def __torch_function__(cls, func, types, args=(), kwargs=None):
if kwargs is None:
kwargs = {}
if func not in NestedTensorFuncRegistry or not all(issubclass(t, (torch.Tensor, NestedTensor)) for t in types):
args = [a.tensor if hasattr(a, "tensor") else a for a in args]
for k, v in kwargs.items():
if hasattr(v, "tensor"):
kwargs[k] = v.tensor
return func(*args, **kwargs)
return NestedTensorFuncRegistry[func](*args, **kwargs)
@classmethod
def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
args = [a.tensor if hasattr(a, "tensor") else a for a in args]
for k, v in kwargs.items():
if hasattr(v, "tensor"):
kwargs[k] = v.tensor
return func(*args, **kwargs)
def __getattr__(self, name: str) -> Any:
if not self._storage:
raise ValueError(f"Unable to get {name} from an empty {self.__class__.__name__}")
ret = [getattr(i, name) for i in self._storage]
elem = ret[0]
if isinstance(elem, Tensor):
return NestedTensor(ret, **self._state)
if callable(elem):
return NestedTensorFuncWrapper(ret, state=self._state)
if elem.__hash__ is not None and len(set(ret)) == 1:
return elem
return ret
class NestedTensorFuncWrapper: # pylint: disable=R0903
__storage: Sequence[Callable] = []
state: Mapping = {}
def __init__(self, *callables: Iterable[Callable], state: Mapping | None = None) -> None:
if len(callables) == 1 and isinstance(callables, Sequence):
callables = callables[0] # type: ignore
self._storage = callables # type: ignore
if state is None:
state = {}
self.state = state
@property
def _storage(self):
return self.__storage
@_storage.setter
def _storage(self, callables: Sequence):
if not isinstance(callables, Sequence):
raise ValueError(f"callables must be a Sequence, bug got {type(callables)}")
if len(callables) == 0:
raise ValueError("callables must be a non-empty Sequence.")
if not callable(callables[0]):
raise ValueError(f"callables must be a Sequence of Callable, bug got {type(callables[0])}")
self.__storage = callables
def __call__(self, *args, **kwargs) -> NestedTensor | Sequence[Tensor]:
from .nested_tensor import NestedTensor
from .tensor import PNTensor
ret = [call(*args, **kwargs) for call in self._storage]
elem = ret[0]
if isinstance(elem, Tensor):
try:
return PNTensor(ret)
except ValueError:
return NestedTensor(ret, **self.state)
if elem.__hash__ is not None and len(set(ret)) == 1:
return elem
return ret
```
Since torch's Tensor has hundreds of methods (last time I checked, `len(dir(torch.Tensor))` was 745), I do not want to implement them one by one. So I modified the `__getattr__` method to handle function calls. All I need to do is override the methods that do not work as expected.
But when I try to make the NestedTensor class subclass `torch.Tensor`, like this:
```
class NestedTensor(torch.Tensor):
@staticmethod
def __new__(
cls,
*tensors: Iterable[Tensor],
dtype: torch.dtype | None = None,
device: torch.device | None = None,
batch_first: bool = True,
padding_value: SupportsFloat = 0.0,
mask_value: bool = False,
**kwargs,
):
ks = DispatchKeySet(DispatchKey.NestedTensor)
ks = ks.add(DispatchKey.AutogradNestedTensor)
return torch.Tensor._make_wrapper_subclass(cls, (), device=device, dtype=dtype, **kwargs)
```
All undefined methods will go through the MRO and use `torch.Tensor`'s version.
This magically works, but it returns a padded tensor, which breaks some functions like `mean` and `sqrt`.
Since there is a dedicated `torch.Tensor._make_subclass` for subclassing Tensor, it would be reasonable to make `torch.Tensor._make_wrapper_subclass` work for non-`Tensor` subclasses.
By the way, the second parameter of `torch.Tensor._make_wrapper_subclass` (size) is also confusing; I passed an empty tuple for now, and it seems to work.
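For reference, a minimal sketch of the usual wrapper-subclass pattern, where the size argument describes the shape of the outer wrapper tensor (illustrative only; dispatch handlers omitted):
```python
# The size argument of _make_wrapper_subclass is the shape reported by the
# outer wrapper tensor, while the real data lives in the wrapped `elem`.
import torch


class Wrapper(torch.Tensor):
    @staticmethod
    def __new__(cls, elem: torch.Tensor):
        return torch.Tensor._make_wrapper_subclass(
            cls,
            elem.size(),                 # outer shape reported to callers
            dtype=elem.dtype,
            device=elem.device,
            requires_grad=elem.requires_grad,
        )

    def __init__(self, elem: torch.Tensor):
        self.elem = elem
```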
### Alternatives
Introduce another API that works for non-tensor subclasses.
### Additional context
You can find a MWE of the two versions at [tensor1](https://github.com/ZhiyuanChen/DanLing/blob/tensor/danling/tensor/nested_tensor.py) and [tensor2](https://github.com/ZhiyuanChen/DanLing/blob/tensor2/danling/tensor/nested_tensor.py).
cc @ezyang @albanD
| true
|
2,980,588,621
|
Fill config2launcher with correct launchers during cache hit coordinate descent
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150860
This bug was crazy hard to reproduce, so I can't seem to get a unit test written that isn't the internal one I used for debugging.
Here's a short TLDR of the bug:
- Due to D71983456(OSS: https://github.com/pytorch/pytorch/pull/149910), we cache CachingAutotuners in memory.
- Importantly: **Saving stuff in PyCodeCache in memory is not semantically equivalent to writing to disk**. By saving it in memory, CachingAutotuners do not reset global state.
- It's possible through recompiles for different dynamo frames to compile down to exactly the same inductor output code. This involves models that run multiple times but differ very subtly, or in ways that cause a dynamo guard failure but not different inductor output code.
- Because of this, we reuse CachingAutotuners for a second compile (with different example inputs, just the same triton kernel code)
- CachingAutotuners have a Coordinate Descent class on them, which has a cache: https://fburl.com/code/4igrsams (OSS: https://github.com/pytorch/pytorch/blob/aafc4b6188b70cf808f756f23b1a05355bcb7696/torch/_inductor/runtime/coordinate_descent_tuner.py#L69)
- Because we are caching these in memory and not on disk, this cache is **not cleared** between runs.
- However, this variable is *not* saved on the class, and is reinitialized every time we do autotuning: https://fburl.com/code/n2o8tmje
(OSS: https://github.com/pytorch/pytorch/blob/aafc4b6188b70cf808f756f23b1a05355bcb7696/torch/_inductor/runtime/triton_heuristics.py#L933)
- `config2launcher` is added when we call `benchmark_one_config`, but on a CoorDesc *cache hit*, we never call `benchmark_one_config`! So we end up returning None, and erroring with:
```
AttributeError: 'NoneType' object has no attribute 'store_cubin'
```
This fixes the problem for now by just recompiling the launcher. Technically, we might be able to save config2launcher on the class to avoid this, but I don't want to risk another weird cache safety bug here, so taking the simpler approach for now.
Note that this error only reproduces if:
- None of AOTAutogradCache, FXgraphCache hit on the second entry: otherwise, the CachingAutotuner will go through a pickling and then not be saved in memory
- We haven't spawned parallel compile workers. If there are parallel compile workers, we pickle the autotuner on the way from the worker to the parent process, once again resetting the Autotuner.
- The autotune cache doesn't already have the best config stored in it
So it was extraordinarily hard to debug/reproduce. Because of this, I have a complicated internal unit test but no OSS test that can trigger the exact problem. I'll work on a separate test later, but this needs to go in to fix a sev, so we're landing it based on an internal test only.
Differential Revision: [D72655382](https://our.internmc.facebook.com/intern/diff/D72655382/)
**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D72655382/)!
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,980,573,902
|
RMS norm causes NaNs when used with torch.compile + float8 with rowwise scales
|
danielvegamyhre
|
open
|
[
"high priority",
"triaged",
"has workaround",
"module: correctness (silent)",
"months",
"module: norms and normalization",
"bug",
"oncall: pt2",
"module: inductor",
"module: float8"
] | 33
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This PR (https://github.com/pytorch/pytorch/pull/147203) causes NaNs in torchtitan training when RMS norm is used with torch.compile and float8 training with rowwise scales.
- Sometime between 2.6.0 and present, a change in pytorch core was introduced that caused loss to not go down and then eventually become NaN after 40 or so steps.
- I binary searched the commits in this time range and confirmed this commit is what caused the regression ([link](https://github.com/pytorch/torchtitan/issues/1056#issuecomment-2785050324))
- I also confirmed the issue reproduces using `rmsnorm` with the latest nightly build, and does not reproduce using `layernorm` ([link](https://github.com/pytorch/torchtitan/issues/1056#issuecomment-2787101126))
As a next step I plan to diff the triton kernels generated by inductor with vs without the change, but in the meantime if PR author @riccardofelluga (or anyone else) has thoughts to share on this please let me know.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov @yanbing-j @vkuzo @albanD @jerryzh168 @riccardofelluga @lessw2020 @drisspg
### Versions
pytorch: nightly build
torchao: 0.9.0
torchtitan: main branch@ HEAD
| true
|
2,980,471,381
|
Add basic unit test and noop config
|
Lucaskabela
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150978
* #150885
* __->__ #150858
### Tests
```
python test/dynamo/test_install_params_as_graph_attr.py
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,980,440,089
|
No way to save/load model weights between PyTorch and LibTorch (C++) in a compatible way
|
FuryBaM
|
open
|
[
"module: cpp",
"module: serialization",
"triaged",
"enhancement"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
LibTorch (C++) does not provide a way to save or load model weights in a way compatible with Python’s torch.save() / torch.load() or state_dict() / load_state_dict(). This makes it hard to share models between Python and C++ environments.
### Alternatives
I've managed to implement cross-compatibility between PyTorch (Python) and LibTorch (C++) by creating my own binary weight serializer. Currently, LibTorch lacks a native and simple way to save/load only model weights in a format compatible with Python's torch.nn.Module.state_dict(), and vice versa.
Here is a working example of what I had to do manually (binary format):
**C++ Code (LibTorch)**
```cpp
void save_weights(const std::string& path) {
std::ofstream file(path, std::ios::binary);
if (!file) throw std::runtime_error("Failed to open file for writing");
for (const auto& pair : named_parameters()) {
const std::string& name = pair.key();
const torch::Tensor& param = pair.value();
torch::Tensor tensor = param.cpu().contiguous();
int64_t name_len = name.size();
file.write(reinterpret_cast<const char*>(&name_len), sizeof(name_len));
file.write(name.data(), name_len);
auto shape = tensor.sizes();
int64_t ndims = shape.size();
file.write(reinterpret_cast<const char*>(&ndims), sizeof(ndims));
for (auto dim : shape) {
file.write(reinterpret_cast<const char*>(&dim), sizeof(dim));
}
int64_t num_elems = tensor.numel();
file.write(reinterpret_cast<const char*>(tensor.data_ptr()), num_elems * tensor.element_size());
}
for (const auto& pair : named_buffers()) {
const std::string& name = pair.key();
const torch::Tensor& param = pair.value();
torch::Tensor tensor = param.cpu().contiguous();
int64_t name_len = name.size();
file.write(reinterpret_cast<const char*>(&name_len), sizeof(name_len));
file.write(name.data(), name_len);
auto shape = tensor.sizes();
int64_t ndims = shape.size();
file.write(reinterpret_cast<const char*>(&ndims), sizeof(ndims));
for (auto dim : shape) {
file.write(reinterpret_cast<const char*>(&dim), sizeof(dim));
}
int64_t num_elems = tensor.numel();
file.write(reinterpret_cast<const char*>(tensor.data_ptr()), num_elems * tensor.element_size());
}
}
void load_weights(const std::string& path) {
std::ifstream file(path, std::ios::binary);
if (!file) throw std::runtime_error("Failed to open file for reading");
std::unordered_map<std::string, torch::Tensor> params_map;
for (auto& pair : named_parameters()) {
params_map[pair.key()] = pair.value();
}
std::unordered_map<std::string, torch::Tensor> buffers_map;
for (auto& pair : named_buffers()) {
buffers_map[pair.key()] = pair.value();
}
while (file.peek() != EOF) {
int64_t name_len;
file.read(reinterpret_cast<char*>(&name_len), sizeof(name_len));
std::string name(name_len, '\0');
file.read(name.data(), name_len);
torch::Tensor* target_tensor = nullptr;
auto it_param = params_map.find(name);
if (it_param != params_map.end()) {
target_tensor = &it_param->second;
}
else {
auto it_buf = buffers_map.find(name);
if (it_buf != buffers_map.end()) {
target_tensor = &it_buf->second;
}
else {
throw std::runtime_error("Parameter or buffer " + name + " not found in model");
}
}
torch::Tensor& param = *target_tensor;
int64_t ndims;
file.read(reinterpret_cast<char*>(&ndims), sizeof(ndims));
std::vector<int64_t> shape(ndims);
for (int i = 0; i < ndims; ++i) {
file.read(reinterpret_cast<char*>(&shape[i]), sizeof(shape[i]));
}
auto expected_shape = param.sizes();
if (shape != expected_shape) {
throw std::runtime_error("Shape mismatch for " + name);
}
int64_t num_elems = 1;
for (auto dim : shape) num_elems *= dim;
file.read(reinterpret_cast<char*>(param.data_ptr()), num_elems * param.element_size());
}
}
```
**Python Code**
```python
def save_weights(self, path):
with open(path, 'wb') as file:
for name, param in self.named_parameters():
tensor = param.detach().cpu().contiguous()
name_len = len(name)
file.write(name_len.to_bytes(8, byteorder='little'))
file.write(name.encode('utf-8'))
shape = tensor.size()
ndims = len(shape)
file.write(ndims.to_bytes(8, byteorder='little'))
for dim in shape:
file.write(dim.to_bytes(8, byteorder='little'))
num_elems = tensor.numel()
file.write(tensor.numpy().tobytes())
for name, buffer in self.named_buffers():
tensor = buffer.detach().cpu().contiguous()
name_len = len(name)
file.write(name_len.to_bytes(8, byteorder='little'))
file.write(name.encode('utf-8'))
shape = tensor.size()
ndims = len(shape)
file.write(ndims.to_bytes(8, byteorder='little'))
for dim in shape:
file.write(dim.to_bytes(8, byteorder='little'))
num_elems = tensor.numel()
file.write(tensor.numpy().tobytes())
def load_weights(self, path):
if not os.path.exists(path):
raise RuntimeError("Failed to open file for reading")
with open(path, 'rb') as file:
params_map = {name: param for name, param in self.named_parameters()}
buffers_map = {name: buffer for name, buffer in self.named_buffers()}
while True:
name_len = int.from_bytes(file.read(8), byteorder='little')
if not name_len:
break
name = file.read(name_len).decode('utf-8')
target_tensor = params_map.get(name)
if target_tensor is None:
target_tensor = buffers_map.get(name)
if target_tensor is None:
raise RuntimeError(f"Parameter or buffer '{name}' not found in model")
ndims = int.from_bytes(file.read(8), byteorder='little')
shape = [int.from_bytes(file.read(8), byteorder='little') for _ in range(ndims)]
if len(shape) == 0:
shape = []
expected_shape = target_tensor.shape
if tuple(shape) != tuple(expected_shape):
raise RuntimeError(f"Shape mismatch for {name}, expected {expected_shape}, got {shape}")
num_elems = 1
for dim in shape:
num_elems *= dim
data = file.read(num_elems * target_tensor.element_size())
tensor_data = torch.frombuffer(data, dtype=target_tensor.dtype).reshape(shape)
with torch.no_grad():
target_tensor.copy_(tensor_data)
```
**Why this matters**
LibTorch doesn’t support Python’s torch.save() and torch.load() directly due to incompatible serialization mechanisms. Also, TorchScript lacks the ability to export model weights easily for reloading in C++.
**It would be very helpful if PyTorch could support either of the following:**
- A standard binary format for model weights compatible between Python ↔ C++.
- A .load_state_dict() and .state_dict() equivalent in LibTorch.
- Built-in utility to read .pt or .pth weight-only files in C++.
This would massively simplify workflows for deploying or testing models across environments.
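For what it's worth, a commonly used workaround sketch, assuming it is acceptable to go through TorchScript: pack the state_dict into a scripted container module in Python, then load the file in C++ with torch::jit::load and read its named_buffers().
```python
# Pack a state_dict into a scripted "container" module; the C++ side can
# then torch::jit::load() the file and iterate named_buffers() to copy the
# tensors into its own module by name. Sketch only, not a full solution.
import torch


class _Container(torch.nn.Module):
    def forward(self) -> int:  # trivial forward so the module scripts
        return 0


def export_state_dict(model: torch.nn.Module, path: str) -> None:
    container = _Container()
    for name, tensor in model.state_dict().items():
        # buffer names cannot contain '.', so flatten them
        container.register_buffer(name.replace(".", "__"), tensor.detach().cpu())
    torch.jit.script(container).save(path)
```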
### Additional context
You can see the full code in my repo [Toguzkumalak](https://github.com/FuryBaM/Toguzkumalak/tree/master/Toguzkumalak).
Results in LibTorch (C++):
```
Policy: [0.110181, 0.122630, 0.111052, 0.103369, 0.105678, 0.114038, 0.108151, 0.115452, 0.109449]
Value prediction: 0.051766
State: [9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 0.000000, 0.000000, 0.000000]
```
In Python:
```python
if __name__ == "__main__":
model = TNET()
model.load_binary_weights("C:/Users/Akzhol/source/repos/Toguzkumalak/Toguzkumalak/build/Release/model_data/weights.dat")
input_data = torch.tensor([9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000,
9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000, 9.000000,
9.000000, 9.000000, 0.000000, 0.000000, 0.000000], dtype=torch.float32).view(1, -1)
model.eval()
output = model(input_data)
print(output)
```
```
(tensor([[0.1102, 0.1226, 0.1111, 0.1034, 0.1057, 0.1140, 0.1082, 0.1155, 0.1094]],
grad_fn=<ExpBackward0>), tensor([[0.0518]], grad_fn=<TanhBackward0>))
```
cc @jbschlosser @mruberry @mikaylagawarecki
| true
|
2,980,431,315
|
[logging] Separate cuda synchronize overhead in autotuning
|
masnesral
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: In order to more accurately debug the overhead of autotuning (and pad_mm), explicity do a cuda.synchronize before benchmarking and time that.
Test Plan: See internal run here: https://fburl.com/f365xfcj
Zooming on on relevant examples from the perfetto:
<img width="1076" alt="Screenshot 2025-04-08 at 9 41 08 AM" src="https://github.com/user-attachments/assets/ce6f7da9-cf34-432d-a524-730198a22399" />
<img width="1091" alt="Screenshot 2025-04-08 at 9 39 21 AM" src="https://github.com/user-attachments/assets/3be0a1d0-77d5-48e6-8891-9898214bcc34" />
Differential Revision: D72652092
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,980,414,143
|
Revert "[CUDA] Only use vec128 if CUDA version is newer than 12.8"
|
atalman
|
closed
|
[
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
Reverts pytorch/pytorch#150818
Reverting since reverted on trunk
| true
|
2,980,383,313
|
[pytorch] add header docs for TORCH_LIBRARY_THREAD_UNSAFE_LAZY_INIT
|
rmaz
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: mobile"
] | 7
|
CONTRIBUTOR
|
Summary: Add header docs for the experimental TORCH_LIBRARY_THREAD_UNSAFE_LAZY_INIT feature, and guard behind C10_MOBILE.
Reviewed By: albanD
Differential Revision: D72572345
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,980,223,011
|
Inconsistent value from `torch.sum`
|
LogicFan
|
closed
|
[
"module: rocm",
"triaged"
] | 10
|
NONE
|
### 🐛 Describe the bug
Applying `torch.sum` to the same vector results in a different value.
```
import torch
j = torch.load('./j0.pt')
for _ in range(1000):
print(torch.sum(j))
```
In rare cases, it produces a very wrong value.

[j0.zip](https://github.com/user-attachments/files/19652473/j0.zip)
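A diagnostic sketch that may help separate ordinary accumulation-order noise from the rare, grossly wrong values (assumes j0.pt holds a floating-point tensor):
```python
# Compare each GPU sum against a float64 CPU reference; only print results
# that deviate far beyond what reduction-order differences could explain.
import torch

j = torch.load("./j0.pt")
ref = j.double().cpu().sum().item()
tol = 1e-3 * max(abs(ref), 1.0)
for i in range(1000):
    s = torch.sum(j).item()
    if abs(s - ref) > tol:
        print(f"iteration {i}: got {s}, reference {ref}")
```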
### Versions
PyTorch version: 2.3.0+rocm6.2.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41134-65d174c3e
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 7900 XTX (gfx1100)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41134
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
BogoMIPS: 6787.30
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid fsrm
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton-rocm==2.3.0+rocm6.2.3.5a02332983
[pip3] torch==2.3.0+rocm6.2.3
[pip3] torch-tb-profiler==0.4.3
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.18.0+rocm6.2.3
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-triton-rocm 2.3.0+rocm6.2.3.5a02332983 pypi_0 pypi
[conda] torch 2.3.0+rocm6.2.3 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchvision 0.18.0+rocm6.2.3 pypi_0 pypi
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,980,215,463
|
NCCL init hits CUDA failure 'invalid argument' on 12.2 driver
|
kwen2501
|
open
|
[
"oncall: distributed",
"triaged",
"module: nccl",
"has workaround"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Error seen with nightly build, e.g. torch==2.8.0.dev20250327+cu126
```
[2025-04-08 08:39:46] devgpu263:589012:591652 [0] transport/nvls.cc:254 NCCL WARN Cuda failure 1 'invalid argument'
devgpu263:589012:591652 [0] NCCL INFO transport/nvls.cc:409 -> 1
devgpu263:589012:591652 [0] NCCL INFO init.cc:1141 -> 1
devgpu263:589012:591652 [0] NCCL INFO init.cc:1409 -> 1
devgpu263:589012:591652 [0] NCCL INFO group.cc:75 -> 1 [Async thread]
devgpu263:589012:589012 [0] NCCL INFO group.cc:422 -> 1
devgpu263:589012:589012 [0] NCCL INFO group.cc:581 -> 1
devgpu263:589012:589012 [0] NCCL INFO init.cc:1836 -> 1
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/kw2501/local/driver_12.2/repro.py", line 12, in <module>
[rank0]: dist.all_reduce(x)
[rank0]: File "/home/kw2501/.conda/envs/titan/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/kw2501/.conda/envs/titan/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2868, in all_reduce
[rank0]: work = group.allreduce([tensor], opts)
[rank0]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.26.2
[rank0]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank0]: Last error:
[rank0]: Cuda failure 1 'invalid argument'
```
Mini repro:
```python
import os
import torch
import torch.distributed as dist
if __name__ == "__main__":
rank = int(os.getenv("RANK"))
world_size = int(os.getenv("WORLD_SIZE"))
device = torch.device("cuda", rank)
dist.init_process_group("nccl", rank=rank, world_size=world_size)
x = torch.empty(4, device=device)
dist.all_reduce(x)
print(x)
```
Command line:
```
NCCL_DEBUG=INFO torchrun --standalone --nproc-per-node 4 repro.py
```
**Fails with 12.2 driver:**
`Driver Version: 535.154.05 CUDA Version: 12.2`
**Works with 12.4 driver:**
`Driver Version: 550.90.07 CUDA Version: 12.4`
```
                            | Run w/ 12.2 driver | Run w/ 12.4 driver or higher
torch built w/ 12.2 toolkit | Works              | Not tested
torch built w/ 12.6 toolkit | Fails              | Works
```
Line 254 in nvls.cc:
```
CUCHECKGOTO(cuMulticastBindMem(*mcHandle, 0/*mcOffset*/, *ucHandle, 0/*memOffset*/, ucsize, 0/*flags*/), ret, fail);
```
### Versions
```
[kw2501@devgpu263.prn2 ~]$ python collect_env.py
Collecting environment information...
PyTorch version: 2.8.0.dev20250327+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk20_zion_2830_g3e5ab162667d-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn.so.9.6.0
/usr/lib64/libcudnn_adv.so.9.6.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn.so.9.6.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.6.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib64/libcudnn_graph.so.9.6.0
/usr/lib64/libcudnn_heuristic.so.9.6.0
/usr/lib64/libcudnn_ops.so.9.6.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 84%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4792.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250327+cu126
[pip3] torchaudio==2.6.0.dev20250329+cu126
[pip3] torchdata==0.11.0
[pip3] torchtitan==0.0.2
[pip3] torchvision==0.22.0.dev20250329+cu126
[pip3] triton==3.2.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250327+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250329+cu126 pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchtitan 0.0.2 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250329+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,980,210,594
|
Cannot export torch.sym_max(x.shape[0], y.shape[0])
|
xadupre
|
closed
|
[
"oncall: pt2",
"module: dynamic shapes",
"oncall: export"
] | 8
|
COLLABORATOR
|
### 🐛 Describe the bug
Nothing I tried for this example works.
```python
import torch
class Model(torch.nn.Module):
def forward(self, x, y):
s1 = max(x.shape[0], y.shape[0])
s2 = max(x.shape[1], y.shape[1])
z = torch.zeros((s1, s2), dtype=x.dtype)
z[:x.shape[0], :x.shape[1]] = x
z[:y.shape[0], :y.shape[1]] += y
return z
model = Model()
x = torch.arange(6).reshape((2,3))
y = torch.arange(6).reshape((3,2)) * 10
z = model(x, y)
print(f"x.shape={x.shape}, y.shape={y.shape}, z.shape={z.shape}")
DYN = torch.export.Dim.DYNAMIC
ep = torch.export.export(model, (x,y), dynamic_shapes=({0:DYN, 1:DYN},{0:DYN, 1:DYN}))
print(ep)
# %%
# But does it really work?
# We just print the shapes.
model_ep = ep.module()
ez = model_ep(x,y)
print("case 1:", z.shape, ez.shape)
x = torch.arange(4).reshape((2,2))
y = torch.arange(9).reshape((3,3))
try:
ez = model_ep(x,y)
print("case 2:", model(x,y).shape, ez.shape)
except Exception as e:
print("case 2 failed:", e)
```
Which gives:
```
x.shape=torch.Size([2, 3]), y.shape=torch.Size([3, 2]), z.shape=torch.Size([3, 3])
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, x: "i64[s35, s16]", y: "i64[s58, s43]"):
#
sym_size_int_8: "Sym(s35)" = torch.ops.aten.sym_size.int(x, 0)
sym_size_int_9: "Sym(s16)" = torch.ops.aten.sym_size.int(x, 1)
sym_size_int_10: "Sym(s58)" = torch.ops.aten.sym_size.int(y, 0)
sym_size_int_11: "Sym(s43)" = torch.ops.aten.sym_size.int(y, 1)
# File: /home/xadupre/github/onnx-diagnostic/_doc/recipes/plot_dynamic_shapes_max.py:20 in forward, code: z = torch.zeros((s1, s2), dtype=x.dtype)
zeros: "i64[s58, s16]" = torch.ops.aten.zeros.default([sym_size_int_10, sym_size_int_9], dtype = torch.int64, device = device(type='cpu'), pin_memory = False); sym_size_int_9 = None
# File: /home/xadupre/github/onnx-diagnostic/_doc/recipes/plot_dynamic_shapes_max.py:21 in forward, code: z[:x.shape[0], :x.shape[1]] = x
slice_1: "i64[s35, s16]" = torch.ops.aten.slice.Tensor(zeros, 0, 0, sym_size_int_8); sym_size_int_8 = None
copy_: "i64[s35, s16]" = torch.ops.aten.copy_.default(slice_1, x); slice_1 = x = copy_ = None
# File: /home/xadupre/github/onnx-diagnostic/_doc/recipes/plot_dynamic_shapes_max.py:22 in forward, code: z[:y.shape[0], :y.shape[1]] += y
slice_2: "i64[s58, s16]" = torch.ops.aten.slice.Tensor(zeros, 0, None, sym_size_int_10); sym_size_int_10 = None
slice_3: "i64[s58, s43]" = torch.ops.aten.slice.Tensor(slice_2, 1, None, sym_size_int_11); slice_2 = None
add_: "i64[s58, s43]" = torch.ops.aten.add_.Tensor(slice_3, y); slice_3 = y = None
slice_4: "i64[s58, s43]" = torch.ops.aten.slice.Tensor(zeros, 1, 0, sym_size_int_11); sym_size_int_11 = None
copy__1: "i64[s58, s43]" = torch.ops.aten.copy_.default(slice_4, add_); slice_4 = add_ = copy__1 = None
return (zeros,)
Graph signature:
# inputs
x: USER_INPUT
y: USER_INPUT
# outputs
zeros: USER_OUTPUT
Range constraints: {s35: VR[2, int_oo], s16: VR[2, int_oo], s58: VR[2, int_oo], s43: VR[2, int_oo]}
case 1: torch.Size([3, 3]) torch.Size([3, 3])
case 2 failed: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1
```
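For reference, a minimal sketch of the variant the title refers to, with the Python builtin `max` replaced by `torch.sym_max` (the class name is mine; the report does not spell out this exact code, and it presumably runs into the same specialization):
```python
import torch


class ModelSymMax(torch.nn.Module):
    # Same model as in the repro above, with the Python builtin max replaced
    # by torch.sym_max so the maximum is kept symbolic rather than resolved
    # by a Python comparison at trace time.
    def forward(self, x, y):
        s1 = torch.sym_max(x.shape[0], y.shape[0])
        s2 = torch.sym_max(x.shape[1], y.shape[1])
        z = torch.zeros((s1, s2), dtype=x.dtype)
        z[: x.shape[0], : x.shape[1]] = x
        z[: y.shape[0], : y.shape[1]] += y
        return z
```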
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250408+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] model-explorer-onnx==0.3.4
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-array-api==0.3.0
[pip3] onnx-extended==0.4.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-genai-cuda==0.6.0
[pip3] onnxruntime-training==1.22.0+cu126
[pip3] onnxscript==0.3.0.dev20250301
[pip3] optree==0.14.1
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250408+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250408+cu126
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250408+cu126
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,980,115,696
|
[BE] Add FrozenOrderedSet
|
eellison
|
closed
|
[
"good first issue",
"triaged",
"better-engineering",
"oncall: pt2"
] | 6
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Subclass the `OrderedSet` in `torch/utils/_ordered_set.py` and raise an error on update (and other mutating operations). We might use this in some places in the compiler/PyTorch.
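A minimal sketch of what this could look like, assuming the existing `OrderedSet` from `torch/utils/_ordered_set.py`; the list of blocked methods is illustrative, and depending on how `OrderedSet.__init__` populates itself, construction may need special handling:
```python
from torch.utils._ordered_set import OrderedSet


class FrozenOrderedSet(OrderedSet):
    """OrderedSet that raises on attempts to mutate it (illustrative sketch)."""

    def _frozen(self, *args, **kwargs):
        raise RuntimeError(f"{type(self).__name__} is immutable")

    # Illustrative list of mutators to block; the real list would have to
    # match OrderedSet's actual API.
    add = discard = remove = pop = clear = update = _frozen
    __ior__ = __iand__ = __ixor__ = __isub__ = _frozen
```
Usage would be along the lines of `FrozenOrderedSet([1, 2, 3])`, with any later `add`/`update` raising immediately.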
cc @chauhang @penguinwu @Skylion007 who made some changes here
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,980,069,764
|
logging start of torch elastic workers.
|
aschhabra
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (torchelastic)"
] | 11
|
CONTRIBUTOR
|
Summary:
We would like to log the start of the workers. This will help make the logging complete.
Test Plan:
unit tests
https://www.internalfb.com/intern/testinfra/testrun/6473924724652056
e2e tests
https://www.internalfb.com/mlhub/pipelines/runs/mast/f712311762-27449483648-TrainingApplication_V403K?job_attempt=0&version=0&tab=execution_details&env=PRODUCTION
Reviewed By: tnykiel
Differential Revision: D72297314
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,980,016,941
|
torch.compile cannot handle plain nn.Parameter subclasses
|
zou3519
|
closed
|
[
"triaged",
"module: regression",
"tensor subclass",
"oncall: pt2",
"module: dynamo",
"vllm-compile",
"dynamo-triage-jan2025"
] | 12
|
CONTRIBUTOR
|
Context: vLLM has a lot of plain `nn.Parameter` subclasses (i.e. no `__torch_function__`, no `__torch_dispatch__`), and Dynamo does not handle them well:
Repro:
```
import torch
from torch.nn import Parameter
from typing import Callable
class BasevLLMParameter(Parameter):
"""
Base parameter for vLLM linear layers. Extends the torch.nn.parameter
by taking in a linear weight loader. Will copy the loaded weight
into the parameter when the provided weight loader is called.
"""
def __new__(cls, data: torch.Tensor, **kwargs):
return super().__new__(cls, data=data, requires_grad=False)
def __init__(self, data: torch.Tensor, weight_loader: Callable):
self._weight_loader = weight_loader
model = torch.nn.Linear(3, 3)
model.weight = BasevLLMParameter(model.weight, weight_loader=lambda x: x)
model.bias = BasevLLMParameter(model.bias, weight_loader=lambda x: x)
@torch.compile
def f(x):
y = model(x)
return y
x = torch.randn(2, 3)
f(x)
```
Gives:
```
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'builtin_function_or_method' object has no attribute '__func__'
from user code:
File "/home/rzou/dev/ocu11/pt-ocu11/foo.py", line 26, in f
y = model(x)
File "/home/rzou/dev/ocu11/pt-ocu11/torch/nn/modules/linear.py", line 125, in forward
return F.linear(input, self.weight, self.bias)
```
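For context, a hedged sketch of one workaround pattern (not necessarily what vLLM actually does): once weight loading is finished, swap the subclass parameters back to plain `nn.Parameter` before compiling. The helper below is hypothetical:
```python
import torch


def unwrap_parameter_subclasses(module: torch.nn.Module) -> None:
    # Replace every parameter whose type is a Parameter subclass with a plain
    # nn.Parameter holding the same data, so Dynamo only ever sees plain
    # parameters.
    for submodule in module.modules():
        for name, param in list(submodule.named_parameters(recurse=False)):
            if type(param) is not torch.nn.Parameter:
                setattr(
                    submodule,
                    name,
                    torch.nn.Parameter(param.data, requires_grad=param.requires_grad),
                )
```
Calling this on `model` before `torch.compile` means Dynamo only sees plain parameters, so the repro above would be expected to compile.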
cc @ezyang @albanD @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,979,952,101
|
[Build] Fix fbgemm build with gcc-12+
|
malfet
|
closed
|
[
"Merged",
"release notes: build",
"topic: bug fixes"
] | 3
|
CONTRIBUTOR
|
By suppressing more warnings
TODO: fbgemm pin really needs to get updated
| true
|
2,979,844,106
|
PyTorch cannot be built by gcc-12
|
malfet
|
closed
|
[
"module: build",
"triaged",
"module: third_party"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
CI
### Versions
Attempt to run `python setup.py develop` results in
```
In file included from /usr/lib/gcc/x86_64-linux-gnu/12/include/immintrin.h:43:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avxintrin.h: In function ‘void fbgemm::FloatToBfloat16_avx512(const float*, bfloat16*, size_t)’:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avxintrin.h:1224:11: note: ‘__Y’ was declared here
1224 | __m256i __Y = __Y;
| ^~~
In function ‘__m512i _mm512_cvtepu16_epi32(__m256i)’,
inlined from ‘void fbgemm::{anonymous}::Bfloat16ToFloatKernelAvx512(const fbgemm::bfloat16*, float*)’ at /root/pytorch/third_party/fbgemm/src/FbgemmBfloat16ConvertAvx512.cc:39:47,
inlined from ‘void fbgemm::Bfloat16ToFloat_avx512(const bfloat16*, float*, size_t)’ at /root/pytorch/third_party/fbgemm/src/FbgemmBfloat16ConvertAvx512.cc:57:32:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:2388:52: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
2388 | return (__m512i) __builtin_ia32_pmovzxwd512_mask ((__v16hi) __A,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
2389 | (__v16si)
| ~~~~~~~~~
2390 | _mm512_undefined_epi32 (),
| ~~~~~~~~~~~~~~~~~~~~~~~~~~
2391 | (__mmask16) -1);
| ~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h: In function ‘void fbgemm::Bfloat16ToFloat_avx512(const bfloat16*, float*, size_t)’:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:206:11: note: ‘__Y’ was declared here
206 | __m512i __Y = __Y;
| ^~~
In function ‘__m512i _mm512_slli_epi32(__m512i, unsigned int)’,
inlined from ‘void fbgemm::{anonymous}::Bfloat16ToFloatKernelAvx512(const fbgemm::bfloat16*, float*)’ at /root/pytorch/third_party/fbgemm/src/FbgemmBfloat16ConvertAvx512.cc:40:38,
inlined from ‘void fbgemm::Bfloat16ToFloat_avx512(const bfloat16*, float*, size_t)’ at /root/pytorch/third_party/fbgemm/src/FbgemmBfloat16ConvertAvx512.cc:57:32:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:1242:50: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
1242 | return (__m512i) __builtin_ia32_pslldi512_mask ((__v16si) __A, __B,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
1243 | (__v16si)
| ~~~~~~~~~
1244 | _mm512_undefined_epi32 (),
| ~~~~~~~~~~~~~~~~~~~~~~~~~~
1245 | (__mmask16) -1);
| ~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h: In function ‘void fbgemm::Bfloat16ToFloat_avx512(const bfloat16*, float*, size_t)’:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:206:11: note: ‘__Y’ was declared here
206 | __m512i __Y = __Y;
| ^~~
cc1plus: all warnings being treated as errors
ninja: build stopped: subcommand failed.
```
Looking at the logs, the failure seems to stem from `BENCHMARK_ENABLE_WERROR`, which enables `-Werror` for the entire project rather than just for the GBench submodule.
cc @seemethere
| true
|
2,979,725,031
|
Fix inplacing with multiple, fused uses
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150845
`can_inplace` was defined in terms of a single use. When a buffer has multiple uses inside a fused node, we also need to check that the other accesses use the same index; otherwise in-placing can make a later read observe memory that the in-place write has already overwritten.
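A rough sketch of the extra check in Python; the names (`fused_node.reads_of`, `use.index`, `write_use`) are hypothetical and do not match Inductor's actual data structures:
```python
def can_inplace_in_fused_node(buffer, write_use, fused_node):
    # In-placing rewrites `buffer` at write_use.index. That is only safe if
    # every other access of `buffer` inside the fused node touches exactly
    # the same index; otherwise a later read could observe the
    # already-overwritten value.
    for use in fused_node.reads_of(buffer):
        if use is not write_use and use.index != write_use.index:
            return False
    return True
```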
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,979,528,308
|
[NVIDIA] Thor
|
johnnynunez
|
closed
|
[
"triaged",
"open source"
] | 8
|
CONTRIBUTOR
|
Thor is based on SBSA (Arm Server Base System Architecture).
@malfet @atalman
| true
|
2,979,438,761
|
[AOTInductor] Can't compile with a relative cache path for bert
|
ChuanqiXu9
|
open
|
[
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Reproducer:
```
import os
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.fc1 = torch.nn.Linear(10, 16)
self.relu = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(16, 1)
self.sigmoid = torch.nn.Sigmoid()
def forward(self, x):
x = self.fc1(x)
x = self.relu(x)
x = self.fc2(x)
x = self.sigmoid(x)
return x
with torch.no_grad():
device = "cuda" if torch.cuda.is_available() else "cpu"
model = Model().to(device=device)
example_inputs=(torch.randn(8, 10, device=device),)
batch_dim = torch.export.Dim("batch", min=1, max=1024)
# [Optional] Specify the first dimension of the input x as dynamic.
exported = torch.export.export(model, example_inputs, dynamic_shapes={"x": {0: batch_dim}})
# [Note] In this example we directly feed the exported module to aoti_compile_and_package.
# Depending on your use case, e.g. if your training platform and inference platform
# are different, you may choose to save the exported model using torch.export.save and
# then load it back using torch.export.load on your inference platform to run AOT compilation.
output_path = torch._inductor.aoti_compile_and_package(
exported,
# [Optional] Specify the generated shared library path. If not specified,
# the generated artifact is stored in your system temp directory.
package_path=os.path.join(os.getcwd(), "model.pt2"),
)
```
Compile it with:
```
TORCHINDUCTOR_CACHE_DIR=cache python bug.py
```
The compilation then fails because it cannot find the source file.
After some debugging, the cause turns out to be that cppbuilder assumes the source path is absolute while the given cache dir is a relative path. We can fix this either by making `cache_dir()` always return an absolute path, or by having cppbuilder convert a relative cache path to an absolute one.
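A minimal sketch of the second option, normalizing the path before cppbuilder uses it (the helper name is mine, not the actual patch):
```python
import os


def _absolute_cache_dir(cache_dir: str) -> str:
    # cppbuilder assumes the source path is absolute, so turn a relative
    # TORCHINDUCTOR_CACHE_DIR (e.g. "cache") into an absolute path up front.
    return cache_dir if os.path.isabs(cache_dir) else os.path.abspath(cache_dir)
```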
### Versions
trunk 17005992668f3f6e25761930e6514de435922b13
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
2,979,424,599
|
Exporting BertModel failed with marking batch as dynamic
|
ChuanqiXu9
|
closed
|
[
"oncall: pt2",
"module: dynamic shapes",
"oncall: export"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Reproducer:
```
import torch
import json
from transformers import BertModel, BertConfig
import os
CONFIG = """
{
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.6.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
"""
config = json.loads(CONFIG)
bloom_config = BertConfig(**config)
model = BertModel(bloom_config).half().cuda()
vocab_size = 30522
input_ids = torch.randint(0, vocab_size, (2, 3)).cuda()
attention_mask = torch.ones(2, 3).cuda()
example_inputs = (input_ids, attention_mask)
batch_dim = torch.export.Dim("batch", min = 2, max = 10)
s_dim = torch.export.Dim("s")
exported = torch.export.export(model, example_inputs,
dynamic_shapes = {"input_ids": {0: batch_dim, 1: torch.export.Dim.STATIC},
"attention_mask": {0: torch.export.Dim.STATIC,
1: torch.export.Dim.STATIC}})
```
The error message is:
```
torch._dynamo.exc.UserError: Constraints violated (batch)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of batch = L['args'][0][0].size()[0] in the specified range batch <= 10 are valid because batch was inferred to be a constant (2).
Suggested fixes:
batch = 2
```
This is confusing, since it is not clear how to apply the suggested fix (setting `batch = 2` would defeat the purpose of marking the batch dimension as dynamic). One likely cause and workaround is sketched below.
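For what it is worth, the constraint is presumably inferred because `attention_mask` keeps a static batch dimension while the model requires it to match the batch of `input_ids`. A sketch of a dynamic-shapes spec that ties the two batch dimensions together, reusing `model` and `example_inputs` from the repro above (an assumption about the cause, not verified here):
```python
batch_dim = torch.export.Dim("batch", min=2, max=10)
dynamic_shapes = {
    "input_ids": {0: batch_dim, 1: torch.export.Dim.STATIC},
    # Use the same Dim for attention_mask's batch so export does not have to
    # specialize it to the example value of 2.
    "attention_mask": {0: batch_dim, 1: torch.export.Dim.STATIC},
}
exported = torch.export.export(model, example_inputs, dynamic_shapes=dynamic_shapes)
```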
### Versions
trunk (17005992668f3f6e25761930e6514de435922b13)
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,979,388,641
|
Fix the Problems with Defining Static Variables in Inline Functions
|
pytorchbot
|
open
|
[
"oncall: distributed",
"oncall: jit",
"open source",
"release notes: cpp"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147095
Refer to https://github.com/pytorch/pytorch/issues/125465 for more information
- Remove unused header files
- Move the inline function that defines the static variable to .cc
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,979,278,506
|
gui-uv doesn't work with rocm
|
Krytern
|
closed
|
[
"module: rocm"
] | 0
|
NONE
|
EDIT: WRONG PROJECT!
| true
|
2,979,228,514
|
Code Clean: Use the new builtin functions provided by Python 3.8 and later
|
FFFrog
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"fx"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150839
Changes:
- reversed
- math.perm
- inspect.getfile
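For illustration, a sketch of what these builtins provide (hypothetical usage, not the actual call sites touched by this PR):
```python
import inspect
import math

# math.perm (Python 3.8+) instead of a hand-rolled falling-factorial product:
assert math.perm(10, 3) == 10 * 9 * 8

# reversed() works directly on dicts since Python 3.8:
last_key = next(reversed({"a": 1, "b": 2}))  # "b"

# inspect.getfile locates the source file that defines an object:
print(inspect.getfile(inspect))
```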
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,979,228,204
|
Code Clean: Remove specific bytecode support in dynamo for python3.8
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150839
* __->__ #150838
* #150834
Related bytecodes:
- CALL_FINALLY
- END_FINALLY
- POP_FINALLY
These bytecodes were removed in Python 3.9; refer to [this](https://github.com/python/cpython/blob/53908bd7905b849e110d2c6f4bce739bff037146/Misc/NEWS.d/3.9.0a2.rst) for more information. A quick `dis` check on a current interpreter is shown below.
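For illustration (output depends on the interpreter version):
```python
import dis


def f():
    try:
        return 1
    finally:
        print("cleanup")


# On Python >= 3.9 the disassembly contains no CALL_FINALLY / END_FINALLY /
# POP_FINALLY; the finally body is duplicated (and RERAISE is used) instead.
dis.dis(f)
```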
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,979,216,663
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_int8 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_int8&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40151959719).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_int8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs', keys=('aten::_foreach_abs', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int8], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int8], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int8], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int8], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int8], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int8], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int8], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int8], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int8], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int8], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int8], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int8], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int8], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int8], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int8], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int8], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int8], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int8], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int8], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int8]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_outplace_cuda_int8
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,979,097,872
|
`Aborted (core dumped)` in `torch.cuda.nccl.reduce`
|
vwrewsge
|
open
|
[
"oncall: distributed",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
When using `torch.cuda.nccl.reduce` with invalid operation codes, the program crashes with `Aborted (core dumped)` instead of raising a `RuntimeError` or validating the input.
# To Reproduce
```
import torch
import torch.cuda as cuda
from torch.cuda.nccl import reduce
def test_bug():
# Checking for unsupported operations
unsupported_ops = [0xFF, 0xAA] # Example of invalid NCCL operation codes
for op in unsupported_ops:
input_tensor = torch.tensor([1.0, 2.0, 3.0], device=f'cuda:0')
output_tensor = torch.zeros_like(input_tensor)
reduce(inputs=[input_tensor], output=output_tensor, root=0, op=op)
if __name__ == "__main__":
test_bug()
```
# Output
```
Aborted (core dumped)
```
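Until the binding validates `op` itself, a caller-side guard is one way to turn the crash into a Python exception. A sketch, assuming the ncclRedOp_t values 0 through 3 (SUM/PROD/MAX/MIN; newer NCCL versions also define AVG = 4):
```python
import torch.cuda.nccl as nccl

# ncclRedOp_t values (assumption: 0=SUM, 1=PROD, 2=MAX, 3=MIN)
_VALID_OPS = {0, 1, 2, 3}


def checked_reduce(inputs, output, root=0, op=0):
    # Reject op codes NCCL does not understand before handing them down, so
    # the failure surfaces as a Python exception instead of a core dump.
    if op not in _VALID_OPS:
        raise ValueError(f"unsupported NCCL reduction op: {op!r}")
    nccl.reduce(inputs, output=output, root=root, op=op)
```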
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,979,082,232
|
`Floating point exception` in `torch.nn.functional.ctc_loss`
|
vwrewsge
|
closed
|
[
"module: nn",
"module: cuda",
"triaged",
"module: edge cases"
] | 1
|
NONE
|
### 🐛 Describe the bug
When calling `ctc_loss` with empty tensors on CUDA (device="cuda"), a `Floating point exception` occurs. This does not happen on CPU, where a proper error is raised.
# To reproduce
```
import torch
import torch.nn.functional as F
device = "cuda" # "cpu" is fine
num_classes = 4
log_probs = torch.rand(0, 0, num_classes, device=device)
targets = torch.tensor([], device=device, dtype=torch.long)
input_lengths = torch.tensor([], device=device, dtype=torch.long)
target_lengths = torch.tensor([], device=device, dtype=torch.long)
result = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, reduction='none')
```
# Output
```
Floating point exception (core dumped)
```
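Until the CUDA path handles the empty case the way the CPU path does, a caller-side guard is a simple workaround (sketch; the wrapper name is hypothetical):
```python
import torch.nn.functional as F


def safe_ctc_loss(log_probs, targets, input_lengths, target_lengths, **kwargs):
    # Refuse empty batches up front, mirroring the proper error the CPU path
    # raises, instead of hitting the floating point exception on CUDA.
    if log_probs.numel() == 0 or input_lengths.numel() == 0:
        raise ValueError("ctc_loss: received an empty batch")
    return F.ctc_loss(log_probs, targets, input_lengths, target_lengths, **kwargs)
```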
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy
| true
|