| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string, nullable) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,970,505,614
|
[inductor][fix] enable dtype promotion for bucketize
|
eknag
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 18
|
CONTRIBUTOR
|
Summary:
Bucketization involves comparing an input against boundary values. Without careful consideration of dtypes, this can cause dangerous implicit casting.
aten.bucketize resolves this via dtype promotion. We enable dtype promotion for the inductor bucketization pass to keep it aligned with the aten op.
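As a rough illustration (my own minimal sketch, not code from this PR; the shapes and dtypes are assumed), the promotion matters whenever the input and boundary dtypes differ:
```python
import torch

# Hypothetical example: int64 values bucketized against float32 boundaries.
# aten.bucketize promotes both sides to a common dtype before comparing;
# the inductor lowering should now do the same.
boundaries = torch.tensor([0.5, 1.5, 2.5], dtype=torch.float32)
values = torch.tensor([0, 1, 2, 3], dtype=torch.int64)

def bucketize(values, boundaries):
    return torch.bucketize(values, boundaries)

eager = bucketize(values, boundaries)
compiled = torch.compile(bucketize)(values, boundaries)

# With promotion aligned, eager and compiled results should agree.
assert torch.equal(eager, compiled)
```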
Test Plan:
```
python3 test/inductor/test_torchinductor.py -k "bucketize"
```
Fixes #145929
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,970,491,481
|
update get start xpu document for v2.7
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
update get start xpu document for v2.7
| true
|
2,970,473,554
|
[ROCm] Expand workspace size for gfx95
|
jpvillam-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 8
|
CONTRIBUTOR
|
Use the same workspace size for gfx95* as for gfx94*.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,970,415,160
|
[dynamo] disable new test_assert_failure_in_generic_ctx_mgr internally
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150631
* #150471
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,970,403,509
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_int8 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_int8&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39919872843).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_int8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs_', keys=('aten::_foreach_abs_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int8], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int8], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int8], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int8], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int8], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int8], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int8], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int8], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int8], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int8], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int8], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int8], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int8], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int8], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int8], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int8], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int8], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int8], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int8], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int8]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_inplace_cuda_int8
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,970,312,779
|
torch.compile on MPS: rms_norm not invoked
|
manuelcandales
|
closed
|
[
"triaged",
"module: correctness (silent)",
"module: reductions",
"module: mps",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When torch.compile is forced to use the fused rms_norm MPS implementation, it does not actually invoke it.
One way to notice the bug is that the compiled model outputs zeros instead of the correct values:
```python
import torch
with torch.no_grad():
    x = torch.randn(1024, requires_grad=False, device="mps")
    model = torch.nn.RMSNorm(1024, device="mps")
    y1 = model(x)
    model = torch.compile(model)
    y2 = model(x)
    print(y1)
    print(y2)
```
Outputs something like:
```
tensor([ 1.7966, -0.1217, 0.0942, ..., 1.1045, -0.0254, 0.0899], device='mps:0')
tensor([0., 0., 0., ..., 0., 0., 0.], device='mps:0')
```
The reason is that the generated code looks like this:
```
def call(args):
    arg0_1, arg1_1 = args
    args.clear()
    assert_size_stride(arg0_1, (1024, ), (1, ))
    assert_size_stride(arg1_1, (1024, ), (1, ))
    with torch._ops.contextlib.nullcontext():
        # MPS set device
        buf0 = empty_strided((1024, ), (1, ), device='mps', dtype=torch.float32)
    return (buf0, )
```
Notice that there is no call to `aten.rms_norm`.
### Versions
PyTorch version: 2.8.0.dev20250330
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.1 | packaged by Anaconda, Inc. | (main, Jan 19 2024, 09:45:58) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.8.0.dev20250330
[pip3] torchaudio==2.6.0.dev20250330
[pip3] torchvision==0.18.0.dev20240223
[conda] numpy 1.26.4 py312h7f4fdc5_0
[conda] numpy-base 1.26.4 py312he047099_0
[conda] torch 2.8.0.dev20250330 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250330 pypi_0 pypi
[conda] torchvision 0.18.0.dev20240223 py312_cpu pytorch-nightly
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,970,197,455
|
Release 2.7.0 validations checklist and cherry-picks
|
atalman
|
closed
|
[
"oncall: releng",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Similar to https://github.com/pytorch/pytorch/issues/144503
We need to make sure that:
- [x] Validate Linux aarch64 CUDA builds with triton (Please note all CUDA Aarch64 builds were validated by Nvidia)
- [x] Python 3.13 and 3.13t wheel validate - https://github.com/pytorch/test-infra/actions/runs/14177205860/job/39714932351
- [x] Amazon Linux 2023 Test + torch.compile + no numpy installed: https://github.com/pytorch/test-infra/actions/runs/14177205860/job/39714879789
- [x] Validate Metadata section of wheels - make sure python versions are set
- [x] PyTorch 2.7.0 exposes statically linked libstdc++ CXX11 ABI symbols : https://github.com/pytorch/pytorch/issues/133437 @ZainRizvi
* Tested on macOS by running the command below and verifying there were no matches
* `(release2.7) ~/test/release2.7/.venv/lib/python3.12/site-packages/torch/lib nm -gU libtorch_cpu.dylib | grep "recursive_directory_iterator"`
- [x] CUDA
- [x] pypi binaries with slimmed dependencies are usable in standard Amazon Linux 2023 containers (regression in 1.13) - https://github.com/pytorch/test-infra/actions/runs/14177205860/job/39714879789
- [x] Check cuda 1.12.1 update issue: https://github.com/pytorch/pytorch/issues/94772 with small wheels . Passes on GPU but failing on CPU, new issue: https://github.com/pytorch/pytorch/issues/145801
- [x] `torch.compile`
- [x] Basic test works (for example see test mentioned in https://github.com/openai/triton/pull/1176 ) in PyTorch docker container
- [x] `torch.compile` raises an error if used on Windows. Test (part of torchvision): https://github.com/pytorch/test-infra/actions/runs/14182325015/job/39731076931#step:9:447
- [x] `torch.compile` works on 3.13 : Test: https://github.com/pytorch/test-infra/actions/runs/14315674885/job/40121143490#step:15:3483
- [x] `torch.compile` raises error on 3.13t: Validated : ``RuntimeError: torch.compile is not supported on Python built with GIL disabled``
- MPS
- [x] Resnet is usable out of the box (https://github.com/pytorch/test-infra/actions/runs/14315674885/job/40121143490#step:15:3469)
- Is torchvision usable? True. German shepherd (cpu): 37.6%, German shepherd (mps): 34.1%
- [x] Validate docker release builds
Issues/Milestone validation
- [x] https://github.com/pytorch/pytorch/pull/150203
- [x] https://github.com/pytorch/pytorch/pull/149926 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/149866 @ZainRizvi
- [x] https://github.com/pytorch/pytorch/issues/149829 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/149806 @atalman
- [ ] https://github.com/pytorch/pytorch/issues/149550 @AlannaBurke
- [x] https://github.com/pytorch/pytorch/pull/149505 @ZainRizvi
- [x] https://github.com/pytorch/pytorch/pull/149473 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/149425 @ZainRizvi
- [x] https://github.com/pytorch/pytorch/pull/149351 @atalman
- [x] https://github.com/pytorch/pytorch/pull/149208 @janeyx99
- [x] https://github.com/pytorch/pytorch/issues/149153 @atalman
- [x] https://github.com/pytorch/pytorch/issues/149132 @ZainRizvi
- [x] https://github.com/pytorch/pytorch/pull/149057 @angelayi
- [x] https://github.com/pytorch/pytorch/pull/149033 @atalman
- [x] https://github.com/pytorch/pytorch/pull/149001
- [x] https://github.com/pytorch/pytorch/pull/148640 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/148603 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/pull/148453 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/pull/148403 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/148388 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/pull/148245 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/148156 @ZainRizvi
- [x] https://github.com/pytorch/pytorch/issues/148120 @atalman
- [x] https://github.com/pytorch/pytorch/pull/148081 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/147889 @ZainRizvi
- [x] https://github.com/pytorch/pytorch/issues/147857 @atalman
- [x] https://github.com/pytorch/pytorch/pull/147835 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/147614 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/146977 @atalman
- [x] https://github.com/pytorch/pytorch/issues/146792 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/issues/146679 @atalman
- [x] https://github.com/pytorch/pytorch/issues/146469 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/issues/145897 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/issues/145571 @atalman
- [x] https://github.com/pytorch/pytorch/issues/145225 @atalman
- [x] https://github.com/pytorch/pytorch/issues/144768 @atalman
- [x] https://github.com/pytorch/pytorch/issues/144567 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/144382 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/issues/143477 @ZainRizvi
- [x] https://github.com/pytorch/pytorch/pull/140677 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/139497 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/pull/137570 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/137296 @xadupre @titaiwangms
- [x] https://github.com/pytorch/pytorch/pull/136753 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/135465 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/135337 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/130558 @atalman
- [x] https://github.com/pytorch/pytorch/issues/150516 @zou3519
- [x] https://github.com/pytorch/pytorch/pull/151257 @atalman
### Versions
2.7.0
| true
|
2,970,181,890
|
Make error message descriptive
|
sibuachu
|
open
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (sharded)"
] | 5
|
NONE
|
Summary: Adding the number of local shards to error messages makes it easier to debug.
Test Plan: UT
Differential Revision: D72396478
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,970,135,079
|
Refactor `torch/utils/data/datapipes/gen_pyi.py` with `torchgen`
|
XuehaiPan
|
open
|
[
"open source",
"topic: not user facing",
"suppress-bc-linter"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150732
* #150731
* #150730
* __->__ #150626
* #150729
* #150728
* #150727
* #150726
| true
|
2,970,102,392
|
[cuda] Add new faster gammabeta backward kernel (#148605) (Reapply with launch bounds)
|
ahmadsharif1
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: nn",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
# Changes over the previous PR
This reverts commit 61a1f09 and adds `__launch_bounds__` to the kernel.
Previously I merged 114d404 that did not work on Blackwell because it consumed too many registers. It got reverted in 61a1f09. For more context see: https://github.com/pytorch/pytorch/issues/150266.
This PR reverts the revert (i.e. reapplies the original diff), with one additional line with `__launch_bounds__` added:
```
git diff HEAD^
diff --git a/aten/src/ATen/native/cuda/layer_norm_kernel.cu b/aten/src/ATen/native/cuda/layer_norm_kernel.cu
index 0d63a2f979c..3ce2c24c18e 100644
--- a/aten/src/ATen/native/cuda/layer_norm_kernel.cu
+++ b/aten/src/ATen/native/cuda/layer_norm_kernel.cu
@@ -657,6 +657,7 @@ bool aligned_grid
>
__global__
void
+__launch_bounds__(block_dim_x * block_dim_y)
GammaBetaBackwardCUDAKernelTemplate(
int64_t M,
int64_t N,
```
I managed to get a Blackwell machine and verified that the fix works, using this repro that I got from @drisspg:
<details>
<summary> Repro script that fails on Blackwell </summary>
```
import torch
from torch.nn import init
# from transformer_nuggets import init_logging
# from transformer_nuggets.utils.benchmark import profiler
# from pathlib import Path
# init_logging()
class PermuteModule(torch.nn.Module):
    def __init__(self, permutation):
        super(PermuteModule, self).__init__()
        self.permutation = permutation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        assert len(x.shape) == len(self.permutation), f"Dimension mismatch! Unable to permute {len(x.shape)} dim input with a {len(self.permutation)} dim permutation!"
        return x.permute(*self.permutation)

def test(n_layers: int, conv_stride: int):
    _sequence = []
    for _ in range(n_layers):
        # Conv1d inputs are (N x C x L), LayerNorm expects (* x C). Dims must be permuted between modules.
        _sequence += [
            PermuteModule((0, 2, 1)),
            torch.nn.Conv1d(in_channels=512, out_channels=512, groups=1, kernel_size=9, dilation=1, stride=conv_stride, padding=0, bias=False),
            PermuteModule((0, 2, 1)),
            torch.nn.LayerNorm(512),
            torch.nn.ReLU()
        ]
    model = torch.nn.Sequential(*_sequence).to(device="cuda")
    data = torch.randn((100, 2048, 512), device="cuda")
    out = model(data)
    loss = torch.nn.functional.mse_loss(out, torch.rand_like(out))
    loss.backward()

torch.autograd.set_detect_anomaly(True)
print(f"Torch version: {torch.__version__}")
# with profiler(Path("conv")):
#     # print(f"layers=1, stride=1")
#     # test(n_layers=1, conv_stride=1)
#     # print(f"layers=2, stride=1")
#     # test(n_layers=2, conv_stride=1)
#     # print(f"layers=1, stride=2")
#     # test(n_layers=1, conv_stride=2)
#     print(f"layers=2, stride=2")
#     test(n_layers=2, conv_stride=2)
print(f"layers=2, stride=2")
test(n_layers=2, conv_stride=2)
# we will not reach this print statement.
print("DONE.")
```
</details>
I also re-ran my performance benchmark and found no regressions over the previous PR.
# Full description of the old PR
Original PR: https://github.com/pytorch/pytorch/pull/148605
This PR adds a new kernel for producing gamma and beta values for the backward pass in a performant way.
To test the performance against the baseline, I measured the backward pass of layernorm while sweeping over the following variables:
1. dtype in {half, float}
2. M in `2**k, 2**k - 1, 2**k + 1 for k in range(...)`
3. N in `2**k, 2**k - 1, 2**k + 1 for k in range(...)`
4. Whether we flush the L2 cache before running the backward pass
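A minimal sketch of how one point of such a sweep could be timed (my own illustration with assumed iteration counts, and without the L2-cache flush; the PR's actual harness is not shown here):
```python
import torch

def bench_layernorm_backward(M, N, dtype, iters=50):
    # Time the LayerNorm backward pass for one (M, N, dtype) point.
    x = torch.randn(M, N, device="cuda", dtype=dtype, requires_grad=True)
    ln = torch.nn.LayerNorm(N, device="cuda", dtype=dtype)
    grad_out = torch.randn(M, N, device="cuda", dtype=dtype)
    y = ln(x)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        y.backward(grad_out, retain_graph=True)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per backward pass

for k in range(5, 12):
    for M in (2**k - 1, 2**k, 2**k + 1):
        print(M, 2048, bench_layernorm_backward(M, 2048, torch.half))
```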
Summary: The new code performs better than the old code, especially for powers of 2. For the M >> N case, it performs very well (the kernel itself can be 30x faster and the overall backward pass can be 5-10x faster).
In order to visualize results of the kernel when choosing different values of M, N and dtype, I wrote some code to generate a heatmap. The heatmap has N on the x-axis, M on the y-axis and color-coded points where green shows performance improvement and red shows regressions. For example, `m=32 n=2048 1.42x` in the heatmap would indicate the normalized shape had 32 elements. The leading dimensions' product was 2048 elements and the new kernel resulted in the *backward pass* being 1.42x faster than the old *backward pass*.
Important note: This heatmap shows the total backward pass time as seen by the user. The kernel time difference can be sometimes very large while the total backward pass time is not that high. For example, for dtype=torch.half, M=32 N=2048, flush_l2_cache=True case, the heatmap shows a speedup of 1.42x, while ncu tells me the new kernel is 2.5x faster than the old:
M=32 N=2048 dtype=half flush_l2=True Old Kernel NCU summary:
```
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 1.59
SM Frequency Ghz 1.35
Elapsed Cycles cycle 27,526
Memory Throughput % 2.21
DRAM Throughput % 0.54
Duration us 20.42
L1/TEX Cache Throughput % 4.31
L2 Cache Throughput % 2.62
SM Active Cycles cycle 1,475.02
Compute (SM) Throughput % 0.29
----------------------- ----------- ------------
```
M=32 N=2048 dtype=half flush_l2=True New Kernel NCU summary:
```
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 1.59
SM Frequency Ghz 1.34
Elapsed Cycles cycle 10,920
Memory Throughput % 5.64
DRAM Throughput % 1.35
Duration us 8.13
L1/TEX Cache Throughput % 1.92
L2 Cache Throughput % 6.89
SM Active Cycles cycle 3,554.41
Compute (SM) Throughput % 0.67
----------------------- ----------- ------------
```
Let's look at some rows from the heatmap. For dtype=float16 flush_l2_cache=True and when input shapes are powers of 2, we get the following:
<img width="1508" alt="image" src="https://github.com/user-attachments/assets/06179599-b2f0-4a45-8664-247a1067950b" />
There are 3 columns -- the first shows all data points, the second shows speedups only and the 3rd column shows regressions only. We can see that there are dramatic speedups for M >> N cases and the regressions are not that high (less than 1%, which could just be measurement noise). Here is a small guide I made:

For dtype=float32, we get a similar chart:
<img width="1499" alt="image" src="https://github.com/user-attachments/assets/c4d31a76-03b0-426c-9114-e1bfad29b530" />
The new code performs especially well for m >> n cases, and also where m and n are small. The m >> n case is special because we run 2 reduction kernels back to back and parallelize in the "M" dimension (the older kernel only parallelized in the "N" dimension).
The new code can sometimes have regressions for non-powers of 2. That is because the old code was using block sizes of {16, 32} while we have `threads.x = 32`. For example when N=33, the old code would have 3 blocks and we will have 2 blocks. I wrote some code to specialize for this case, but I think it will add complexity and @ngimel mentioned that non-powers of 2 are rare enough.
I am including the regressions here for completeness' sake:
<img width="1500" alt="image" src="https://github.com/user-attachments/assets/31c17cfb-ed9b-4106-b9c8-5c359751f530" />
To see this better:
1. Click the image
2. Right click the expanded image and open in a new tab
3. Go to that tab and left click once to zoom in
If you want to see the full data, here it is:

I also measured binary size and compile time since those are important for developers:
Binary size comparison

```
# Original
-rwxr-xr-x 1 ahmads users 307193112 Mar 6 08:46 ./torch/lib/libtorch_cuda.so
# This PR
-rwxr-xr-x 1 ahmads users 307193112 Mar 6 08:46 ./torch/lib/libtorch_cuda.so
```
The diff in bytes is 302kB which is about a 0.1% increase.
Compile time difference:
```
# Original
real 0m10.931s
user 0m9.676s
sys 0m1.004s
# this PR
real 0m16.720s
user 0m15.514s
sys 0m1.066s
# Command I ran
time /usr/local/cuda/bin/nvcc -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DFLASHATTENTION_DISABLE_ALIBI -DFLASHATTENTION_DISABLE_SOFTCAP -DFLASH_NAMESPACE=pytorch_flash -DFMT_HEADER_ONLY=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DTORCH_CUDA_USE_NVTX3 -DUNFUSE_FMA -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_CUDA -DUSE_CUFILE -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_FLASH_ATTENTION -DUSE_MEM_EFF_ATTENTION -DUSE_NCCL -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cuda_EXPORTS -I/home/ahmads/personal/pytorch/build/aten/src -I/home/ahmads/personal/pytorch/aten/src -I/home/ahmads/personal/pytorch/build -I/home/ahmads/personal/pytorch -I/home/ahmads/personal/pytorch/cmake/../third_party/benchmark/include -I/home/ahmads/personal/pytorch/third_party/onnx -I/home/ahmads/personal/pytorch/build/third_party/onnx -I/home/ahmads/personal/pytorch/nlohmann -I/home/ahmads/personal/pytorch/third_party/flash-attention/csrc/flash_attn/src -I/home/ahmads/personal/pytorch/aten/src/THC -I/home/ahmads/personal/pytorch/aten/src/ATen/cuda -I/home/ahmads/personal/pytorch/third_party/fmt/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/tools/util/include -I/home/ahmads/personal/pytorch/build/caffe2/aten/src -I/home/ahmads/personal/pytorch/aten/src/ATen/.. -I/home/ahmads/personal/pytorch/build/nccl/include -I/home/ahmads/personal/pytorch/c10/cuda/../.. -I/home/ahmads/personal/pytorch/c10/.. -I/home/ahmads/personal/pytorch/third_party/tensorpipe -I/home/ahmads/personal/pytorch/build/third_party/tensorpipe -I/home/ahmads/personal/pytorch/third_party/tensorpipe/third_party/libnop/include -I/home/ahmads/personal/pytorch/torch/csrc/api -I/home/ahmads/personal/pytorch/torch/csrc/api/include -isystem /home/ahmads/personal/pytorch/build/third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/ahmads/personal/pytorch/third_party/protobuf/src -isystem /home/ahmads/personal/pytorch/third_party/XNNPACK/include -isystem /home/ahmads/personal/pytorch/third_party/ittapi/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/ahmads/personal/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /home/ahmads/personal/pytorch/third_party/ideep/include -isystem /home/ahmads/personal/pytorch/INTERFACE -isystem /home/ahmads/personal/pytorch/third_party/nlohmann/include -isystem /home/ahmads/personal/pytorch/third_party/NVTX/c/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/cudnn_frontend/include -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -D_GLIBCXX_USE_CXX11_ABI=1 -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_90,code=sm_90 -Xcudafe 
--diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -O3 -DNDEBUG -std=c++17 -Xcompiler=-fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Xcompiler -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -MD -MT caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o -MF caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o.d -x cu -c /home/ahmads/personal/pytorch/aten/src/ATen/native/cuda/layer_norm_kernel.cu -o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o
```
So the new PR adds about 6 seconds of compile time.
| true
|
2,970,051,270
|
DISABLED test_special_polygamma_cpu_halide (__main__.HalideCpuTests)
|
clee2000
|
open
|
[
"triaged",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
Platforms: linux
Example
https://hud.pytorch.org/pytorch/pytorch/commit/70b34a42c17cecd316487dc574dce3b8121270cc#39929315792-box
Broke at some point while the halide build was failing due to cmake issues, so I don't know when it started.
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_halide.py%3A%3AHalideCpuTests%3A%3Atest_special_polygamma_cpu_halide%22%5D)).
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,970,050,448
|
Division by zero in ONNX export with `dynamo=True` leading to NaN outputs
|
novikov-alexander
|
open
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
### Description
When converting [3DMOTFormer](https://github.com/dsx0511/3DMOTFormer) to ONNX using `torch.onnx.export`:
- With `dynamo=False`: Conversion succeeds and model works correctly
- With `dynamo=True`:
- Conversion succeeds but produces invalid ONNX models when input has many tracks (`tracks_in > 200`)
- ONNX Runtime (1.21.0) produces NaN outputs due to division by zero in Div operations
- Manually adding epsilon (1e-6) to divisors in the ONNX model fixes the issue
### Suspected Root Cause
The issue appears to stem from numerical precision/rounding errors in the self-attention computation, specifically:
- Occurs in `propagate` function from torch_geometric (after `collect`)
- Likely related to how Dynamo optimizes/transforms the computation graph
### Reproduction Steps
1. Convert model with `dynamo=True`
2. Find input data with large `tracks_in` absolute values (>200)
3. Run through ONNX Runtime - observe NaN outputs
4. Either:
- Reduce input magnitude, or
- Convert with `dynamo=False`
5. Observe correct outputs
### Workaround
Manually adding a small epsilon (1e-6) to the divisors in the generated ONNX model prevents NaN outputs.
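For reference, a rough sketch of that workaround (my own, with hypothetical file names; it assumes float32 divisors and uses plain ONNX graph surgery, not anything provided by the exporter):
```python
import numpy as np
import onnx
from onnx import helper, numpy_helper

model = onnx.load("model_dynamo.onnx")  # hypothetical path to the exported model
graph = model.graph

# Register a scalar epsilon initializer (assumes the divisors are float32).
eps_name = "div_workaround_eps"
graph.initializer.append(numpy_helper.from_array(np.array(1e-6, dtype=np.float32), eps_name))

patched_nodes = []
for node in graph.node:
    if node.op_type == "Div":
        safe_divisor = node.input[1] + "_plus_eps"
        # Shift the divisor by eps, then divide by the shifted value instead.
        patched_nodes.append(helper.make_node("Add", [node.input[1], eps_name], [safe_divisor]))
        node.input[1] = safe_divisor
    patched_nodes.append(node)

del graph.node[:]
graph.node.extend(patched_nodes)
onnx.save(model, "model_dynamo_patched.onnx")
```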
### Expected Behavior
Model should convert and run correctly with `dynamo=True` regardless of input values.
### Additional Notes
- Reproduction is challenging due to data requirements
- Issue suggests Dynamo's graph transformations may be introducing numerical instability
- Might be related to how attention scores are normalized when processing inputs with large absolute values
### Versions
### Environment
- PyTorch: 2.6.0 (required by torch_geometric 2.4.0 dependency)
- torch_geometric: 2.4.0
- ONNX Runtime: 1.21.0
- Model: 3DMOTFormer
| true
|
2,969,997,018
|
`torch.compile` creates a CUDA context even for CPU based code
|
antoinebrl
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hello 👋! I attempted to use `torch.compile` on a simple code snippet intended for CPU execution in a multi-processing environment. However, I noticed that `torch.compile` allocates GPU memory whenever CUDA is available, even if the execution is strictly on the CPU. When used with multi-processing, each process creates its own context, which quickly adds up and causes an out-of-memory (OOM) error on the GPU device. In our situation, each context is 400MB, and with 50 processes, this results in 20GB being used on our A100 40GB GPU.
The only solution that works for us is to set `CUDA_VISIBLE_DEVICES=""`. Is there another way to disable CUDA? Is there an option to restrict `torch.compile` so that it does not register the CUDA device interface?
Here is the code snippet to reproduce this:
```python
import time
import torch
from torch import nn
def main():
    # import os
    # os.environ["CUDA_VISIBLE_DEVICES"] = ""
    print(torch.cuda.is_available())
    input = torch.rand((2, 16), device="cpu")
    layer = nn.Linear(16, 16, device="cpu")
    layer = torch.compile(layer)
    print(layer(input).shape)
    time.sleep(10)

if __name__ == "__main__":
    import torch.multiprocessing as mp

    mp.set_start_method("spawn")
    processes = []
    for i in range(4):
        process = mp.Process(target=main)
        processes.append(process)
        process.start()
    for process in processes:
        process.join()
```
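For what it's worth, here is a small single-process diagnostic sketch (mine, not from the report) to confirm whether a CPU-only compile initializes the CUDA runtime:
```python
import torch
from torch import nn

# CUDA should not be initialized before any GPU work is requested.
print("before compile:", torch.cuda.is_initialized())  # expected: False

layer = torch.compile(nn.Linear(16, 16, device="cpu"))
layer(torch.rand((2, 16), device="cpu"))

# If torch.compile eagerly registers the CUDA device interface, this may
# already report True even though no tensor ever touched the GPU.
print("after compile:", torch.cuda.is_initialized())
```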
### Error logs
_No response_
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Mar 11 2025, 17:27:26) [Clang 20.1.0 ] (64-bit runtime)
Python platform: Linux-5.10.233-224.894.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.144.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 7
BogoMIPS: 4999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 24 MiB (24 instances)
L3 cache: 35.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,969,818,076
|
2.8.0 Nightly - "Feature 'cvt with .bf16.f16' requires .target sm_90 or higher"
|
scottmudge
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"upstream triton"
] | 2
|
NONE
|
### 🐛 Describe the bug
Versions of 2.8.0 nightly **after** around ~2.8.0.dev20250326+cu128 are causing this issue during torch compile (inductor):
```
E0403 10:21:19.865000 339 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] ptxas /tmp/tmpg_4sx8hs.ptx, line 705; error : Feature 'cvt with .bf16.f16' requires .target sm_90 or higher
```
2.8.0 nightly versions at or below 2.8.0.dev20250326+cu128 do not exhibit this issue. I'm compiling on a system with sm_89 (RTX 4000 series).
### Error logs
```
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] Triton compilation failed: Placeholder.DESCRIPTIVE_NAME
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] def triton_(in_ptr0, arg_A, in_ptr2, out_ptr0):
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] GROUP_M : tl.constexpr = 8
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] EVEN_K : tl.constexpr = True
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] ALLOW_TF32 : tl.constexpr = False
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] ACC_TYPE : tl.constexpr = tl.float32
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] BLOCK_M : tl.constexpr = 128
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] BLOCK_N : tl.constexpr = 128
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] BLOCK_K : tl.constexpr = 32
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] A = arg_A
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] M = 44640
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] N = 5120
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] K = 5120
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] if M * N == 0:
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] # early exit due to zero-size input(s)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] return
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] stride_am = 5120
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] stride_ak = 1
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] stride_bk = 1
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] stride_bn = 5120
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] # based on triton.ops.matmul
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] pid = tl.program_id(0)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] grid_m = (M + BLOCK_M - 1) // BLOCK_M
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] grid_n = (N + BLOCK_N - 1) // BLOCK_N
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] # re-order program ID for better L2 performance
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] width = GROUP_M * grid_n
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] group_id = pid // width
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] pid_m = group_id * GROUP_M + (pid % group_size)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] pid_n = (pid % width) // (group_size)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] tl.assume(pid_m >= 0)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] tl.assume(pid_n >= 0)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] if ((stride_am == 1 and stride_ak == M) or (stride_am == K and stride_ak == 1)) and M >= BLOCK_M:
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] offs_a_m = tl.max_contiguous(tl.multiple_of(rm % M, BLOCK_M), BLOCK_M)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] else:
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] offs_a_m = rm % M
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] if ((stride_bk == 1 and stride_bn == K) or (stride_bk == N and stride_bn == 1)) and N >= BLOCK_N:
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] offs_b_n = tl.max_contiguous(tl.multiple_of(rn % N, BLOCK_N), BLOCK_N)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] else:
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] offs_b_n = rn % N
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] offs_k = tl.arange(0, BLOCK_K)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=ACC_TYPE)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] for k_idx in range(0, tl.cdiv(K, BLOCK_K)):
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] a_k_idx_vals = offs_k[None, :] + (k_idx * BLOCK_K)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] b_k_idx_vals = offs_k[:, None] + (k_idx * BLOCK_K)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] idx_m = offs_a_m[:, None]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] idx_n = a_k_idx_vals
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] xindex = idx_n + 5120*idx_m
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] a = tl.load(A + (xindex))
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] idx_m = b_k_idx_vals
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] idx_n = offs_b_n[None, :]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] xindex = idx_n + 5120*idx_m
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] tmp2 = tl.load(in_ptr2 + (tl.broadcast_to(xindex, xindex.shape)), None, eviction_policy='evict_last')
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] tmp3 = tmp2.to(tl.bfloat16)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] b = tmp3.broadcast_to(xindex.shape)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] acc += tl.dot(a, b, allow_tf32=ALLOW_TF32)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] # rematerialize rm and rn to save registers
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] idx_m = rm[:, None]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] idx_n = rn[None, :]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] mask = (idx_m < M) & (idx_n < N)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] # inductor generates a suffix
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] xindex = idx_n + 5120*idx_m
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] tmp0 = tl.load(in_ptr0 + (tl.broadcast_to(idx_n, acc.shape)), mask, eviction_policy='evict_last').to(tl.float32)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] tmp1 = acc + tmp0
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] tl.store(out_ptr0 + (tl.broadcast_to(xindex, acc.shape)), tmp1, mask)
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617]
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] metadata: {'signature': {'in_ptr0': '*bf16', 'arg_A': '*bf16', 'in_ptr2': '*fp8e4nv', 'out_ptr0': '*bf16'}, 'device': 0, 'constants': {}, 'configs': [{(0,): [['tt.divisibility', 16]], (1,): [['tt.divisibility', 16]], (2,): [['tt.divisibility', 16]], (3,): [['tt.divisibility', 16]]}], 'device_type': 'cuda', 'num_warps': 4, 'num_stages': 3, 'debug': True, 'cc': 89}
E0403 10:21:22.723000 353 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] Traceback (most recent call last):
...
E0403 10:21:29.397000 359 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] ptxas /tmp/tmp4fpx1z78.ptx, line 707; error : Feature 'cvt with .bf16.f16' requires .target sm_90 or higher
E0403 10:21:29.397000 359 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] ptxas /tmp/tmp4fpx1z78.ptx, line 708; error : Feature 'cvt with .bf16.f16' requires .target sm_90 or higher
E0403 10:21:29.397000 359 venv/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py:617] ptxas /tmp/tmp4fpx1z78.ptx, line 721; error : Feature 'cvt with .bf16.f16' requires .target sm_90 or higher
```
### Versions
Started seeing the issue **after** versions ~2.8.0.dev20250326+cu128 (most recently seen with 2.8.0.dev20250403+cu128).
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10
| true
|
2,969,686,747
|
Update expected results for pr_time_benchmarks
|
atalman
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Follow-up after revert https://github.com/pytorch/pytorch/pull/150572: expected test results need to be updated.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,969,638,978
|
README: anaconda license violation / no longer recommend anaconda since it's no longer free to use
|
morotti
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
hello,
I was going over the documentation to build pytorch from source.
Unfortunately, the first thing that comes up is that you strongly recommend using anaconda, which shouldn't be used because it's no longer free.
Could you please remove that from the doc?
I don't know if you are aware but anaconda is no longer free.
They changed their terms of service in 2020 to restrict commercial usage.
They changed their terms of service in 2024 to forbid downloading anaconda and forbid education and non-profit usage too.
The download is open and doesn't require any registration, but if you download anaconda they will sue you ^^
They have been raining lawsuits on users since last year. You may have heard about Anaconda vs Intel in the news. They have filed another 5 or so in the last few months.
https://www.reuters.com/legal/litigation/intel-sued-copyright-infringement-over-ai-software-2024-08-09/
You may need to adjust more docs and your build system. The free-to-use alternative is miniforge with the conda-forge channel.
| true
|
2,969,576,332
|
`TensorBase.type()` may forget some features of previous Tensor
|
ILCSFNO
|
closed
|
[
"module: docs",
"triaged",
"actionable",
"module: tensor creation"
] | 8
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I found that `.type()` can forget some features of the previous Tensor, specifically `requires_grad`!
See the repro below; it runs without error!
### Repro 1
```python
import torch
window_length = 10
window1 = torch.bartlett_window(window_length, requires_grad=True)
window2 = window1.type(torch.long)
window_np = window2.numpy()
print(window1)
print(window2)
print(window_np)
```
### Output 1
```text
tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000, 0.8000, 0.6000, 0.4000,
0.2000], requires_grad=True)
tensor([0, 0, 0, 0, 0, 1, 0, 0, 0, 0])
[0 0 0 0 0 1 0 0 0 0]
```
As is well known, we can't call `numpy()` on a Tensor that requires grad; see the repro below (where the `.type()` step is removed):
### Repro 2
```python
import torch
window_length = 10
window1 = torch.bartlett_window(window_length, requires_grad=True)
window_np = window1.numpy()
print(window1)
print(window_np)
```
### Output 2
```txt
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
```
First, I accept that `Repro 2` should raise an error, but after we use `.type()`, as shown in `Repro 1`, why does it still run? The expected behavior is to raise a RuntimeError just like `Repro 2`. The only explanation is that `window2` no longer requires grad!
This shows that `.type()` can forget some information of the Tensor, in this case `requires_grad`.
Also, from `Output 1`, we can see that `window2` loses the `requires_grad` of `window1`, even though the only difference is that `window2` went through the `.type()` op.
Before we discuss further, I will first show where `.type()` comes from:
* The func `torch.bartlett_window` returns a Tensor, whose class is defined here and inherits from `torch._C.TensorBase`:
https://github.com/pytorch/pytorch/blob/e0d19cf6ccb698e1c6081f5f18f555c972fbd9b4/torch/_tensor.py#L102
* torch._C.TensorBase is defined here, and `.type()` comes from `${tensor_method_hints}`:
https://github.com/pytorch/pytorch/blob/6fa1b171955716002129b2155c79e56e8d9bdf08/torch/_C/__init__.pyi.in#L1773-L1811
From these, we can see that:
* The `.type()` func belongs to `TensorBase`, that is, `.type()` above can be called as `TensorBase.type()`
* `TensorBase` actually has a `requires_grad` attribute itself, so this does not involve any issues that may arise from subclasses
So it shouldn't forget `requires_grad`! Though I haven't yet successfully tracked down the C++ code that implements `.type()`.
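For what it's worth, a small check of my own (not from the docs) suggests the target dtype is the deciding factor: a floating-point cast keeps the tensor in the autograd graph, while an integer cast cannot carry `requires_grad`:
```python
import torch

w = torch.bartlett_window(10, requires_grad=True)

# Floating-point cast: differentiable, the result stays in the autograd graph.
print(w.type(torch.double).requires_grad)  # True

# Integer cast: integer tensors cannot require grad, so the flag is dropped.
print(w.type(torch.long).requires_grad)    # False
```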
Thanks for noting.
### Versions
Nightly
cc @svekars @sekyondaMeta @AlannaBurke @gchanan @mruberry
| true
|
2,969,494,686
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_int64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_int64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39905713953).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_int64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,969,442,328
|
Why fill dtype of Tensor with torch.tensortype can work well
|
ILCSFNO
|
closed
|
[
"triaged",
"module: python frontend"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Here is a normal usage of `TensorBase.type()` on the output of `torch.bartlett_window`:
### Repro 1
```python
import torch
window_length = 10
window1 = torch.bartlett_window(window_length, requires_grad=True)
window2 = window1.type(torch.long)
print(window1)
print(window2)
```
### Output 1
```txt
tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000, 0.8000, 0.6000, 0.4000,
0.2000], requires_grad=True)
tensor([0, 0, 0, 0, 0, 1, 0, 0, 0, 0])
```
Here is a usage that I thought should raise an error:
### Repro 2
```python
import torch
window_length = 10
window1 = torch.bartlett_window(window_length, requires_grad=True)
window2 = window1.type(torch.LongTensor)
print(window1)
print(window2)
```
### Output 2 (the same to Output 1)
```txt
tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000, 0.8000, 0.6000, 0.4000,
0.2000], requires_grad=True)
tensor([0, 0, 0, 0, 0, 1, 0, 0, 0, 0])
```
There are two reasons why I thought it should raise an error:
* First, `torch.LongTensor` is a `torch.tensortype` (a tensor type object), not a `torch.dtype`; they are different kinds of objects.
* Second, `bartlett_window` expects floating point dtypes, see `Repro 3` and `Repro 4` below:
### Repro 3
```python
import torch
window_length = 10
window1 = torch.bartlett_window(window_length, dtype=torch.float, requires_grad=True)
print(window1)
```
### Output 3
```txt
tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000, 0.8000, 0.6000, 0.4000,
0.2000], requires_grad=True)
```
### Repro 4
```python
import torch
window_length = 10
window1 = torch.bartlett_window(window_length, dtype=torch.long, requires_grad=True)
print(window1)
```
### Output 4
```txt
RuntimeError: bartlett_window expects floating point dtypes, got: TensorOptions(dtype=long int, device=cpu, layout=Strided (default), requires_grad=false (default), pinned_memory=false, memory_format=(nullopt))
```
From these, we see that `torch.long` is not allowed in `torch.bartlett_window`, yet the same conversion can still be reached through `.type()`. Why?
Those are the reasons why I thought `Repro 2` should raise an error.
In summary, I want to express:
* The type passed to `TensorBase.type()` should describe the data level (`torch.dtype`), not the object level, yet `Repro 2` accepts a tensor type object.
* `torch.bartlett_window` expects floating point dtypes, yet `Repro 2` converts its result to an integer type without complaint.
I wonder whether these are expected behaviors, i.e. whether `Repro 2` should behave the same as `Repro 1`.
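For reference, a small sketch of my own check (not an authoritative statement of the intended API) showing that the two argument forms are different kinds of objects but are both accepted by `Tensor.type()`, which is why the two repros behave identically:
```python
import torch

x = torch.bartlett_window(10, requires_grad=True)

print(isinstance(torch.long, torch.dtype))        # True  (a dtype object)
print(isinstance(torch.LongTensor, torch.dtype))  # False (a legacy tensor type)

# Tensor.type() accepts a dtype, a tensor type, or even its string name.
print(x.type(torch.long).dtype)          # torch.int64
print(x.type(torch.LongTensor).dtype)    # torch.int64
print(x.type("torch.LongTensor").dtype)  # torch.int64
```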
Thanks for noting.
### Versions
Nightly
cc @albanD
| true
|
2,969,391,508
|
DDP and multi-GPU related issue
|
WenHuiShen-Bio
|
closed
|
[] | 0
|
NONE
|
I am working on graph similarity prediction using the SimGNN model. Since SimGNN requires input as pairs of graphs, I cannot use PyTorch's DataLoader to batch multiple graphs together efficiently. As a result, my GPU utilization is only around 10% per GPU, and I am using 4 GPUs for multi-GPU training.
To improve GPU utilization, I attempted to run multiple processes in parallel on each GPU, so that each GPU could train multiple pairs of graphs at the same time. However, I encountered the following issues:
- I tried using PyTorch's multiprocessing, but due to the DDP (Distributed Data Parallel) environment, each process cannot properly communicate during backpropagation.
- Running multiple processes on each GPU seems to conflict with PyTorch’s DDP, preventing inter-process communication across GPUs.
**My Goal:**
I want to launch multiple processes per GPU, so that each GPU can efficiently process multiple graph pairs in parallel and achieve higher utilization.
Are there any efficient ways to train multiple pairs of graphs in parallel on each GPU? Alternatively, are there other ways to improve GPU utilization in this scenario?
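For what it's worth, one pattern that sometimes helps in this situation (a sketch under the assumption that each dataset item is a `(graph_a, graph_b, label)` tuple; all names here are hypothetical) is a custom `collate_fn` that returns the pairs as a list, so a single process per GPU can push several pairs through the model per step:
```python
import torch
from torch.utils.data import DataLoader, Dataset

class GraphPairDataset(Dataset):
    """Hypothetical dataset where each item is a (graph_a, graph_b, label) tuple."""
    def __init__(self, pairs):
        self.pairs = pairs

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        return self.pairs[idx]

def pair_collate(batch):
    # Keep variable-size graphs as a plain list instead of stacking tensors;
    # the training loop then iterates (or micro-batches) over the pairs.
    return list(batch)

# loader = DataLoader(GraphPairDataset(pairs), batch_size=8,
#                     collate_fn=pair_collate, num_workers=4)
```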
| true
|
2,969,335,276
|
conv2d fp8 support
|
sipie800
|
open
|
[
"module: convolution",
"triaged",
"enhancement",
"module: float8"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
It seems that torch supports fp8 nn.linear now. Any plans to support fp8 nn.Conv2d?
### Alternatives
_No response_
### Additional context
_No response_
cc @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
2,969,293,223
|
Torch compile for `torch.searchsorted` failed when capturing scalar outputs, with scalar `values` taken from `sorted_sequence`
|
HollowMan6
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: decompositions"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This is related to the extraction of specialized integers from data-dependent expressions.
A minimal reproducer:
```python
import torch
torch._dynamo.config.capture_scalar_outputs = True
def test_demo(sorted_seq: torch.Tensor):
return torch.searchsorted(sorted_seq, sorted_seq[-1].item() // 3)
sorted_seq = torch.tensor([0, 1, 3, 7])
print(torch.compile(test_demo)(sorted_seq))
```
### Error logs
```logs
W0403 14:35:39.656000 159345 torch/fx/experimental/symbolic_shapes.py:6307] [0/0] failed during evaluate_expr((u0//3), hint=None, size_oblivious=False, forcing_spec=False
E0403 14:35:39.657000 159345 torch/fx/experimental/recording.py:299] [0/0] failed while running evaluate_expr(*((u0//3), None), **{'fx_node': False})
Traceback (most recent call last):
File "test-search.py", line 9, in <module>
print(torch.compile(test_demo)(sorted_seq))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 153, in aot_dispatch_base
fw_module, updated_flat_args, maybe_subclass_meta = aot_dispatch_base_graph( # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 153, in aot_dispatch_base_graph
fw_module = _create_graph(
^^^^^^^^^^^^^^
File "torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 55, in _create_graph
fx_g = make_fx(
^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 2196, in wrapped
return make_fx_tracer.trace(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 2134, in trace
return self._trace_inner(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 2105, in _trace_inner
t = dispatch_trace(
^^^^^^^^^^^^^^^
File "torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 1138, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "torch/fx/_symbolic_trace.py", line 843, in trace
(self.create_arg(fn(*args)),),
^^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 1193, in wrapped
out = f(*tensors) # type:ignore[call-arg]
^^^^^^^^^^^
File "<string>", line 1, in <lambda>
File "torch/_functorch/_aot_autograd/traced_function_transforms.py", line 693, in inner_fn
outs = fn(*args)
^^^^^^^^^
File "torch/_functorch/_aot_autograd/traced_function_transforms.py", line 413, in _functionalized_f_helper
f_outs = fn(*f_args)
^^^^^^^^^^^
File "torch/_functorch/_aot_autograd/traced_function_transforms.py", line 78, in inner_fn
outs = fn(*args)
^^^^^^^^^
File "torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
out = PropagateUnbackedSymInts(mod).run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/symbolic_shapes.py", line 6779, in run_node
result = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/interpreter.py", line 310, in call_function
return target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 1241, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "torch/_subclasses/functional_tensor.py", line 527, in __torch_dispatch__
outs_unwrapped = func._op_dk(
^^^^^^^^^^^^
File "torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 1343, in __torch_dispatch__
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 789, in proxy_call
r = maybe_handle_decomp(proxy_mode, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/proxy_tensor.py", line 2264, in maybe_handle_decomp
out = CURRENT_DECOMPOSITION_TABLE[op](*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/decomposition.py", line 1041, in searchsorted_scalar
torch.tensor([self], device=sorted_sequence.device),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/sym_node.py", line 492, in guard_int
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/symbolic_shapes.py", line 6303, in evaluate_expr
return self._evaluate_expr(
^^^^^^^^^^^^^^^^^^^^
File "torch/fx/experimental/symbolic_shapes.py", line 6493, in _evaluate_expr
raise self._make_data_dependent_error(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
GuardOnDataDependentSymNode: Could not extract specialized integer from data-dependent expression (u0//3) (unhinted: (u0//3)). (Size-like symbols: none)
Caused by: (_inductor/decomposition.py:1041 in searchsorted_scalar)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
While executing %searchsorted : [num_users=1] = call_function[target=torch.searchsorted](args = (%l_sorted_seq_, %floordiv), kwargs = {})
Original traceback:
File "test-search.py", line 6, in test_demo
return torch.searchsorted(sorted_seq, sorted_seq[-1].item() // 3)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
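A possible workaround on the caller side (a sketch I have not tested exhaustively; it sidesteps the `.item()` path that the `searchsorted_scalar` decomposition trips on rather than fixing it): keep the search value as a tensor instead of a Python scalar.
```python
import torch

torch._dynamo.config.capture_scalar_outputs = True

def test_demo(sorted_seq: torch.Tensor):
    value = (sorted_seq[-1] // 3).reshape(1)  # stay in tensor land, no .item()
    return torch.searchsorted(sorted_seq, value)

sorted_seq = torch.tensor([0, 1, 3, 7])
print(torch.compile(test_demo)(sorted_seq))
```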
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.5 (Plow) (x86_64)
GCC version: (conda-forge gcc 12.4.0-2) 12.4.0
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.34
Python version: 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-503.26.1.el9_5.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7413 24-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 3630.8101
CPU min MHz: 1500.0000
BogoMIPS: 5289.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca debug_swap
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-5
NUMA node1 CPU(s): 6-11
NUMA node2 CPU(s): 12-17
NUMA node3 CPU(s): 18-23
NUMA node4 CPU(s): 24-29
NUMA node5 CPU(s): 30-35
NUMA node6 CPU(s): 36-41
NUMA node7 CPU(s): 42-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchmetrics==1.7.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] cuda-cudart 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart-static 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-static_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cupti 12.4.127 he02047a_2 conda-forge
[conda] cuda-cupti-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-libraries 12.4.1 ha770c72_1 conda-forge
[conda] cuda-libraries-dev 12.4.1 ha770c72_1 conda-forge
[conda] cuda-nvrtc 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvrtc-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx-dev 12.4.127 ha770c72_2 conda-forge
[conda] cuda-opencl 12.4.127 he02047a_1 conda-forge
[conda] cuda-opencl-dev 12.4.127 he02047a_1 conda-forge
[conda] cuda-runtime 12.4.1 ha804496_0 conda-forge
[conda] cudnn 9.8.0.87 h81d5506_1 conda-forge
[conda] cusparselt 0.7.1.0 hcd2ec93_1 conda-forge
[conda] libcublas 12.4.5.8 he02047a_2 conda-forge
[conda] libcublas-dev 12.4.5.8 he02047a_2 conda-forge
[conda] libcufft 11.2.1.3 he02047a_2 conda-forge
[conda] libcufft-dev 11.2.1.3 he02047a_2 conda-forge
[conda] libcurand 10.3.5.147 he02047a_2 conda-forge
[conda] libcurand-dev 10.3.5.147 he02047a_2 conda-forge
[conda] libcusolver 11.6.1.9 he02047a_2 conda-forge
[conda] libcusolver-dev 11.6.1.9 he02047a_2 conda-forge
[conda] libcusparse 12.3.1.170 he02047a_2 conda-forge
[conda] libcusparse-dev 12.3.1.170 he02047a_2 conda-forge
[conda] libnvjitlink 12.4.127 he02047a_2 conda-forge
[conda] libnvjitlink-dev 12.4.127 he02047a_2 conda-forge
[conda] nccl 2.26.2.1 ha44e49d_0 conda-forge
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] nvtx 0.2.11 py312h66e93f0_0 conda-forge
[conda] optree 0.14.1 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchmetrics 1.7.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @ezyang @bobrenjc93 @SherlockNoMad
| true
|
2,969,204,886
|
RAM leak during data loading with multiprocessing and Conv3d on CPU in Dataset __getitem__
|
ilyas-sirazitdinov-snkeos
|
open
|
[
"module: dataloader",
"module: cpu",
"module: memory usage",
"triaged",
"module: mkldnn",
"module: data"
] | 7
|
NONE
|
### 🐛 Describe the bug
I have the following use case:
* My custom PyTorch Dataset receives a 3D tensor (these tensors have various shapes).
* It applies Gaussian blur preprocessing on the CPU.
* It returns the processed tensor.
* To speed up processing, I want to use multiprocessing.
The Python snippet below simulates this by generating random tensors and using Conv3d instead of Gaussian blur.
```python3
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
torch.manual_seed(0)
class MyDatasets(Dataset):
def __init__(self, device:str="cpu", n_samples:int=1000, min_shape: int = 150, max_shape: int = 200):
super().__init__()
self.augm = nn.Conv3d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1, padding_mode='zeros', device=device)
self.n_samples = n_samples
self.min_shape = min_shape
self.max_shape = max_shape
self.device = device
def __len__(self):
return self.n_samples
@torch.no_grad()
def __getitem__(self, index):
rand_shape = torch.randint(self.min_shape, self.max_shape, (3,))
rand_tensor = torch.randn((1, *rand_shape), device=self.device)
rand_tensor = self.augm(rand_tensor)
return rand_tensor
def main():
epochs = 50
device = "cpu"
n_samples = 500
min_shape = 150
max_shape = 300
num_workers = 4
dataset = MyDatasets(device=device, n_samples=n_samples, min_shape=min_shape, max_shape=max_shape)
dloader = DataLoader(dataset, batch_size=1, num_workers=num_workers, pin_memory=False, persistent_workers=True, multiprocessing_context="spawn")
for ep in range(epochs):
print(f"Epoch {ep}")
for idx, sample in enumerate(dloader):
if idx % 100 == 0:
print(f"Sample {idx}")
pass
if __name__ == "__main__":
main()
```
After some time of execution my process got killed due to RAM OOM:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1251, in _try_get_data
data = self._data_queue.get(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/queues.py", line 113, in get
if not self._poll(timeout):
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/connection.py", line 440, in _poll
r = wait([self], timeout)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/multiprocessing/connection.py", line 1136, in wait
ready = selector.select(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch/utils/data/_utils/signal_handling.py", line 73, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 1192) is killed by signal: Killed.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/snke/workspace/mem_leak.py", line 91, in <module>
main()
File "/home/snke/workspace/mem_leak.py", line 85, in main
for idx, sample in enumerate(dloader):
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 708, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1458, in _next_data
idx, data = self._get_data()
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1420, in _get_data
success, data = self._try_get_data()
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1264, in _try_get_data
raise RuntimeError(
RuntimeError: DataLoader worker (pid(s) 1192) exited unexpectedly
```
I used [memprof](https://github.com/jmdana/memprof) to check the RAM, and it confirmed the RAM leak:
```bash
mprof run --multiprocess main.py
```

Surprisingly, there is no RAM problem when I use tensors of fixed shapes, namely, instead of:
```python3
rand_shape = torch.randint(self.min_shape, self.max_shape, (3,))
```
I use:
```python3
rand_shape = (200, 200, 200)
```

I also found that the RAM usage depends on the range of tensor shapes; e.g. with the settings below, the RAM reached a saturation point:
```python
min_shape = 100
max_shape = 200
```

### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.9 (main, Mar 13 2025, 16:44:52) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 5000
Nvidia driver version: 572.83
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2288G CPU @ 3.70GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 13
BogoMIPS: 7392.02
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Unknown: Dependent on hypervisor status
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] mypy==1.6.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @andrewkho @divyanshk @SsnL @VitalyFedyunin @dzhulgakov @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @ezyang @gchanan @zou3519 @kadeng @msaroufim
| true
|
2,968,886,844
|
Fix codegen, change str comparison operator to == for proper equality …
|
jgrzybek-habana
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 7
|
CONTRIBUTOR
|
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,968,853,387
|
`weight_decay` etc. works contradictory to `params` without grad
|
ILCSFNO
|
closed
|
[
"module: autograd",
"module: optimizer",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The docs of [torch.optim.Adam()](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam), [torch.optim.AdamW()](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW) and [torch.optim.RAdam()](https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html#torch.optim.RAdam) show their shared description as below:
https://github.com/pytorch/pytorch/blob/9e106019f64d668f17f0b50dc46192cff7a37dce/torch/optim/adam.py#L41
and `weight_decay` is described as:
https://github.com/pytorch/pytorch/blob/9e106019f64d668f17f0b50dc46192cff7a37dce/torch/optim/adam.py#L322
See the repro below:
### Repro
```python
import torch
import numpy as np
input_data = np.random.rand(10, 10)
input_tensor = torch.from_numpy(input_data)
input_tensor = torch.autograd.Variable(input_tensor, requires_grad=False)
optimizer = torch.optim.Adam([{'params': input_tensor, 'weight_decay': 0.01}])
# optimizer = torch.optim.AdamW([{'params': input_tensor, 'weight_decay': 0.01}])
# optimizer = torch.optim.RAdam([{'params': input_tensor, 'weight_decay': 0.01}])
optimizer.step()
```
It runs without any error.
But when we focus on the two parameters:
* `requires_grad` in `torch.autograd.Variable`
* `weight_decay` in `torch.optim.Adam`, `torch.optim.AdamW` and `torch.optim.RAdam`
We notice that `params` with `requires_grad=False` are accepted together with a non-zero `weight_decay`. However, they shouldn't be!
As we know, `weight_decay` only makes sense when gradients are computed. Here the optimizer applies `weight_decay` to `input_tensor`, which has `requires_grad=False`; modifying a non-trainable parameter during the optimization step should arguably raise an error.
### Suggestions for the discussion above (seemed painful)
* Set the initial value of `weight_decay` to None instead of 0
* Raise a ValueError or UserWarning when `weight_decay` is set and `params` do not require grad
e.g.
```txt
`params` should require grad when `weight_decay` is set
```
### Further
While the focus here is on `weight_decay`, several other arguments have the same problem; fixing them all individually would be painful.
Looking back at `Optimizer`: it is a crucial component of deep learning training, responsible for updating model parameters based on the gradients of the loss function, thereby continuously improving model performance.
So another useful path is to reject `params` that do not require grad, i.e. the suggestions below:
### Suggestions Modified (not painful)
* Check the `requires_grad` attribute of each entry in `params`: accept it if True, otherwise reject it and raise a ValueError (a rough sketch follows the code block below).
* Add a warning to the description of `Optimizer`, changing it from:
https://github.com/pytorch/pytorch/blob/9e106019f64d668f17f0b50dc46192cff7a37dce/torch/optim/optimizer.py#L335-L348
to:
```python
class Optimizer:
r"""Base class for all optimizers.
.. warning::
Parameters need to be specified as collections that have a deterministic
ordering that is consistent between runs. Examples of objects that don't
satisfy those properties are sets and iterators over values of dictionaries.
Args:
params (iterable): an iterable of :class:`torch.Tensor` s or
:class:`dict` s. Specifies what Tensors should be optimized,
which should have grad available.
defaults: (dict): a dict containing default values of optimization
options (used when a parameter group doesn't specify them).
"""
```
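A rough sketch of the first suggestion (illustrative only, not the actual `Optimizer` implementation; the helper name is made up):
```python
import torch

def _reject_params_without_grad(params):
    # Hypothetical validation helper for the check proposed above.
    for p in params:
        if isinstance(p, torch.Tensor) and not p.requires_grad:
            raise ValueError(
                "optimizer received a parameter with requires_grad=False; "
                "options such as weight_decay would have no effect on it"
            )
```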
Thanks for noting.
### Versions
Nightly
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @vincentqb @jbschlosser @janeyx99 @crcrpar
| true
|
2,968,636,428
|
[Easy] Add `output_size` in forward method of ConvTranspose2d
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 10
|
CONTRIBUTOR
|
Fixes #74593
Add description for `forward` in [ConvTranspose2d](https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html) doc
## Test Result

| true
|
2,968,599,277
|
Refactoring: fix the python constant check
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150608
As the title stated.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,968,571,714
|
`torch.jit.script` does not respect `torch.set_default_dtype`
|
defaultd661
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
# Bug 1: `torch.jit.script`
### 🐛 Describe the bug
When scripting a function that returns an empty tensor, the scripted function does not respect the default dtype set by `torch.set_default_dtype`. Instead, it returns a tensor with `torch.float32`, even when the expected dtype is `torch.float64`.
To reproduce:
```
import torch
def test_bug():
def foo():
# Return an empty tensor
return torch.empty((2, 3))
scripted_foo = torch.jit.script(foo)
# Test for different default dtypes
for default_dtype in [torch.float, torch.double, torch.half]:
print("\ndefault_dtype:", default_dtype)
# Set the current default dtype
torch.set_default_dtype(default_dtype)
# Compute expected outputs using eager execution
eager_out = foo()
# Compute outputs using scripted function
scripted_out = scripted_foo()
# Check if the tensor has the expected dtype
if scripted_out.dtype != eager_out.dtype:
print("scripted_out.dtype:", scripted_out.dtype)
print("eager_out.dtype:", eager_out.dtype)
print("BUG")
return
if __name__ == "__main__":
test_bug()
```
Output:
```
default_dtype: torch.float32
default_dtype: torch.float64
scripted_out.dtype: torch.float32
eager_out.dtype: torch.float64
BUG
```
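A possible workaround sketch for Bug 1 (it sidesteps the issue rather than fixing it, and I have only checked this pattern loosely, so treat it as an assumption): pass the dtype explicitly so the scripted graph does not bake in `torch.float32`.
```python
import torch

def foo(dtype: torch.dtype):
    return torch.empty((2, 3), dtype=dtype)

scripted_foo = torch.jit.script(foo)

torch.set_default_dtype(torch.double)
print(scripted_foo(torch.get_default_dtype()).dtype)  # torch.float64
```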
# Bug 2: `torch.utils.data.default_collate`
### To reproduce
```
import torch
from torch.utils.data import default_collate
def test_bug():
try:
for default_dtype in [torch.float, torch.double, torch.half]:
torch.set_default_dtype(default_dtype)
batch = [1.0, 2.0, 3.0]
collated_output = default_collate(batch)
expected_dtype = torch.tensor(1.0).to(default_dtype).dtype
if collated_output.dtype != expected_dtype:
print(f'Expected dtype: {expected_dtype}, but got: {collated_output.dtype}')
return
except Exception as e:
print('e =', e)
print(f'Other error: {str(e)}')
if __name__ == '__main__':
test_bug()
```
### Output
```
Expected dtype: torch.float32, but got: torch.float64
```
### Versions
torch 2.6.0
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,968,542,345
|
Make `nn.MultiLabelMarginLoss` error message user friendly
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: nn"
] | 1
|
CONTRIBUTOR
|
Fixes #106011
## Test Result
```python
import torch
import torch.nn as nn
input_tensor = torch.rand([20, 9, 20])
target_tensor = torch.rand([2, 2, 2, 2, 2, 2, 2])
loss_function = nn.MultiLabelMarginLoss()
loss = loss_function(input_tensor, target_tensor)
RuntimeError: Expected input tensor to have 0, 1 or 2 dimension, but got: 3, with shape: [20, 9, 20]
input_tensor = torch.rand([1, 3])
target_tensor = torch.rand([2, 2])
loss_function = nn.MultiLabelMarginLoss()
loss = loss_function(input_tensor, target_tensor)
RuntimeError: Expected target tensor to have 2 dimension and shape [1, 3], but got: dim = 2, shape = [2, 2]
input_tensor = torch.rand(3)
target_tensor = torch.rand([2, 2])
loss_function = nn.MultiLabelMarginLoss()
loss = loss_function(input_tensor, target_tensor)
RuntimeError: Expected target tensor to have 0 or 1 dimension, and 3 elements, but got: dim = 2, numel = 4
```
| true
|
2,968,535,490
|
[INDUCTOR] Explanation: Backend compiler `inductor` failed with aten._local_scalar_dense.default
|
jiqing-feng
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 5
|
NONE
|
### 🐛 Describe the bug
Backend compiler `inductor` failed with aten._local_scalar_dense.default
To reproduce
```python
import torch
from optimum.quanto import ActivationQBytesTensor, absmax_scale, qint8, quantize_activation
device = torch.device("cpu")
input_shape = (10, 32, 32)
a = torch.randn(input_shape).to(device)
def f(x, dtype):
return x.to(dtype)
scale = absmax_scale(a)
qa = quantize_activation(a, qtype=qint8, scale=scale)
compile_f = torch.compile(f)
cqa = compile_f(qa, torch.float16)
assert isinstance(cqa, ActivationQBytesTensor)
assert cqa.qtype == qint8
assert cqa._scale.dtype == torch.float16
```
NOTE: The same code passes on CUDA; just change the device to torch.device("cuda")
### Error logs
```
[W403 06:54:01.354259545 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
Overriding a previously registered kernel for the same operator and the same dispatch key
operator: aten::_addmm_activation(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, bool use_gelu=False) -> Tensor registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
dispatch key: AutocastCPU
previous kernel: registered at /pytorch/aten/src/ATen/autocast_mode.cpp:327
new kernel: registered at /opt/workspace/ipex-cpu-dev/csrc/cpu/autocast/autocast_mode.cpp:112 (function operator())
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0] Backend compiler exception
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0] Explanation: Backend compiler `inductor` failed with aten._local_scalar_dense.default. Adding a graph break.
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0] Hint: Report an issue to the backend compiler repo.
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0]
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0] Developer debug context: Backend: inductor
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0] Exception:aten._local_scalar_dense.default
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0] Traceback:
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0] File "/home/jiqing/test_compile_quanto.py", line 10, in f
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0] return x.to(dtype)
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0]
W0403 06:54:02.808000 364654 torch/_dynamo/exc.py:514] [0/0]
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:1291: UserWarning: Dynamo does not know how to trace the builtin `None.TensorBase._make_wrapper_subclass.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).
If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.
If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.
torch._dynamo.utils.warn_once(explanation + "\n" + "\n".join(hints))
Traceback (most recent call last):
File "/home/jiqing/test_compile_quanto.py", line 15, in <module>
cqa = compile_f(qa, torch.float16)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
File "/home/jiqing/test_compile_quanto.py", line 10, in f
return x.to(dtype)
File "/usr/local/lib/python3.10/dist-packages/optimum/quanto/tensor/activations/qbytes.py", line 90, in __torch_dispatch__
return qdispatch(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/optimum/quanto/tensor/activations/qbytes_ops.py", line 66, in _to_copy
return ActivationQBytesTensor(t.qtype, t.size(), t.stride(), out_data, out_scale)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1234, in __call__
result = self._inner_convert(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1131, in _compile
raise InternalTorchDynamoError(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2166, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py", line 1112, in call_function
return handler(tx, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py", line 791, in <lambda>
tx, [v.realize() for v in args], kwargs
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py", line 791, in <listcomp>
tx, [v.realize() for v in args], kwargs
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/base.py", line 577, in build
return builder.VariableBuilder(tx, source)(value)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 423, in __call__
vt = self._wrap(value)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 617, in _wrap
return self.wrap_tensor(value)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 1766, in wrap_tensor
self.assert_not_wrapped_by_this_graph(value)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 1659, in assert_not_wrapped_by_this_graph
if is_fake(value) and maybe_get_fake_mode(value) is self.tx.fake_mode:
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 192, in is_fake
attrs, _ = type(x).__tensor_flatten__(x)
File "/usr/local/lib/python3.10/dist-packages/optimum/quanto/tensor/activations/qbytes.py", line 64, in __tensor_flatten__
"qtype": self._qtype.name,
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'ActivationQBytesTensor' object has no attribute '_qtype'
from user code:
File "/usr/local/lib/python3.10/dist-packages/optimum/quanto/tensor/activations/qbytes.py", line 55, in __init__
super().__init__(qtype, None, size, stride, data, scale, requires_grad)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250401+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) 6972P
BIOS Model name: Intel(R) Xeon(R) 6972P
CPU family: 6
Model: 173
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 9 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 384 MiB (192 instances)
L3 cache: 960 MiB (2 instances)
NUMA node(s): 6
NUMA node0 CPU(s): 0-31,192-223
NUMA node1 CPU(s): 32-63,224-255
NUMA node2 CPU(s): 64-95,256-287
NUMA node3 CPU(s): 96-127,288-319
NUMA node4 CPU(s): 128-159,320-351
NUMA node5 CPU(s): 160-191,352-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.8.0+git6daf1d8
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-metric-learning==2.8.1
[pip3] pytorch-msssim==1.0.0
[pip3] pytorchvideo==0.1.5
[pip3] torch==2.8.0.dev20250401+cpu
[pip3] torch-audiomentations==0.11.1
[pip3] torch_pitch_shift==1.2.5
[pip3] torchaudio==2.6.0.dev20250401+cpu
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.22.0.dev20250401+cpu
[conda] Could not collect
```
cc @chauhang @penguinwu
| true
|
2,968,517,052
|
Development docker image contains extra conda PyTorch installation
|
stevenlele
|
closed
|
[] | 2
|
NONE
|
It should contain only the built-from-source version (in `/opt/conda/lib/python3.*/site-packages/`), but the conda installation (`/opt/conda/pkgs/pytorch*`) is still there. This is because the `COPY --from` directive does not actually replace the target folder - it merges into it.
https://github.com/pytorch/pytorch/blob/fc674b45d4d8edfd4c630d89f71ea9f85a2f61f2/Dockerfile#L110-L112
```console
$ docker run -it --rm pytorch/pytorch:2.4.0-cuda12.1-cudnn9-devel
# find / -name libtorch*.so
/opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda_linalg.so
/opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_global_deps.so
/opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so
/opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so
/opt/conda/lib/python3.11/site-packages/torch/lib/libtorch.so
/opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_python.so
/opt/conda/lib/python3.11/site-packages/torchaudio/lib/libtorchaudio.so
/opt/conda/lib/python3.11/site-packages/torchaudio/lib/libtorchaudio_sox.so
/opt/conda/pkgs/pytorch-2.4.0-py3.11_cuda12.1_cudnn9.1.0_0/lib/python3.11/site-packages/torch/lib/libtorch_cuda_linalg.so
/opt/conda/pkgs/pytorch-2.4.0-py3.11_cuda12.1_cudnn9.1.0_0/lib/python3.11/site-packages/torch/lib/libtorch_global_deps.so
/opt/conda/pkgs/pytorch-2.4.0-py3.11_cuda12.1_cudnn9.1.0_0/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so
/opt/conda/pkgs/pytorch-2.4.0-py3.11_cuda12.1_cudnn9.1.0_0/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so
/opt/conda/pkgs/pytorch-2.4.0-py3.11_cuda12.1_cudnn9.1.0_0/lib/python3.11/site-packages/torch/lib/libtorch.so
/opt/conda/pkgs/pytorch-2.4.0-py3.11_cuda12.1_cudnn9.1.0_0/lib/python3.11/site-packages/torch/lib/libtorch_python.so
/opt/conda/pkgs/torchaudio-2.4.0-py311_cu121/lib/python3.11/site-packages/torchaudio/lib/libtorchaudio.so
/opt/conda/pkgs/torchaudio-2.4.0-py311_cu121/lib/python3.11/site-packages/torchaudio/lib/libtorchaudio_sox.so
```
| true
|
2,968,508,956
|
[Inductor][CPU] Add GEMM templates for _weight_int4pack_mm_for_cpu with AMX
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150603
**Summary**
It's part of the task to enable max-autotune with GEMM template for WoQ INT4 GEMM on CPU.
This PR adds AMX-based GEMM templates for `torch.ops.aten_weight_int4pack_mm_for_cpu`. It brings performance benefits on platforms where AMX is available.
**Validation results**
We have run GPT-J-6B and Llama-3-8B-Instruct on a 6th gen Xeon with 96 cores. Results show that the AMX-based microkernel outperforms the AVX512-based one by >5x for the prefill stage with 1024 input length.
**Test plan**
```
python test/inductor/test_cpu_select_algorithm.py -k test_int4_woq_mm_amx
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,968,508,046
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_int32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_int32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39889618952).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_int32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs_', keys=('aten::_foreach_abs_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int32]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_inplace_cuda_int32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,968,442,338
|
Profiler with record_shapes=True and deterministic algorithms enabled causes crash with FlashAttention
|
JungHoyoun
|
open
|
[
"high priority",
"triage review",
"module: crash",
"module: determinism",
"oncall: profiler",
"module: sdpa"
] | 1
|
NONE
|
### 🐛 Describe the bug
When using `torch.profiler.profile(record_shapes=True)` with `torch.use_deterministic_algorithms(True)`, calling `scaled_dot_product_attention` with `SDPBackend::flash_attention` crashes.
This seems to happen only when both profiler shape recording and deterministic mode are on.
---
### 🔁 **Repro**
```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend
import os
torch.use_deterministic_algorithms(True)
query = torch.rand(32, 8, 128, 64, dtype=torch.bfloat16, device="cuda")
key = torch.rand(32, 8, 128, 64, dtype=torch.bfloat16, device="cuda")
value = torch.rand(32, 8, 128, 64, dtype=torch.bfloat16, device="cuda")
with torch.profiler.profile(
activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA],
record_shapes=True,
) as prof:
with sdpa_kernel(backends=[SDPBackend.FLASH_ATTENTION]):
F.scaled_dot_product_attention(query, key, value)
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```
### 💥 **Observed behavior**
```
Traceback (most recent call last):
File "/user/code/minimal_snippet.py", line 25, in <module>
F.scaled_dot_product_attention(query, key, value)
RuntimeError: value cannot be converted to type int64_t without overflow
```
---
### 🔍 **Notes**
- `CUDNN_ATTENTION` - Raise Error "cuDNN SDPA is not deterministic."
- `EFFICIENT_ATTENTION` - Works fine (see the sketch below)
- `MATH` + `os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"` - Works fine
- `FLASH_ATTENTION` : RuntimeError: value cannot be converted to type int64_t without overflow
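A sketch of the working combination from the notes above (same setup as the repro, only the backend changed; this is a workaround illustration, not a fix):
```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

torch.use_deterministic_algorithms(True)
q = k = v = torch.rand(32, 8, 128, 64, dtype=torch.bfloat16, device="cuda")

with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA],
    record_shapes=True,
) as prof:
    # The efficient-attention backend does not hit the int64_t overflow above.
    with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION]):
        F.scaled_dot_product_attention(q, k, v)
```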
---
### 🧪 Environment
- **PyTorch version:** `2.7.0.dev20250310+cu124`
- **Works fine in:** `2.4.0` (no crash with same code)
- **OS:** Linux (`Ubuntu 22.04.4 LTS`)
---
If possible, I would like to contribute a PR to help address this.
From what I can tell, Context::deterministicAlgorithms() seems to affect only a limited number of native operations directly.
However, I’m not sure if it is expected to influence components like FlashAttention or the profiler, especially when record_shapes=True.
Would appreciate any guidance or confirmation before proceeding with a patch.
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250310+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1074-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480C
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
BogoMIPS: 3999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 avx512vbmi umip waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.5.1
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.14.1
[pip3] pynvjitlink==0.2.3
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250310+cu124
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchdata==0.11.0
[pip3] torchvision==0.22.0.dev20250226+cu124
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @mruberry @kurtamohler @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,968,399,098
|
[CI][docker] Use install_cusparselt when possible in docker image
|
clee2000
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Spot-checked builds for a line like `Found CUSPARSELT: /usr/local/cuda/lib64/libcusparseLt.so`. I don't know if there's another way to verify it
I am slowly trying to reduce the duplicated code in docker image installs
Pros:
* less dup code
Cons:
* more docker copies
| true
|
2,968,286,459
|
test 2
|
laithsakka
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150599
| true
|
2,968,266,535
|
Added A Error Handling Block Around Recovering DynamicLayerStack
|
zero000064
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
Fixes #149801
Added error handling around recovering the DynamicLayerStack to check that it is restored as expected and, if not, to raise that exception.
In the finally part, https://github.com/pytorch/pytorch/blob/6470b373c16017f5cb8f1aa4060bb60632b18160/torch/_dynamo/eval_frame.py#L675, the pop method is called to recover the DynamicLayerStack, but if an error is thrown in the middle of the DynamicLayerStack's push/pop pair, that recovery logic will raise another exception.
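A minimal illustration of the pattern being guarded (hypothetical names, not the actual `eval_frame.py` code): if the cleanup in `finally` assumes the push succeeded, an exception raised before or during the push turns into a second, confusing exception during recovery.
```python
# Hypothetical sketch of guarding the push/pop recovery so the original
# exception is not masked by a secondary failure in the cleanup path.
class Stack:
    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)

    def pop(self):
        return self.items.pop()  # raises IndexError if nothing was pushed

stack = Stack()

def traced_region(fail_before_push: bool):
    pushed = False
    try:
        if fail_before_push:
            raise RuntimeError("error raised before the stack was pushed")
        stack.push("layer")
        pushed = True
        # ... tracing work ...
    finally:
        # Guarded recovery: only pop what was actually pushed.
        if pushed:
            stack.pop()
```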
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,968,265,301
|
[training] Adding NUMA support for pytorch
|
efiks
|
open
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (c10d)"
] | 12
|
CONTRIBUTOR
|
Test Plan:
Build and run tests for the modified libraries locally:
```
buck2 build arvr/mode/platform010/opt //xplat/caffe2:pytorch_ovrsource
buck run arvr/mode/win/debug-md -c python.package_style=inplace //xplat/caffe2:pytorch_test_ovrsource
buck test arvr/mode/linux/opt -c python.package_style=inplace //xplat/caffe2:pytorch_test_ovrsource
buck test mode/opt //caffe2/fb/test:_utils_internal_test
```
Differential Revision: D72321369
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,968,178,341
|
Fix nn.LazyModuleMixin examples
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 7
|
CONTRIBUTOR
|
Fixes #150404
## Test Result


| true
|
2,968,112,619
|
suppress neon missing message on armv8 build
|
nihui
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
| null | true
|
2,968,102,254
|
Add debug_lines of FXGraphCacheKey to AOTAutogradCacheEntry
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150594
Previously we didn't save debug_lines because it's pretty large, but compared to the size of FXGraphCache entries it's still pretty small. So let's add it to AOTAutogradCache for easier debuggability.
Differential Revision: [D72361611](https://our.internmc.facebook.com/intern/diff/D72361611/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,968,068,167
|
Make LazyModuleMixin materialize after load_state_dict
|
zeshengzong
|
open
|
[
"triaged",
"open source"
] | 4
|
CONTRIBUTOR
|
Fixes #73009
## Test Result
```bash
pytest -s test/nn/test_lazy_modules.py
```

| true
|
2,967,990,991
|
torch.onnx.export result in opset=1
|
ducknificient
|
closed
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
Why is the opset version ignored after exporting from PyTorch?
```py
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained(
pretrained_model_name_or_path="distilbert/distilbert-base-uncased",
)
model = DistilBertModel.from_pretrained(
pretrained_model_name_or_path="distilbert/distilbert-base-uncased",
device_map="cuda"
)
model.eval()
from torch.export import Dim
vocab_size = tokenizer.vocab_size # Get the actual vocab size
# Step 1: Define Input
batch_size = 4
dummy_input_ids = torch.randint(0, vocab_size, (batch_size, 128)) # Batch size 4, sequence length 128
dummy_attention_mask = torch.ones((batch_size, 128), dtype=torch.int64)
# Step 2: Define Dynamic shapes
dynamic_shapes = {
"input_ids": (Dim.DYNAMIC, Dim.DYNAMIC),
"attention_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
}
# Step 3: Define outputh path
output_path = "distilbert-onnx/model-onnx.onnx"
# Step 4: Export to ONNX
torch.onnx.export(
model, # PyTorch model
(dummy_input_ids, dummy_attention_mask),
output_path, # Output file
export_params=True, # Store the trained weights
opset_version=17, # ONNX opset version
do_constant_folding=True,
input_names=['input_ids', 'attention_mask'], # Input names
output_names=['last_hidden_state'], # Output names
dynamic_shapes=dynamic_shapes,
dynamo=True,
verbose=True # Detailed output
)
print(f"Model exported to {output_path}")
```
When checking the opset version:
```
import onnx
onnx_model = onnx.load(output_path)
print("ONNX Opset Version:", onnx_model.opset_import[0].version)
```
```
ONNX Opset Version: 1
```
Additional info:
Torch version: 2.6.0+cu124
ONNX version: 1.17.0
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Pop!_OS 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.9.3-76060903-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 570.133.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 4700.0000
CPU min MHz: 400.0000
BogoMIPS: 5376.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Vulnerable: No microcode
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-genai==0.6.0
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxscript==0.2.2
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] cudnn 8.9.2.26 cuda11_0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
| true
|
2,967,975,324
|
[CUDA] include nvtx3 header in wheel so downstream torch extension can find it
|
ppham-nv
|
open
|
[
"triaged",
"open source",
"release notes: build",
"topic: build"
] | 3
|
NONE
|
When building PyTorch with USE_SYSTEM_NVTX=0 or unset, there's no information for a downstream torch extension to figure out which nvtx3 headers were used with PyTorch. This PR packages the nvtx3 headers (340 KB) into the torch wheel install so torch extensions can reference them. This will help keep the nvtx3 version aligned between PyTorch and its extensions.
Fixes: https://github.com/pytorch/pytorch/issues/147220
cc @malfet @atalman @ptrblck @eqy @nWEIdia @tinglvv
| true
|
2,967,889,365
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_int16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_int16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39875114980).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_int16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,967,852,288
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
2,967,846,361
|
Make sure torch.compiler._is_compiling_flag=True in aoti
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: See internal Diff summary
Differential Revision: D72355449
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,967,844,138
|
[Inductor] Add Additional Configs for persistent+TMA version of Triton mm and addmm
|
NikhilAPatel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 41
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150587
Summary:
This PR introduces additional autotuning configurations for the persistent+TMA version of Triton `mm` and `addmm` operations. The new configurations are as follows:
* `(128, 128, 64, 5, 8)`
* `(256, 128, 64, 4, 8)`
* `(128, 128, 64, 5, 4)`
These configurations were selected based on exhaustive autotuning performed on commonly used shapes from an internal foundational model.
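For readers less familiar with the tuple notation, a sketch of what these entries would look like as Triton autotune configs, assuming the conventional `(BLOCK_M, BLOCK_N, BLOCK_K, num_stages, num_warps)` ordering (an assumption here, not something stated above):
```python
import triton

# Assumed ordering: (BLOCK_M, BLOCK_N, BLOCK_K, num_stages, num_warps).
new_configs = [
    triton.Config({"BLOCK_M": 128, "BLOCK_N": 128, "BLOCK_K": 64}, num_stages=5, num_warps=8),
    triton.Config({"BLOCK_M": 256, "BLOCK_N": 128, "BLOCK_K": 64}, num_stages=4, num_warps=8),
    triton.Config({"BLOCK_M": 128, "BLOCK_N": 128, "BLOCK_K": 64}, num_stages=5, num_warps=4),
]
```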
While these new configs are generally more performant across the board, we see notable gains in a few specific cases:
* In scenarios where `n >> m, k`, the configurations `(128, 128, 64, 5, 8)` and `(256, 128, 64, 4, 8)` tend to produce an additional 5-10% speedup over the aten baseline compared to the original configurations.
* Similarly, the configuration `(128, 128, 64, 5, 4)` yields approximately an 8% improvement in scenarios where `k >> m, n`.
These enhancements are expected to provide performance benefits across diverse use cases, particularly when compared to the original set of configurations.
Test Plan:
contbuild & OSS CI
Reviewers: paulzhan
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,967,834,326
|
[dynamo] context manager/decorator for dynamo config patching during tracing
|
williamwen42
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going",
"ci-no-td"
] | 20
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150586
Implement traceable config patching for Dynamo: enables restricted patching of Dynamo config where user can use a context manager/decorator to change tracing behavior for parts of the code.
The new `dont_skip_tracing` decorator/context manager for ignoring most trace rules is easily implemented with this more generic traceable config patching feature.
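As a usage sketch (the exact set of configs that may be patched inside a compiled region is restricted by this PR and not enumerated here): `torch._dynamo.config.patch` already works as a decorator or context manager at the Python level, and this PR is about making a restricted form of that patching visible to the tracer itself.
```python
import torch._dynamo as dynamo

# Python-level usage that exists today: patch a Dynamo config value
# for the duration of a function or a block.
@dynamo.config.patch(verbose=True)
def noisy(fn, *args):
    return fn(*args)

with dynamo.config.patch(cache_size_limit=16):
    pass  # code here sees the patched value
```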
Implementation:
- Create a new specialized context manager class representing a wrapper around torch._dynamo.config.patch
- Dynamo doesn't trace into the context manager but updates config at compile time
- Correctness is based on our correctness for handling supported context managers
- Implementation is inspired by how `GradModeVariable` is implemented.
Previous attempts: https://github.com/pytorch/pytorch/pull/148736 (decorator-only global approach) and https://github.com/pytorch/pytorch/pull/149439 (decorator-only traceback approach)
See https://docs.google.com/document/d/1vWNwKL_jpg-PLopifcaSa338wks3GqSVF4GHRguybGg/edit?tab=t.0 for more details on implementation - including previous approaches.
NOTE: this PR fixes a bug where skipped code objects were not tracked by convert_frame.py, leading to cases where code objects would be automatically skipped even after `torch._dynamo.reset()`. This exposed some latent dynamo-wrapped test failures in CI that previously passed in CI but not locally.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,967,791,920
|
[distributed] Crash when trying to use default PG after creating new PG
|
xmfan
|
open
|
[
"oncall: distributed",
"triaged"
] | 6
|
MEMBER
|
### 🐛 Describe the bug
Not sure if I'm doing something dumb, but I couldn't find docs on it and even LLMs were puzzled:
Repro:
```python
# CRASH=1 torchrun --nproc_per_node=8 try_async_pg.py
import os
import torch
import torch.distributed as dist
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
device = torch.device("cuda", int(rank))
torch.cuda.set_device(device)
dist.init_process_group(backend="nccl", device_id=device)
pg2 = torch.distributed.new_group(backend="nccl", device_id=device)
crash = bool(int(os.environ["CRASH"]))
if crash:
dist.barrier()
else:
dist.barrier(group=pg2)
dist.barrier()
dist.destroy_process_group()
```
Error:
```
(/home/xmfan/core/a/pytorch-env) [16:41:24] ~/core/a/modded-nanogpt (ca) > CRASH=1 torchrun --nproc_per_node=8 try_async_pg.py
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
NCCL version 2.25.1+cuda12.4
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620074 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620075 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620076 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620077 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620078 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620079 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620081 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 6 (pid: 2620080) of binary: /home/xmfan/core/a/pytorch-env/bin/python
Traceback (most recent call last):
File "/home/xmfan/core/a/pytorch-env/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/distributed/run.py", line 892, in main
run(args)
File "/home/xmfan/core/a/pytorch/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/home/xmfan/core/a/pytorch/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=========================================================
try_async_pg.py FAILED
---------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
---------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-04-02_16:41:45
host : devvm062.dkl0.facebook.com
rank : 6 (local_rank: 6)
exitcode : -11 (pid: 2620080)
error_file: <N/A>
traceback : Signal 11 (SIGSEGV) received by PID 2620080
=========================================================
```
### Versions
Collecting environment information...
PyTorch version: 2.8.0a0+git78300c8
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: 19.1.7 (CentOS 19.1.7-1.el9)
CMake version: version 3.31.4
Libc version: glibc-2.34
Python version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:50:58) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 368
On-line CPU(s) list: 0-367
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 368
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 23 MiB (368 instances)
L1i cache: 23 MiB (368 instances)
L2 cache: 184 MiB (368 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-367
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.0
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.8.0a0+git78300c8
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.8.0
[pip3] torchaudio==2.6.0a0+c670ad8
[pip3] torchdata==0.12.0a0+d155220
[pip3] torchdata==0.12.0a0+d155220
[pip3] torchmetrics==1.0.3
[pip3] torchmultimodal==0.1.0b0
[pip3] torchpippy==0.2.0+1bcb2bf
[pip3] torchpippy==0.2.0+1bcb2bf
[pip3] torchrec==1.1.0
[pip3] torchtext==0.17.0a0+bde7ecd
[pip3] torchtitan==0.0.2
[pip3] torchvision==0.22.0a0+d462da2
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] blas 1.0 mkl
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py312h5eee18b_2
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.8.0a0+git78300c8 dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchao 0.8.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+c670ad8 dev_0 <develop>
[conda] torchdata 0.12.0a0+d155220 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchpippy 0.2.0+1bcb2bf pypi_0 pypi
[conda] torchrec 1.1.0 pypi_0 pypi
[conda] torchtext 0.17.0a0+bde7ecd dev_0 <develop>
[conda] torchtitan 0.0.2 pypi_0 pypi
[conda] torchvision 0.22.0a0+d462da2 dev_0 <develop>
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,967,779,113
|
[WIP] try always splitting in reshape view
|
pianpwk
|
open
|
[] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,967,769,406
|
fix dynamic shapes for kwargs
|
avikchaudhuri
|
open
|
[
"fb-exported",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 14
|
CONTRIBUTOR
|
Summary:
In this PR we change how `dynamic_shapes` map to the top-level structure of `args` and `kwargs`.
Previously, we would match `dynamic_shapes` to the input signature of a module, using `inspect.Signature.bind`; instead, now we match it with the structure of `args` and `kwargs`.
This has some desirable consequences:
1. Specifying dynamic shapes of variadic args and kwargs becomes less confusing. Previously we would have to have additional structure in `dynamic_shapes` to match the "tuple" / "dict" nature of `*args` / `**kwargs`, now we can directly map inputs.
2. We do not suffer from brittleness when modules are wrapped, typically with `*args, **kwargs`. If your inputs work, your dynamic shapes should also work.
In the new regime, you still have a choice on whether to specify `dynamic_shapes` as a tuple/list or a dict (a sketch of both follows the list below).
* As a tuple/list, you're basically associating with `*args, *kwargs.values()`
* As a dict, you (obviously) must use the same names as `kwargs` at least. Moreover, to avoid silly mistakes you're also expected to use names in the input signature for `args`, as long as it is not variadic—otherwise you can use whatever names you want. This last decision is a subjective choice, striking a balance between error reporting and flexibility.
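A sketch of the two options under the behavior described above (the module and shapes are placeholders; the precise acceptance rules are exactly what this PR changes, so treat this as illustrative):
```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x, *, mask):
        return x.sum() + mask.sum()

m = M()
args = (torch.randn(4, 8),)
kwargs = {"mask": torch.randn(5, 8)}

# Dict form: keys follow the kwargs names / input signature.
ep = export(m, args, kwargs, dynamic_shapes={
    "x": (Dim.DYNAMIC, None),
    "mask": (Dim.DYNAMIC, None),
})

# Tuple form: entries associate with (*args, *kwargs.values()).
ep = export(m, args, kwargs, dynamic_shapes=(
    (Dim.DYNAMIC, None),
    (Dim.DYNAMIC, None),
))
```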
Fixes #150022
Fixes #150371
Test Plan: added / fixed tests
Differential Revision: D72350333
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,967,754,639
|
[test] DTensor moe compile fixes for dynamic shapes
|
bdhirsh
|
open
|
[
"oncall: distributed",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
(not for landing)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150582
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,967,728,114
|
add unit test for preferred_blas_library settings
|
jeffdaily
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 12
|
COLLABORATOR
|
Follow up to #150212 that was committed without a unit test.
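For context, a minimal sketch of the setting presumably being tested (the actual test cases are in the diff, not shown here):
```python
import torch

# Query and switch the preferred BLAS backend used for CUDA/ROCm GEMMs.
current = torch.backends.cuda.preferred_blas_library()
torch.backends.cuda.preferred_blas_library("cublaslt")  # or "cublas"; "ck" on ROCm
torch.backends.cuda.preferred_blas_library(current)     # restore the previous setting
```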
| true
|
2,967,725,526
|
[ROCm] Add support for SymmetricMemory
|
pragupta
|
closed
|
[
"oncall: distributed",
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"rocm",
"keep-going",
"ciflow/rocm-mi300",
"ciflow/periodic-rocm-mi300"
] | 28
|
CONTRIBUTOR
|
This is an attempt to re-land the initial PR https://github.com/pytorch/pytorch/pull/134817 with recent design changes from upstream.
**NOTE:**
ROCm does NOT have multicast/multimem hardware support at the moment, so those features are disabled in symmetric memory for ROCm. This also means that we currently do not have a way of lowering add + all_reduce + wait_tensor into a one_shot_all_reduce op in Inductor, as it depends on multicast buffer support.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,967,707,707
|
Add Chillee as core reviewer
|
zou3519
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150579
| true
|
2,967,652,953
|
ROCm Sparsity through HipSparseLT
|
petrex
|
open
|
[
"module: rocm",
"triaged",
"open source",
"release notes: sparse"
] | 4
|
CONTRIBUTOR
|
TLDR:
- This pull request introduces support for hipSPARSELt in ROCm; the current usage is semi-structured sparsity (see the sketch below).
- Requires **ROCm 6.3** && **gfx942/gfx950**.
- The average performance uplift (compared to the dense operation) is ~20% in ROCm 6.4, with further performance lift expected along the way.
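As context for the semi-structured sparsity usage mentioned above, a minimal sketch of the existing PyTorch 2:4 semi-structured path that this backend plugs into (shapes and dtype are illustrative):
```python
import torch
from torch.sparse import to_sparse_semi_structured

# Build a weight with an exact 2:4 pattern (two zeros out of every four elements).
w = torch.tensor([0, 0, 1, 1], dtype=torch.float16, device="cuda").tile((3072, 192))
w_sparse = to_sparse_semi_structured(w)

x = torch.randn(768, 128, dtype=torch.float16, device="cuda")
y = torch.mm(w_sparse, x)  # dispatched to cuSPARSELt / hipSPARSELt when available
```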
### Dense vs. Sparse Performance Comparison
#### **NT (Row-major)**
**Average Uplift**: `1.20`
| M | N | K | hipsparselt-bench (us) | hipblaslt-bench get all (us) | Uplift |
|-------|--------|--------|-------------------------|-------------------------------|--------|
| 14336 | 8 | 4096 | 20.05 | 25.3 | 1.26 |
| 4096 | 8 | 14336 | 21.07 | 25.28 | 1.20 |
| 3072 | 3072 | 10240 | 299.05 | 351.82 | 1.18 |
| 3072 | 1536 | 768 | 18.56 | 20.05 | 1.08 |
| 3072 | 17664 | 768 | 163.13 | 173.91 | 1.07 |
| 3072 | 196608 | 768 | 1717.30 | 1949.63 | 1.14 |
| 3072 | 24576 | 768 | 206.84 | 242.98 | 1.17 |
| 3072 | 6144 | 768 | 53.90 | 56.88 | 1.06 |
| 3072 | 98304 | 768 | 833.77 | 962.28 | 1.15 |
| 768 | 1536 | 768 | 8.53 | 19.65 | 2.30 |
| 768 | 17664 | 768 | 46.02 | 46.84 | 1.02 |
| 768 | 196608 | 768 | 463.15 | 540.46 | 1.17 |
| 768 | 24576 | 768 | 54.32 | 59.55 | 1.10 |
| 768 | 6144 | 768 | 19.47 | 20.15 | 1.03 |
| 768 | 98304 | 768 | 231.88 | 258.73 | 1.12 |
---
#### **NN (Row-major)**
**Average Uplift**: `1.13`
| M | N | K | hipsparselt-bench (us) | hipblaslt-bench get all (us) | Uplift |
|-----|--------|-------|-------------------------|-------------------------------|--------|
| 768 | 1536 | 3072 | 27.50 | 28.78 | 1.05 |
| 768 | 17664 | 3072 | 125.06 | 158.94 | 1.27 |
| 768 | 196608 | 3072 | 1568.38 | 1767.12 | 1.13 |
| 768 | 24576 | 3072 | 171.05 | 203.49 | 1.19 |
| 768 | 6144 | 3072 | 58.72 | 60.39 | 1.03 |
| 768 | 98304 | 3072 | 787.15 | 887.60 | 1.13 |
-------------------------
This pull request introduces support for hipSPARSELt in ROCm, alongside various updates and improvements to the codebase and test suite. The changes primarily involve adding configuration flags, updating conditional checks, and ensuring compatibility with hipSPARSELt.
### ROCm and hipSPARSELt Support:
* [`BUILD.bazel`](diffhunk://#diff-7fc57714ef13c3325ce2a1130202edced92fcccc0c6db34a72f7b57f60d552a3R292): Added `@AT_HIPSPARSELT_ENABLED@` substitution to enable hipSPARSELt support.
* [`aten/CMakeLists.txt`](diffhunk://#diff-0604597797bb21d7c39150f9429d6b2ace10b79ab308514ad03f76153ae8249bR104-R110): Introduced a conditional flag to enable hipSPARSELt support based on ROCm version.
* [`aten/src/ATen/CMakeLists.txt`](diffhunk://#diff-ce80f3115ab2f6be5142f0678a1fc92c6b2d7727766ce44f48726c99e720f777R37): Added `AT_HIPSPARSELT_ENABLED` configuration.
* [`aten/src/ATen/cuda/CUDAConfig.h.in`](diffhunk://#diff-8bb82da825ca87c28233abacffa1b0566c73a54990b7a77f3f5108d3718fea15R11): Defined `AT_HIPSPARSELT_ENABLED` macro.
* `caffe2/CMakeLists.txt`, `cmake/Dependencies.cmake`, `cmake/public/LoadHIP.cmake`: Included hipSPARSELt in the ROCm dependencies. [[1]](diffhunk://#diff-c5ee05f1e918772792ff6f2a3f579fc2f182e57b1709fd786ef6dc711fd68b27R1380) [[2]](diffhunk://#diff-12e8125164bbfc7556b1781a8ed516e333cc0bf058acb7197f7415be44606c72L1084-R1084) [[3]](diffhunk://#diff-b98e27b9a5f196a6965a99ee5a7bb15b3fc633d6375b767635b1b04ccb2fd3d5R153)
### Codebase Updates:
* [`aten/src/ATen/native/sparse/cuda/cuSPARSELtOps.cpp`](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3R1-R6): Added hipSPARSELt support checks and initialization functions. Updated various methods to conditionally handle hipSPARSELt. [[1]](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3R1-R6) [[2]](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3R22-R67) [[3]](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3R78-R85) [[4]](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3R97-R109) [[5]](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3R183-R188) [[6]](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3L134-R200) [[7]](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3R213-R222) [[8]](diffhunk://#diff-ae921dd1584ab98fdd9c25a3521047795de702223f5b65fdaa45a5bd92b4d1f3L217-R285)
### Test Suite Updates:
* [`test/test_sparse_semi_structured.py`](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR50-R65): Added checks for hipSPARSELt availability and updated test conditions to skip tests not supported on ROCm. [[1]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR50-R65) [[2]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR228) [[3]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR239) [[4]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR250) [[5]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR579) [[6]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR624) [[7]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR661) [[8]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR695) [[9]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR730) [[10]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR755) [[11]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR771) [[12]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR809) [[13]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR844) [[14]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cL840-R854) [[15]](diffhunk://#diff-b7b57bc1e34145ef89c7929751d5d26aeecc8edfb37da9c60e9d3f0a1335133cR1005)
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,967,652,613
|
[TorchScript] Enum scripting failures in python 3.11+
|
davidberard98
|
open
|
[
"oncall: jit"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**python's Enums have changed subtly from 3.10 to 3.12**. See the comment below for more details. This comment contains the original bug report (for an enum).
repro:
```python
import torch
from enum import Enum
class MyOptions(str, Enum):
ABC = "abc"
DEF = "def"
def __str__(self) -> str:
return str.__str__(self)
def my_fn() -> dict[str, torch.Tensor]:
return {MyOptions.ABC.value: torch.randn(4, 4)}
torch.jit.script(my_fn)()
```
error:
```
Traceback (most recent call last):
File "/data/users/dberard/pytorch-env2/scripts/jit_mi.py", line 14, in <module>
torch.jit.script(my_fn)()
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/dberard/pytorch-env2/pytorch/torch/jit/_script.py", line 1443, in script
ret = _script_impl(
^^^^^^^^^^^^^
File "/data/users/dberard/pytorch-env2/pytorch/torch/jit/_script.py", line 1214, in _script_impl
fn = torch._C._jit_script_compile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/dberard/pytorch-env2/pytorch/torch/jit/annotations.py", line 491, in try_ann_to_type
scripted_class = torch.jit._script._recursive_compile_class(ann, loc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/dberard/pytorch-env2/pytorch/torch/jit/_script.py", line 1618, in _recursive_compile_class
return _compile_and_register_class(obj, rcb, _qual_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/dberard/pytorch-env2/pytorch/torch/jit/_recursive.py", line 60, in _compile_and_register_class
script_class = torch._C._jit_script_class_compile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError:
attribute lookup is not defined on __str__:
File "/data/users/dberard/pytorch-env2/scripts/jit_mi.py", line 9
def __str__(self) -> str:
return str.__str__(self)
~~~~~~~~~~~ <--- HERE
'MyOptions.__str__' is being compiled since it was called from '__torch__.MyOptions'
File "/data/users/dberard/pytorch-env2/scripts/jit_mi.py", line 12
def my_fn() -> dict[str, torch.Tensor]:
return {MyOptions.ABC.value: torch.randn(4, 4)}
~~~~~~~~~ <--- HERE
'__torch__.MyOptions' is being compiled since it was called from 'my_fn'
File "/data/users/dberard/pytorch-env2/scripts/jit_mi.py", line 12
def my_fn() -> dict[str, torch.Tensor]:
return {MyOptions.ABC.value: torch.randn(4, 4)}
~~~~~~~~~~~~~~~~~~~ <--- HERE
```
### Versions
python 3.12, pytorch from source ~Apr 2 2025, A100 (but I think gpu doesn't matter).
python 3.10 does not repro.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,967,647,901
|
[cuda] Added CUDA kernels for RMSNorm
|
ahmadsharif1
|
open
|
[] | 3
|
CONTRIBUTOR
|
This speeds up RMSNorm in eager mode by 2-5x for both the forward and backward passes.
Example:

TODO: Fix the regressions in the narrow case once CI is green
This PR is still a draft.
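For reference, a minimal eager-mode snippet that exercises the forward and backward passes targeted here, using the public `torch.nn.RMSNorm` module (benchmark numbers are in the chart above, not reproduced here):
```python
import torch

rms = torch.nn.RMSNorm(4096, device="cuda", dtype=torch.bfloat16)
x = torch.randn(32, 4096, device="cuda", dtype=torch.bfloat16, requires_grad=True)

y = rms(x)          # forward
y.sum().backward()  # backward
```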
| true
|
2,967,645,348
|
[FlexAttention] Remove dead code
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: flex attention"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150575
cc @Chillee @yanboliang @BoyuanFeng
| true
|
2,967,627,272
|
[mem profiler] mem fragmentation and pynvml view
|
sfc-gh-sbekman
|
open
|
[
"module: cuda",
"module: memory usage",
"triaged",
"module: CUDACachingAllocator"
] | 15
|
NONE
|
### 🚀 The feature, motivation and pitch
As we know, what the CUDA allocator shows doesn't have a 1:1 correlation with actual free memory because of fragmentation, so one often gets an OOM while there are many GBs of "free" memory that are simply fragmented.
I'm trying to figure out where in the code the fragmentation happens, and if I dump CUDA memory stats and pynvml memory I see a very big discrepancy between what the CUDA allocator reports and what pynvml reports: a ~10-30GB difference in my particular use case. I.e. the memory profiler shows at most 50GB ever allocated, yet pynvml shows ~80GB and easily OOMs. Where did the additional 30GB go?
It'd be useful to find where such divergence happens, other than by manually inserting prints of memory stats.
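For concreteness, this is the kind of manual print the request is about making unnecessary: comparing allocator-visible memory with what the driver reports via pynvml (a sketch; note the NVML device index may differ from the CUDA index when CUDA_VISIBLE_DEVICES is set):
```python
import pynvml
import torch

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(torch.cuda.current_device())

def report(tag):
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    driver_used = pynvml.nvmlDeviceGetMemoryInfo(handle).used / 2**30
    # driver_used - reserved is memory the caching allocator never sees:
    # CUDA context, kernels, NCCL buffers, other libraries, etc.
    print(f"{tag}: allocated={allocated:.1f}GiB reserved={reserved:.1f}GiB driver={driver_used:.1f}GiB")

report("after init")
```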
So I wonder if perhaps mem_profiler could be extended to overlay pynvml memory usage on top of memory allocations - so it should be easier to see which operation may have led to creating holes / "leaked" some pynvml memory .
But in general it'd be super useful to have a tool that shows the user where in their code memory fragmentation happened. If you have some tips I'm all ears.
Additionally I'm being told that there are components that allocate memory but this is not accounted for in mem tracker. This includes kernels loading on cuda init (easy, since it's fixed and early on), then I've discovered `torch.dist` allocates some 1-2GB - and there is no accounting for it in memory profiler. What else is not being accounted for? **Can we somehow make mem profiler account for all memory allocations including non-`torch.cuda` ones?** (even if it says - wasn't done by me, but surely it could prompt pynvml and check if memory has changed despite it not registering some malloc in the same way it reports blocks as unknown if it's launched after some memory was already allocated).
cc @ptrblck @msaroufim @eqy
| true
|
2,967,595,114
|
[Bugfix] Fix compile error with `torch.Tensor.unsqueeze_` and inplace views called from Tensor Class
|
Lucaskabela
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Fixes #129673
### Summary:
Modifying a tensor by reshaping in place (such as `unsqueeze_`) should cause a graph break; however, when the method is accessed through the `torch.Tensor` API (as opposed to as a `self` attribute), the code crashed with an error instead (see attached issue)
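A condensed sketch of the two call paths (adapted from the description here, not the exact test added by the PR):
```python
import torch

@torch.compile(fullgraph=False)
def via_method(x):
    x.unsqueeze_(0)                # in-place view on a graph input: falls back / graph-breaks
    return x

@torch.compile(fullgraph=False)
def via_class(x):
    torch.Tensor.unsqueeze_(x, 0)  # previously crashed instead of graph-breaking
    return x

via_method(torch.randn(3))
via_class(torch.randn(3))
```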
Paths differed when traced due to the stack variable popped, as:
* `self.unsqueeze_` pops a `LazyVariableTracker` which gets resolved to `TensorVariable`, so when looking for the method, triggers the fn call `var_getattr` in `_dynamo/variables/tensor.py`; since this is an inplace view (metadata mutation) on graph input, it is not well supported so should fall back (see [L446](https://github.com/pytorch/pytorch/blob/1017927c83dd95a4be6074c48e0fb38f0a1bd8f3/torch/_dynamo/variables/tensor.py#L446) in that file)
* `torch.Tensor.unsqueeze` pops a `UserDefinedClassVariable` so when looking for the method, triggers the fn call `var_getattr` in `_dynamo/variables/user_defined.py` on [L273](https://github.com/pytorch/pytorch/blob/a8f6b40e36bc4afe4e58568620a008c9a8a8704e/torch/_dynamo/variables/user_defined.py#L273). This path tries to build a variable tracker from the obj popped, which resolves to a trace_rule , and as a Tensor method, is resolved to `TorchInGraphFunctionVariable` on [L3767](https://github.com/pytorch/pytorch/blob/a8f6b40e36bc4afe4e58568620a008c9a8a8704e/torch/_dynamo/trace_rules.py#L3767)
So, one straightforward option is to check if the fn is an inplace_view on a input tensor in `torch.py` when we resolve the `__call__function` for the `TorchInGraphFunctionVariable` instead, which resolves the bug by providing a graph break
### Test
```
pytest test/dynamo/test_functions.py::FunctionTests::test_unsqueeze_inplace
```
Results in
```
Running 1 items in this shard
test/dynamo/test_functions.py . [100%]
=========================================================================================== 1 passed in 9.16s ==========================================================================================
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,967,572,036
|
Revert "[fx] Move Node._prepend/Node._remove_from_list to C++ (#148261)"
|
atalman
|
closed
|
[
"release notes: fx",
"fx",
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
This reverts commit 5d4e7d58b42623a9024a84f0050967ff0318dcdb.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,967,546,762
|
Improve speed of pytorch docs build
|
svekars
|
open
|
[
"module: build",
"module: docs",
"triaged",
"topic: build"
] | 1
|
CONTRIBUTOR
|
### 📚 The doc issue
**Current Situation:**
* The documentation build process takes approximately 35+ minutes:
  * 15 minutes for building torch on linux-jammy-py3.9-gcc11.
  * 20 minutes for the doc build.
  * 10 minutes to upload for preview.
**Problem:**
* Rebuilding torch is unnecessary for changes limited to `.rst` or `.md` files, such as symbol changes or adding :noindex: to an autosummary directive.
### Suggest a potential alternative/fix
**Proposed Solution:**
1. **Skip Torch Rebuild:** Implement a check to bypass the torch build when only ``.rst`` or ``.md`` files are modified (see the sketch below).
2. **Optimize Sphinx Build:** Investigate caching mechanisms to speed up the Sphinx build process.
Just by skipping the torch rebuild, we will reduce total docs build time to approximately 20 minutes (with all artifacts downloads, etc.) for documentation-only changes. If Sphinx build can be cached as well, the doc build time could be even less.
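A rough sketch of the check in step 1 (illustrative only; the real check would live in the CI workflow, and the ref names here are placeholders):
```python
import subprocess

# Decide whether the torch rebuild can be skipped for a documentation-only change.
changed = subprocess.check_output(
    ["git", "diff", "--name-only", "origin/main...HEAD"], text=True
).splitlines()

docs_only = bool(changed) and all(f.endswith((".rst", ".md")) for f in changed)
print("skip torch build" if docs_only else "full build required")
```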
cc: @clee2000
cc @malfet @seemethere @sekyondaMeta @AlannaBurke
| true
|
2,967,527,621
|
[aoti] Fix cannot determine truth value of Relation error when propagating unbacked symint in lowering
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Summary: Fix cannot determine truth value of Relation error when propagating unbacked symint in lowering
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r aoti_runtime_asserts
```
Differential Revision: D72331070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,967,504,618
|
Enable lazy cloning in `Tensor.to` between CPU and MPS
|
kurtamohler
|
open
|
[
"open source",
"release notes: lazy",
"release notes: mps",
"ciflow/mps"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150569
* #150721
* #148408
| true
|
2,967,502,705
|
Overload unary - operator on at::vec::Vectorized to call neg()
|
swolchok
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: cpp"
] | 19
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150568
* #150380
Makes Vectorized look even more like a scalar type, getting me closer to being able to use the same generic code with scalars and Vectorized (e.g., for sigmoid, which needs `exp(-x)`).
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,967,460,019
|
Initial Implementation of Padded Tensor
|
alexanderb14
|
open
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 4
|
NONE
|
This PR introduces the initial implementation of `PaddedTensor`, a Tensor subclass, enabling `reduce-overhead` performance benefits for workloads with dynamic shapes.
## Background and Motivation
Currently, reduce-overhead requires statically shaped models due to constraints in the CUDAGraphs backend. This limitation leads to significant CUDAGraph rerecordings for every shape change, hindering performance. By introducing `PaddedTensor`, we aim to enable reduce-overhead for dynamically shaped models, unlocking performance potential.
## Design
`PaddedTensor` is a Tensor subclass that automatically pads tensors to the given multipliers, resulting in fewer CUDAGraph recordings.
## Challenges
`PaddedTensor` comes with a design challenge -- the need to propagate the original tensor's shape along with the computation. For `torch.compile`, this is challenging, as it means that the shape computation needs to be part of the traced graph as well.
## Design and Implementation
After carefully iterating on several approaches, the approach in this PR represents the original tensor as an inner meta tensor and dispatches ATen and `function` ops on them, along with the data carrying outer padded tensor, which will be static / CUDAGraph'ed. For the inner meta tensor, no data computation is performed.
Contrasting to previous considered approaches, this design combines the benefits:
1. No per-op shape rule maintenance: Unlike representing the original shape as a `torch.Tensor` with manual propagation rules for each ATen op #149140 , the shapes are automatically formed using existing symbolic shape propagation infrastructure.
2. Use of original (symbolic) shape in subsequent operations: This is a requirement for handling masking inputs to operations, i.e. when different operations have different neutral elements. The FakeTensor approach #149241 doesn't align with this requirement, as FakeTensors are created in a separate inner isolated environment that is not part of the traced graph.
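A minimal sketch of the padding step described above (a hypothetical helper, not the PR's `PaddedTensor.from_tensor` implementation): each dimension listed in `multipliers` is rounded up to the next multiple, so that many dynamic shapes map to one static padded shape.
```python
import torch
import torch.nn.functional as F

def pad_to_multiples(t: torch.Tensor, multipliers: dict) -> torch.Tensor:
    # F.pad takes pad amounts ordered from the last dimension to the first.
    pad = []
    for dim in reversed(range(t.dim())):
        m = multipliers.get(dim, 1)
        target = -(-t.size(dim) // m) * m  # ceil to the next multiple of m
        pad.extend([0, target - t.size(dim)])
    return F.pad(t, pad)

print(pad_to_multiples(torch.randn(5, 7), {0: 16, 1: 16}).shape)  # torch.Size([16, 16])
```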
## Example: Pointwise operation, with both dimensions padded
```python
def f(a, b):
return a + b
f = torch.compile(f, fullgraph=True)
multipliers = {0: 16, 1: 16}
for i in range(3, 9):
a_p = PaddedTensor.from_tensor(torch.randn([i, i]), multipliers)
b_p = PaddedTensor.from_tensor(torch.randn([i, i]), multipliers)
y_p = f(a_p, b_p)
print(y_p)
```
### Lowering 1: No masking
```python
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "f32[16, 16]", arg1_1: "f32[s6, s43]", arg2_1: "f32[16, 16]", arg3_1: "f32[s6, s43]"):
# File: /data/users/alexbrauckmann/pytorch-master/test/inductor/test_padded_tensor.py:133 in f, code: return a + b
add: "f32[16, 16]" = torch.ops.aten.add.Tensor(arg0_1, arg2_1); arg0_1 = arg2_1 = None
# No stacktrace found for following nodes
empty_strided_default = torch.ops.aten.empty_strided.default((s6, s43), (s43, 1), dtype = torch.float32, device = device(type='meta'))
return (add, empty_strided_default)
```
### Lowering 2: Masking padded slices
```python
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "f32[16, 16]", arg1_1: "f32[s6, s43]", arg2_1: "f32[16, 16]", arg3_1: "f32[s6, s43]"):
# File: /data/users/alexbrauckmann/pytorch-master/test/inductor/test_padded_tensor.py:133 in f, code: return a + b
add: "f32[16, 16]" = torch.ops.aten.add.Tensor(arg0_1, arg2_1); arg0_1 = arg2_1 = None
sym_size_int: "Sym(s6)" = torch.ops.aten.sym_size.int(arg1_1, 0)
sym_size_int_1: "Sym(s43)" = torch.ops.aten.sym_size.int(arg1_1, 1); arg1_1 = None
full_default: "f32[]" = torch.ops.aten.full.default([], 0.0, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'), pin_memory = False)
slice_1: "f32[16 - s6, 16]" = torch.ops.aten.slice.Tensor(add, 0, sym_size_int, 9223372036854775807)
slice_2: "f32[16 - s6, 16 - s43]" = torch.ops.aten.slice.Tensor(slice_1, 1, sym_size_int_1, 9223372036854775807); slice_1 = None
copy: "f32[16 - s6, 16 - s43]" = torch.ops.aten.copy.default(slice_2, full_default); slice_2 = full_default = None
slice_3: "f32[16 - s6, 16]" = torch.ops.aten.slice.Tensor(add, 0, sym_size_int, 9223372036854775807)
slice_scatter: "f32[16 - s6, 16]" = torch.ops.aten.slice_scatter.default(slice_3, copy, 1, sym_size_int_1, 9223372036854775807); slice_3 = copy = sym_size_int_1 = None
slice_scatter_1: "f32[16, 16]" = torch.ops.aten.slice_scatter.default(add, slice_scatter, 0, sym_size_int, 9223372036854775807); add = slice_scatter = sym_size_int = None
# No stacktrace found for following nodes
empty_strided_default = torch.ops.aten.empty_strided.default((s6, s43), (s43, 1), dtype = torch.float32, device = device(type='meta'))
return (slice_scatter_1, empty_strided_default)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,967,451,151
|
[MPSInductor] Speedup `sum`/`prod` reductions
|
malfet
|
closed
|
[
"Merged",
"topic: performance",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150566
By using cooperative `simd_sum`/`simd_product` instead of a C-style for loop for threadgroup reductions. This also allows significantly reducing the amount of shared memory needed to perform those reductions.
Using such reductions increases the `torch.compile` performance for gpt-fast using `stories110M` from 29 tokens/sec to 630 tokens/sec on M4 and changes the perf of torch.rand as follows:
|size| before | after |
|------------------------|------------|-------------|
| 512x512 | 202.1 | 131.8 |
| 1024x1024 | 780.6 | 176.9 |
| 2048x2048 | 1423.4 | 339.9 |
| 4096x4097 | 2982.2 | 1047.2 |
Unfortunately, none of the SIMDgroup operations are available for 64-bit integers, but one can simulate the behavior using `simd_shuffle_down` on 64-bit values represented as `int2` types, which yields a reduction in $log_2(threadgroup\\_size)$ steps. [`mlx/kernels/reduction/ops.h`](https://github.com/ml-explore/mlx/blob/86389bf9707f46101af45d90510e8e97c8a90b93/mlx/backend/metal/kernels/reduction/ops.h#L15-L18) contains an implementation of such an algorithm, but alas it yields wrong results on M1/M2 (and maybe M3 machines) if not all threads in the simdgroup are active, which can be observed by running
```python
import torch
lib=torch.mps.compile_shader("""
kernel void do_sum(device int* out, constant int* in, uint idx [[thread_position_in_grid]]) {
out[idx] = metal::simd_shuffle_down(in[idx], 8);
}
""")
x=torch.arange(22, device='mps', dtype=torch.int32)
y=torch.empty_like(x)
lib.do_sum(y, x)
print(y)
```
that returns following on M4
```
tensor([ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 0, 0, 0, 0, 0, 0, 0, 0], device='mps:0', dtype=torch.int32)
```
but same kernel running on M1 returns
```
tensor([ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 14, 15, 16, 17, 18, 19, 20, 21], device='mps:0', dtype=torch.int32)
```
This discrepancy in behavior can be addressed by using `simd_shuffle_and_fill_down`, but any kernel using `simd_shuffle_and_fill_down` causes an internal compiler error on MacOS-13.2. Considering that this OS is to be EOL soon, the offending tests are skipped.
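For intuition, the log2-step shuffle-down reduction described above can be emulated in plain Python (illustration only; the real kernels run in Metal across a SIMD group):
```python
# Emulate a shuffle-down tree reduction over a 32-lane "SIMD group":
# at each step, active lanes add the value held `offset` lanes away,
# so the full sum reaches lane 0 in log2(32) = 5 steps.
vals = list(range(32))
offset = len(vals) // 2
while offset:
    for lane in range(offset):
        vals[lane] += vals[lane + offset]
    offset //= 2
print(vals[0] == sum(range(32)))  # True
```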
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,967,422,917
|
Move formulas on separate line in loss.py
|
svekars
|
closed
|
[
"module: docs",
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Move formulas on separate line in loss.py for better readability.
cc @sekyondaMeta @AlannaBurke
| true
|
2,967,265,103
|
Experiment with user buffer registration for FSDP2
|
lw
|
open
|
[
"oncall: distributed",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150564
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,967,264,915
|
Fix detection of GPU multicast
|
lw
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150564
* __->__ #150563
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,967,253,836
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_float64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_float64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39857047873).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,967,248,452
|
[invoke_subgraph] Force grad_outs to be contiguous at tracing time
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150561
* #150556
* #150486
* #150450
* #150082
I am unable to come up with a standalone test case. With this change, many end-to-end tests that previously failed with ReshapeError now pass, e.g. https://ossci-raw-job-status.s3.amazonaws.com/log/39717218372

| true
|
2,967,226,014
|
ci: Set minimum cmake version for halide build
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150560
This was failing due to pybind being strict about their cmake version
requirements.
This resolves errors like:
```
652.1 Compatibility with CMake < 3.5 has been removed from CMake.
652.1
652.1 Update the VERSION argument <min> value. Or, use the <min>...<max> syntax
652.1 to tell CMake that the project requires at least <min> but has been updated
652.1 to work with policies introduced by <max> or earlier.
652.1
652.1 Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.
652.1
652.1
652.1 -- Configuring incomplete, errors occurred!
```
Tested this locally with the following command:
```
./build.sh pytorch-linux-jammy-py3.12-halide -t 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-jammy-py3.12-halide:8a8989876ff1aa1d5b0e465177afebbc7a9da921
```
Closes https://github.com/pytorch/pytorch/issues/150420
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,967,171,305
|
[submodule] [Snapshot/Profiler] Memory Snapshot On Demand
|
sraikund16
|
closed
|
[
"enhancement",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler"
] | 18
|
CONTRIBUTOR
|
Summary:
Profiler side of memory snapshot.
1. Add API to actually do snapshot when client interface is called
2. Add ifdefs to builds so that kineto hooks snapshot correctly.
Design Philosophy: There is one interesting part of this implementation, and it is during export. For export we are calling the Python impl of the export rather than the C++ one, even though we are already in C++. This is because it is better to simply have one export path rather than two. Personally, I want there to be parity between auto-trace and on-demand, so if we can limit the side paths we will have an easier time maintaining this relationship.
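For reference, the existing Python-side (auto-trace) snapshot flow that on-demand is meant to mirror looks roughly like this (requires a CUDA build; `_record_memory_history` is the documented but private entry point and may change):
```python
import torch

torch.cuda.memory._record_memory_history(max_entries=100000)
x = torch.randn(1024, 1024, device="cuda")
y = x @ x
torch.cuda.memory._dump_snapshot("snapshot.pickle")  # view at pytorch.org/memory_viz
torch.cuda.memory._record_memory_history(enabled=None)  # stop recording
```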
Test Plan: {F1976563426}
Reviewed By: sanrise
Differential Revision: D70733247
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,967,126,866
|
Binary docker builds - use image tagged with folder sha
|
clee2000
|
closed
|
[
"ciflow/binaries",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
It is hard to test the docker images that are built for binaries because the binary workflows are hard coded to run on an image from docker.io with the main tag. To test, you have to make a change to generate_binary_build_matrix to fetch the correct tag from aws ecr and open a separate PR to test it.
insert example pr here from when i tested nccl
The main idea is to make the binary docker build more similar to the CI docker builds, where, if the .ci/docker folder is changed, a new docker image gets built that is identified by the hash of the .ci/docker folder. CI jobs then pull the docker image identified by that folder hash. For the binary docker images, this includes pushing docker images to ecr in addition to docker.io.
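A rough sketch of that folder-hash idea (hypothetical helper; the real tag is computed by the calculate-docker-image action):
```python
# Derive a stable tag from the contents of .ci/docker so images can be
# rebuilt and looked up whenever anything in that folder changes.
# Assumes it is run from the repository root.
import hashlib
from pathlib import Path

def folder_sha(folder: str = ".ci/docker") -> str:
    h = hashlib.sha256()
    for f in sorted(Path(folder).rglob("*")):
        if f.is_file():
            h.update(f.relative_to(folder).as_posix().encode())
            h.update(f.read_bytes())
    return h.hexdigest()[:12]

# e.g. tag an image as f"pytorch/manylinux-builder:{folder_sha()}" (image name is illustrative)
```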
The main change is using calculate docker image everywhere and renaming things to use the new convention with docker tag prefix separate from docker image name
Overview:
* building: if on ciflow/binaries, upload to aws ecr. If on main/version tag, push to aws ecr and docker io
* aws ecr images get the -foldersha tag like CI. docker io gets the original -main tag, the -headsha tag, and a -foldersha tag. foldersha is the hash of .ci/docker
* fetching: Always fetch the foldersha tag. if is the workflow is on ciflow/binaries, fetch from aws ecr. If on main or version tag, pull from docker io
Cons
* This doesn't work for s390x because they would need read/write access to ecr which they don't have. Could be fixed with an OIDC role? s390x docker builds are also just weird in general
* Binary builds will spin wait for a docker image to finish building (most builds are really fast though <15min)
* Idk how to test this on main
Notes:
* Rebuild whenever .ci/docker changes because we need a way to differentiate between docker images built from different changes to the docker script to ensure that we can pull the specific tag for that change. CI does this by tagging every docker image with the hash of .ci/docker, which contains all the docker scripts. The binary docker images usually have a build script in .ci/docker/almalinux or manywheel, but it uses scripts from .ci/docker/common, so we need a hash that includes the common folder. We also need to make sure a docker image exists for every hash, or else the hash calculation is going to point someone to a nonexistent docker image.
* Pro: Get to reuse calculate-docker-image
* Con: This results in more image rebuilds than actually necessary
* Con: The workflow file is not included in the hash
* Cons could be resolved by using custom hash, but that seem like extra work
* Reusable action for binary docker builds - they're all pretty much the same so I wanted to consolidate instead of putting a step for calculate docker image everywhere
* Also converted some things to a matrix - not necessary but maybe we could put every binary docker build into 1 workflow with the matrix?
* Changes to get rid of env vars like GPU_ARCH_TYPE - all the information necessary is already in the image name and tag
* Script simplifications - I moved the docker pushes out of the script and into the workflow, so a lot of the variables are unneeded now. Also made some things more similar to the .ci/docker/build.sh script used for CI images
* changes to calculate docker image action in test infra - accept a custom tag prefix, so the container name will be imagename:tag-prefix-folderhash if tag-prefix is given, and imagename:folderhash if not
| true
|
2,967,083,116
|
torch.compile specific Exceptions are not serializable
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"vllm-compile"
] | 3
|
CONTRIBUTOR
|
See https://github.com/vllm-project/vllm/issues/15592 for motivation; e.g. BackendCompilerFailed is not serializable. We should understand the serializability constraint and then determine if we want to make these exceptions serializable.
There are known issues, such as the frame object in BackendCompilerFrame not being serializable.
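A generic illustration of the frame problem (not the dynamo classes themselves): any exception whose `__dict__` holds a frame object fails to pickle.
```python
import pickle
import sys

class CarriesFrame(Exception):
    def __init__(self, msg):
        super().__init__(msg)
        self.frame = sys._getframe()  # frame objects are not picklable

try:
    pickle.dumps(CarriesFrame("boom"))
except TypeError as e:
    print("not serializable:", e)  # cannot pickle 'frame' object
```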
cc @chauhang @penguinwu
| true
|
2,967,080,506
|
[invoke_subgraph][min-cut partitioner] Fix bug to use the correct root module
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150561
* __->__ #150556
* #150486
* #150450
* #150082
| true
|
2,967,071,724
|
Use 'rocm' naming for rocm-related workflows/jobs
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ciflow/rocm",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300"
] | 3
|
COLLABORATOR
|
Reduces number of places in the workflow files needing update for ROCm version update
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,967,059,245
|
Update torch-xpu-ops commit pin to 98c808d
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
Update the torch-xpu-ops commit to [98c808dea6de7330c415aa777d6921944cf79887](https://github.com/intel/torch-xpu-ops/commit/98c808dea6de7330c415aa777d6921944cf79887), which includes:
- Fixes #150001 by removing pre-CXX11 ABI logic from build script for XPU
- Fixes #150430
- Fixes XCCL build issue caused by PR #150398
| true
|
2,967,036,718
|
[aoti] make a check function for each input
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Summary: make a check function for each input to avoid a "too large to optimize" error on `__check_inputs_outputs`
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r runtime_checks
```
Differential Revision: D72286280
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,967,029,974
|
Fix link formatting in cpp_extension.py
|
svekars
|
open
|
[
"module: docs",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fix link formatting
cc @sekyondaMeta @AlannaBurke
| true
|
2,967,015,168
|
ci: Use cache / progress when local docker build
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150560
* __->__ #150551
It's a bit annoying to work on these locally when the cache / progress output isn't being used, so let's set things up so that those flags are only applied when running in CI.
`${CI}` is a default environment variable that's defined by actions
itself.
See https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables#default-environment-variables
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,966,990,699
|
[Release/2.7][MPS] Warn that torch.compile is a prototype
|
malfet
|
closed
|
[
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
And reference https://github.com/pytorch/pytorch/issues/150121
| true
|
2,966,988,622
|
Address Cmake update issue in windows magma builds
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
1. Fixes Cmake update error: https://github.com/pytorch/pytorch/actions/runs/14223930697/job/39858632864
```
CMake Error at CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.5 has been removed from CMake.
Update the VERSION argument <min> value. Or, use the <min>...<max> syntax
to tell CMake that the project requires at least <min> but has been updated
to work with policies introduced by <max> or earlier.
Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.
```
2. Removes deprecated CUDA 12.4 build
| true
|
2,966,976,769
|
Segmentation fault when using torch.tensor from a non python created thread
|
jabraham17
|
open
|
[
"module: cpp",
"triaged",
"module: pybind",
"release notes: python_frontend"
] | 2
|
NONE
|
### 🐛 Describe the bug
When trying to run PyTorch from a non-Python created thread, I am finding that using 2.6.0 runs into a segmentation fault. This appears to be a regression from 2.5.0, as 2.5.0 and 2.4.0 both work fine with the exact same code.
<details>
<summary> Full C code to reproduce </summary>
```c
//
// Compile this code:
// clang repro.c -o repro -pthread $(python3.12-config --embed --cflags --ldflags) -g
//
//
// Install torch
// python3.12 -m pip install torch==2.6.0 numpy --target=torch26
// python3.12 -m pip install torch==2.5.0 numpy --target=torch25
// python3.12 -m pip install torch==2.4.0 numpy --target=torch25
//
//
// Run the code:
// torch 2.6 fails
// PYTHONPATH=torch26 ./repro
// torch 2.5 works
// PYTHONPATH=torch25 ./repro
// torch 2.4 works
// PYTHONPATH=torch24 ./repro
//
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <pthread.h>
PyObject* getList() {
PyGILState_STATE gstate = PyGILState_Ensure();
PyObject* list = PyList_New(0);
if (list == NULL) {
PyErr_Print();
PyGILState_Release(gstate);
return NULL;
}
PyObject* sublist1 = PyList_New(0);
if (sublist1 == NULL) {
PyErr_Print();
PyGILState_Release(gstate);
return NULL;
}
PyList_Append(sublist1, PyLong_FromLong(1));
PyList_Append(sublist1, PyLong_FromLong(2));
PyObject* sublist2 = PyList_New(0);
if (sublist2 == NULL) {
PyErr_Print();
PyGILState_Release(gstate);
return NULL;
}
PyList_Append(sublist2, PyLong_FromLong(3));
PyList_Append(sublist2, PyLong_FromLong(4));
PyList_Append(list, sublist1);
PyList_Append(list, sublist2);
PyGILState_Release(gstate);
return list;
}
PyObject* getModule() {
PyGILState_STATE gstate = PyGILState_Ensure();
PyObject* torch_module = PyImport_ImportModule("torch");
if (torch_module == NULL) {
PyErr_Print();
PyGILState_Release(gstate);
return NULL;
}
PyGILState_Release(gstate);
return torch_module;
}
PyObject* getTensorConstructor(PyObject* mod) {
PyGILState_STATE gstate = PyGILState_Ensure();
PyObject* tensor_cons = PyObject_GetAttrString(mod, "tensor");
if (tensor_cons == NULL) {
PyErr_Print();
PyGILState_Release(gstate);
return NULL;
}
PyGILState_Release(gstate);
return tensor_cons;
}
PyObject* getTensor(PyObject* cons, PyObject* list) {
PyGILState_STATE gstate = PyGILState_Ensure();
PyObject* tensor = PyObject_CallOneArg(cons, list);
if (tensor == NULL) {
PyErr_Print();
PyGILState_Release(gstate);
return NULL;
}
PyGILState_Release(gstate);
return tensor;
}
void printTensor(PyObject* tensor) {
PyGILState_STATE gstate = PyGILState_Ensure();
PyObject_Print(tensor, stdout, 0);
printf("\n");
PyGILState_Release(gstate);
}
void* worker(void* arg) {
PyObject* list = getList();
if (list == NULL) {
return NULL;
}
PyObject* mod = getModule();
if (mod == NULL) {
return NULL;
}
PyObject* cons = getTensorConstructor(mod);
if (cons == NULL) {
return NULL;
}
PyObject* tensor = getTensor(cons, list);
if (tensor == NULL) {
return NULL;
}
printTensor(tensor);
return NULL;
}
int main() {
Py_Initialize();
PyGILState_STATE gstate = PyGILState_Ensure();
PyThreadState* tstate = PyEval_SaveThread();
pthread_t thread;
pthread_create(&thread, NULL, worker, NULL);
pthread_join(thread, NULL);
PyEval_RestoreThread(tstate);
PyGILState_Release(gstate);
Py_Finalize();
return 0;
}
```
</details>
To reproduce the problem
1. Compile the code: `clang repro.c -o repro -pthread $(python3.12-config --embed --cflags --ldflags) -g`
2. Install torch 2.6.0, I used `python3.12 -m pip install torch==2.6.0 numpy --target=torch26`
3. Run the code, explicitly passing the PYTHONPATH since the C API interpreter does not understand something like venv out of the box: `PYTHONPATH=torch26 ./repro`
This segfaults on `PyObject* tensor = PyObject_CallOneArg(cons, list);`, when trying to create a new tensor from a Python list. This code works perfectly fine with torch 2.5 and 2.4.
Backtrace from lldb:
```
* thread #2, stop reason = EXC_BAD_ACCESS (code=1, address=0x68)
* frame #0: 0x000000010086ec18 Python`take_gil + 76
frame #1: 0x000000010086f2b4 Python`PyEval_AcquireThread + 28
frame #2: 0x00000001026de75c libtorch_python.dylib`pybind11::gil_scoped_acquire::gil_scoped_acquire() + 104
frame #3: 0x00000001030d6268 libtorch_python.dylib`void std::__1::__call_once_proxy[abi:ue170006]<std::__1::tuple<pybind11::gil_safe_call_once_and_store<pybind11::object>& pybind11::gil_safe_call_once_and_store<pybind11::object>::call_once_and_store_result<torch::get_symfloat_class()::$_1>(torch::get_symfloat_class()::$_1&&)::'lambda'()&&>>(void*) + 56
frame #4: 0x000000018b7e799c libc++.1.dylib`std::__1::__call_once(unsigned long volatile&, void*, void (*)(void*)) + 196
frame #5: 0x00000001030d5f9c libtorch_python.dylib`torch::get_symfloat_class() + 112
frame #6: 0x00000001030e29a4 libtorch_python.dylib`torch::utils::(anonymous namespace)::infer_scalar_type(_object*) + 84
frame #7: 0x00000001030e3778 libtorch_python.dylib`torch::utils::(anonymous namespace)::internal_new_from_data(c10::TensorOptions, c10::ScalarType, std::__1::optional<c10::Device>, _object*, bool, bool, bool, bool) + 2132
frame #8: 0x00000001030e9108 libtorch_python.dylib`torch::utils::tensor_ctor(c10::DispatchKey, c10::ScalarType, torch::PythonArgs&) + 500
frame #9: 0x0000000102c13f94 libtorch_python.dylib`torch::autograd::THPVariable_tensor(_object*, _object*, _object*) + 320
frame #10: 0x000000010079d91c Python`cfunction_call + 72
frame #11: 0x000000010074ad00 Python`_PyObject_MakeTpCall + 128
frame #12: 0x000000010074bd90 Python`PyObject_CallOneArg + 184
frame #13: 0x0000000100003de8 repro`worker [inlined] getTensor(cons=0x0000000100695d50, list=0x00000001001e0a00) at repro.c:89:22 [opt]
frame #14: 0x0000000100003dd4 repro`worker(arg=<unavailable>) at repro.c:124:22 [opt]
frame #15: 0x000000018b8c42e4 libsystem_pthread.dylib`_pthread_start + 136
```
Result from compiling with `-fsanitize=address`. Note that the same executable compiled with torch 2.5.0 works fine and does not trigger asan.
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==20477==ERROR: AddressSanitizer: SEGV on unknown address 0x000800000039 (pc 0x00018b8be8e0 bp 0x00016d3b68b0 sp 0x00016d3b68b0 T1)
==20477==The signal is caused by a READ memory access.
#0 0x18b8be8e0 in pthread_mutex_lock+0xc (libsystem_pthread.dylib:arm64e+0x18e0)
#1 0x1036e9c34 in PyThread_release_lock+0x20 (Python:arm64+0x1b9c34)
#2 0x103752578 in release_sentinel+0x38 (Python:arm64+0x222578)
#3 0x1036d3004 in PyThreadState_Clear+0xbc (Python:arm64+0x1a3004)
#4 0x1076fe8a8 in pybind11::gil_scoped_acquire::dec_ref()+0x24 (libtorch_python.dylib:arm64+0x3e8a8)
#5 0x1076fe860 in pybind11::gil_scoped_acquire::~gil_scoped_acquire()+0x10 (libtorch_python.dylib:arm64+0x3e860)
#6 0x1080f62f8 in void std::__1::__call_once_proxy[abi:ue170006]<std::__1::tuple<pybind11::gil_safe_call_once_and_store<pybind11::object>& pybind11::gil_safe_call_once_and_store<pybind11::object>::call_once_and_store_result<torch::get_symfloat_class()::$_1>(torch::get_symfloat_class()::$_1&&)::'lambda'()&&>>(void*)+0xc8 (libtorch_python.dylib:arm64+0xa362f8)
#7 0x18b7e7998 in std::__1::__call_once(unsigned long volatile&, void*, void (*)(void*))+0xc0 (libc++.1.dylib:arm64e+0xe998)
#8 0x1080f5f98 in torch::get_symfloat_class()+0x6c (libtorch_python.dylib:arm64+0xa35f98)
#9 0x1081029a0 in torch::utils::(anonymous namespace)::infer_scalar_type(_object*)+0x50 (libtorch_python.dylib:arm64+0xa429a0)
#10 0x108103774 in torch::utils::(anonymous namespace)::internal_new_from_data(c10::TensorOptions, c10::ScalarType, std::__1::optional<c10::Device>, _object*, bool, bool, bool, bool)+0x850 (libtorch_python.dylib:arm64+0xa43774)
#11 0x108109104 in torch::utils::tensor_ctor(c10::DispatchKey, c10::ScalarType, torch::PythonArgs&)+0x1f0 (libtorch_python.dylib:arm64+0xa49104)
#12 0x107c33f90 in torch::autograd::THPVariable_tensor(_object*, _object*, _object*)+0x13c (libtorch_python.dylib:arm64+0x573f90)
#13 0x1035dd918 in cfunction_call+0x44 (Python:arm64+0xad918)
#14 0x10358acfc in _PyObject_MakeTpCall+0x7c (Python:arm64+0x5acfc)
#15 0x10358bd8c in PyObject_CallOneArg+0xb4 (Python:arm64+0x5bd8c)
#16 0x102ad3938 in worker repro.c:124
#17 0x103b3d858 in asan_thread_start(void*)+0x40 (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x51858)
#18 0x18b8c42e0 in _pthread_start+0x84 (libsystem_pthread.dylib:arm64e+0x72e0)
#19 0x18b8bf0f8 in thread_start+0x4 (libsystem_pthread.dylib:arm64e+0x20f8)
==20477==Register values:
x[0] = 0x0000000800000039 x[1] = 0x00000001056a36d8 x[2] = 0x00000001239bfe98 x[3] = 0x0000000000000008
x[4] = 0x0000000000000000 x[5] = 0x74616f6c466d7953 x[6] = 0x000000016d334000 x[7] = 0x0000000000000001
x[8] = 0x0000000000000001 x[9] = 0x00000001038e1d00 x[10] = 0x0000007024757fd3 x[11] = 0x0000000024737fd3
x[12] = 0x0000000024737fd3 x[13] = 0x0000000000000000 x[14] = 0x0000000000000000 x[15] = 0x0000000000000000
x[16] = 0x000000018b8be8d4 x[17] = 0x0000000103b94100 x[18] = 0x0000000000000000 x[19] = 0x0000000800000039
x[20] = 0x0000000800000001 x[21] = 0x00006120000101c0 x[22] = 0x0000000000000006 x[23] = 0x0000000000000006
x[24] = 0x0000602000dbd730 x[25] = 0x000000016d3b6dc8 x[26] = 0x00000001039b5900 x[27] = 0x0000602000dbd730
x[28] = 0x0000000000000002 fp = 0x000000016d3b68b0 lr = 0x00000001036e9c38 sp = 0x000000016d3b68b0
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (libsystem_pthread.dylib:arm64e+0x18e0) in pthread_mutex_lock+0xc
Thread T1 created by T0 here:
#0 0x103b381c8 in pthread_create+0x5c (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x4c1c8)
#1 0x102ad3a8c in main repro.c:143
#2 0x18b544270 (<unknown module>)
==20477==ABORTING
```
I did all of this on an M1 Mac. I also have access to a linux64 machine with Python 3.10. That setup does not segfault like the Mac one does, but it does have the same ASAN behavior. With torch 2.5.0, ASAN just reports leaks. With torch 2.6.0, it catches a fault
```
ASAN:DEADLYSIGNAL
=================================================================
==40422==ERROR: AddressSanitizer: SEGV on unknown address 0x00000e800688 (pc 0x7fbefb6d16a3 bp 0x7fbefba8c720 sp 0x7fbef5fe4800 T1)
==40422==The signal is caused by a WRITE memory access.
#0 0x7fbefb6d16a2 in tstate_delete_common Python/pystate.c:894
#1 0x7fbefb6d16a2 in _PyThreadState_DeleteCurrent Python/pystate.c:939
#2 0x7fbee5cb702c in pybind11::gil_scoped_acquire::dec_ref() (/.../torch26/torch/lib/libtorch_python.so+0x51102c)
#3 0x7fbee5cb7038 in pybind11::gil_scoped_acquire::~gil_scoped_acquire() (/.../torch26/torch/lib/libtorch_python.so+0x511038)
#4 0x7fbee6547c0a in void std::call_once<pybind11::gil_safe_call_once_and_store<pybind11::object>& pybind11::gil_safe_call_once_and_store<pybind11::object>::call_once_and_store_result<torch::get_symfloat_class()::{lambda()#1}>(torch::get_symfloat_class()::{lambda()#1}&&)::{lambda()#1}>(std::once_flag&, pybind11::gil_safe_call_once_and_store<pybind11::object>& pybind11::gil_safe_call_once_and_store<pybind11::object>::call_once_and_store_result<torch::get_symfloat_class()::{lambda()#1}>(torch::get_symfloat_class()::{lambda()#1}&&)::{lambda()#1})::{lambda()#2}::_FUN() (/.../torch26/torch/lib/libtorch_python.so+0xda1c0a)
#5 0x7fbefb0545a6 in __pthread_once_slow (/lib64/libpthread.so.0+0x135a6)
#6 0x7fbee6547f34 in torch::get_symfloat_class() (/.../torch26/torch/lib/libtorch_python.so+0xda1f34)
#7 0x7fbee655175d in torch::utils::(anonymous namespace)::infer_scalar_type(_object*) (/.../torch26/torch/lib/libtorch_python.so+0xdab75d)
#8 0x7fbee65536f7 in torch::utils::(anonymous namespace)::internal_new_from_data(c10::TensorOptions, c10::ScalarType, std::optional<c10::Device>, _object*, bool, bool, bool, bool) (/.../torch26/torch/lib/libtorch_python.so+0xdad6f7)
#9 0x7fbee6558573 in torch::utils::tensor_ctor(c10::DispatchKey, c10::ScalarType, torch::PythonArgs&) (/.../torch26/torch/lib/libtorch_python.so+0xdb2573)
#10 0x7fbee6094bd1 in torch::autograd::THPVariable_tensor(_object*, _object*, _object*) (/.../torch26/torch/lib/libtorch_python.so+0x8eebd1)
#11 0x7fbefb5d0e66 in cfunction_call Objects/methodobject.c:543
#12 0x7fbefb573208 in _PyObject_MakeTpCall Objects/call.c:215
#13 0x40152a in _PyObject_VectorcallTstate /.../python3.10/include/python3.10/cpython/abstract.h:112
#14 0x40152a in PyObject_CallOneArg /.../python3.10/include/python3.10/cpython/abstract.h:184
#15 0x40152a in getTensor /.../repro.c:89
#16 0x401623 in worker /.../repro.c:124
#17 0x7fbefb04b6e9 in start_thread (/lib64/libpthread.so.0+0xa6e9)
#18 0x7fbefa610a6e in clone (/lib64/libc.so.6+0x117a6e)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV Python/pystate.c:894 in tstate_delete_common
Thread T1 created by T0 here:
#0 0x7fbefbaccc80 in pthread_create (/usr/lib64/libasan.so.4+0x39c80)
#1 0x400fca in main /.../repro.c:143
==40422==ABORTING
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.6
Libc version: N/A
Python version: 3.12.9 (main, Feb 4 2025, 14:38:38) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[conda] Could not collect
cc @jbschlosser
| true
|
2,966,967,328
|
[BE] Fix triton windows build
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #150480
| true
|
2,966,946,686
|
[export] Refactor strict to pass fake tensors to dynamo
|
angelayi
|
open
|
[
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Currently in the strict-export workflow this is what happens:
1. We take example inputs and dynamic shapes, and pass it to Dynamo
a. Dynamo turns the dynamic shapes spec into constraints
b. Dynamo turns the inputs into fake tensors, some with symbolic shapes depending on the constraints
c. After tracing a graph, Dynamo calls the constraint solver to check the constraints and throw ConstraintViolationErrors
2. We take the Dynamo graph, and trace with AOTAutograd to get the ATen IR
In the non-strict export workflow, this is what happens:
1. We take example inputs and dynamic shapes and turn them into fake tensors, some with symbolic shapes depending on the spec
2. We trace with AOTAutograd to get the ATen IR
3. We call the constraint solver to check the constraints and throw ConstraintViolationErrors
This PR nontrivially refactors strict-export to merge some paths with non-strict export:
1. (same as nonstrict) We take example inputs and dynamic shapes and turn them into fake tensors, some with symbolic shapes depending on the spec
2. We pass the fake tensors to Dynamo and get the Torch IR graph
3. (same as nonstrict) We call the constraint solver to check the constraints and throw ConstraintViolationErrors
4. We take the Dynamo graph, and trace with AOTAutograd to get the ATen IR
Consequences of this change:
1. It's easier to support passing dynamic-integer inputs to export [(doc)](https://docs.google.com/document/d/1GvNRQd8BnxlMay_hrEVgEta6VUeUW_hcFeRuB7q1nDY/edit?tab=t.0#heading=h.k3yzhrq4y9l3)
2. All ConstraintViolationErrors are "ConstraintViolationErrors" and not torch._dynamo.exc.UserError in strict-export.
3. assume_constant_result is no longer supported by strict-export. Confirmed this is ok because (1) dynamo doesn't support this well and (2) non-strict export doesn't support this.
4. We need to turn on emitting runtime assertions at the dynamo level to add dynamo guards into the graph.
5. We can eventually remove the duplicate constraint solving logic in [dynamo](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/eval_frame.py#L1700-L1740) and keep the one in [export](https://github.com/pytorch/pytorch/blob/main/torch/_export/non_strict_utils.py#L285-L347)
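A minimal usage sketch (generic example, not this PR's internals) of the dynamic-shapes spec both flows consume:
```python
import torch
from torch.export import export, Dim

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

# The {0: Dim("batch")} spec is what gets turned into constraints / symbolic shapes.
ep = export(M(), (torch.randn(4, 8),), dynamic_shapes=({0: Dim("batch")},))
print(ep)
```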
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,966,929,304
|
[aoti] Split ConstantType definition out of model.h
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
Splitting the type definition of ConstantType into a separate header because it's needed by Sigmoid OSS, but including the entire model.h header causes the following compilation error:
```
2025-04-01T18:12:42.0391272Z FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/nativert/kernels/AOTICallDelegateKernel.cpp.o
2025-04-01T18:12:42.0417705Z /opt/cache/bin/sccache /opt/cache/bin/clang++ -DAT_PER_OPERATOR_HEADERS -DBUILD_ONEDNN_GRAPH -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_ENABLE_LLVM -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -DXNN_LOG_LEVEL=0 -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/var/lib/jenkins/workspace/build/aten/src -I/var/lib/jenkins/workspace/aten/src -I/var/lib/jenkins/workspace/build -I/var/lib/jenkins/workspace -I/var/lib/jenkins/workspace/cmake/../third_party/benchmark/include -I/opt/llvm/include -I/var/lib/jenkins/workspace/third_party/onnx -I/var/lib/jenkins/workspace/build/third_party/onnx -I/var/lib/jenkins/workspace/nlohmann -I/var/lib/jenkins/workspace/torch/csrc/api -I/var/lib/jenkins/workspace/torch/csrc/api/include -I/var/lib/jenkins/workspace/caffe2/aten/src/TH -I/var/lib/jenkins/workspace/build/caffe2/aten/src/TH -I/var/lib/jenkins/workspace/build/caffe2/aten/src -I/var/lib/jenkins/workspace/build/caffe2/../aten/src -I/var/lib/jenkins/workspace/torch/csrc -I/var/lib/jenkins/workspace/third_party/miniz-3.0.2 -I/var/lib/jenkins/workspace/third_party/kineto/libkineto/include -I/var/lib/jenkins/workspace/third_party/kineto/libkineto/src -I/var/lib/jenkins/workspace/third_party/cpp-httplib -I/var/lib/jenkins/workspace/aten/src/ATen/.. -I/var/lib/jenkins/workspace/third_party/FXdiv/include -I/var/lib/jenkins/workspace/c10/.. -I/var/lib/jenkins/workspace/third_party/pthreadpool/include -I/var/lib/jenkins/workspace/third_party/cpuinfo/include -I/var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/qnnpack/deps/clog/include -I/var/lib/jenkins/workspace/third_party/NNPACK/include -I/var/lib/jenkins/workspace/third_party/fbgemm/include -I/
2025-04-01T18:12:42.0444143Z In file included from /var/lib/jenkins/workspace/torch/csrc/nativert/kernels/AOTICallDelegateKernel.cpp:5:
2025-04-01T18:12:42.0445081Z In file included from /var/lib/jenkins/workspace/torch/csrc/nativert/executor/AOTIDelegateExecutor.h:6:
2025-04-01T18:12:42.0446002Z In file included from /var/lib/jenkins/workspace/torch/csrc/nativert/executor/AOTInductorModelImpl.h:5:
2025-04-01T18:12:42.0447549Z /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runtime/model.h:78:13: error: function 'RAII_cpuMalloc' is not needed and will not be emitted [-Werror,-Wunneeded-internal-declaration]
2025-04-01T18:12:42.0448656Z RAIIDataPtr RAII_cpuMalloc(size_t num_bytes) {
```
model.h defines the RAII_malloc functions directly in an anonymous namespace, which seems pretty sad. We should do something about it, but maybe not in the current diff.
Test Plan: CI
Differential Revision: D72320413
| true
|
2,966,861,696
|
[ONNX] dynamic_axes does not rename dynamic dimension in torch.onnx.export
|
xadupre
|
open
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
Unexpected names for the dynamic dimension when dynamic_axes is used instead of dynamic_shapes in torch.onnx.export. Found in https://github.com/huggingface/optimum/pull/2219.
```python
import onnx
import torch
import transformers
from torch.export import Dim
# Load the model
model = transformers.AutoModel.from_pretrained("bert-base-cased")
# Convert the model to ONNX format
input_ids = torch.randint(0, 10, (4, 128), dtype=torch.int64)
attention_mask = torch.ones((4, 128), dtype=torch.int64)
# old, fail with dynamo
dynamic_axes = {
"input_ids": {
0: "batch_size",
1: "sequence_length",
},
"attention_mask": {
0: "batch_size",
1: "sequence_length",
},
}
onnx_program = torch.onnx.export(
model,
(input_ids, attention_mask),
"torch_exported_model.onnx",
dynamic_axes=dynamic_axes, # --> give s0,s1 for the dynamic names
# dynamic_shapes=dynamic_axes, # --> give batch_size,sequence_length for the dynamic names
# export_params=True,
dynamo=True,
)
# Load and save the ONNX model with safetensors
onnx_model = onnx.load("torch_exported_model.onnx")
```
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250327+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] model-explorer-onnx==0.3.4
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-array-api==0.3.0
[pip3] onnx-diagnostic==0.2.1
[pip3] onnx-extended==0.4.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-genai-cuda==0.6.0
[pip3] onnxruntime-training==1.22.0+cu126
[pip3] onnxscript==0.3.0.dev20250301
[pip3] optree==0.14.1
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250327+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250329+cu126
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250329+cu126
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
2,966,827,195
|
add batching rule for `torch.Tensor.scatter_add_`
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: functorch",
"release notes: torch.func"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150543
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,966,815,229
|
Revert "[fx] Move Node._prepend/Node._remove_from_list to C++ (#148261)"
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 12
|
CONTRIBUTOR
|
Reverts #148261 due to possible memory leak
This reverts commit 5d4e7d58b42623a9024a84f0050967ff0318dcdb.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,966,792,337
|
[MPSInductor] Disable mm/bmm decompositions
|
manuelcandales
|
closed
|
[
"Merged",
"topic: performance",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Disables mm/bmm decompositions.
torch.compile on MPS was speeding up stories15M (~4x) but it was making stories110M much slower.
Self-contained reproducer to demonstrate the difference (before this change; after it, the two timings should be identical)
```python
import torch
import timeit
def bench_mm(f, x, y):
from torch.utils.benchmark import Timer
return Timer(stmt="f(x, y); torch.mps.synchronize()",
globals={"x": x, "y": y, "f": f},
language="python", timer=timeit.default_timer).blocked_autorange()
x = torch.rand(1024, 512, device='mps')
y = torch.rand(512, 1, device='mps')
mm_c = torch.compile(torch.mm, options={"coordinate_descent_tuning": False})
mm_c_cdt = torch.compile(torch.mm, options={"coordinate_descent_tuning": True})
print(f"Compiled torch.mm perf (with cdt disabled) for 1024x512 and 512x1 matrices are {bench_mm(mm_c, x, y).median}")
print(f"Compiled torch.mm perf (with cdt enabled) for 1024x512 and 512x1 matrices are {bench_mm(mm_c_cdt, x, y).median}")
```
Disabling the inductor mm decomposition speeds up stories15M further (~6x) and speeds up stories110M (~7x).
The table below show average tokens/sec across 5 runs on M1 Pro for stories15M and stories110M:
| | stories15M | stories110M |
|------------------------|------------|-------------|
| without compile | 99.40 | 53.11 |
| compile before change | 367.68 | 19.43 |
| compile after change | 582.96 | 355.07 |
stories110M (without compile)
```
(gptfast) mcandales@mcandales-mbp gpt-fast % python generate.py --checkpoint_path checkpoints/stories110M/stories110M.pt --prompt "Once upon a time" --device mps
[...]
Average tokens/sec: 53.11
```
stories110M (compile before change)
```
(gptfast) mcandales@mcandales-mbp gpt-fast % python generate.py --checkpoint_path checkpoints/stories110M/stories110M.pt --prompt "Once upon a time" --device mps --compile
[...]
Average tokens/sec: 19.43
```
stories110M (compile after change)
```
(gptfast) mcandales@mcandales-mbp gpt-fast % python generate.py --checkpoint_path checkpoints/stories110M/stories110M.pt --prompt "Once upon a time" --device mps --compile
[...]
Average tokens/sec: 355.07
```
stories15M (without compile)
```
(gptfast) mcandales@mcandales-mbp gpt-fast % python generate.py --checkpoint_path checkpoints/stories110M/stories110M.pt --prompt "Once upon a time" --device mps
[...]
Average tokens/sec: 99.40
```
stories15M (compile before change)
```
(gptfast) mcandales@mcandales-mbp gpt-fast % python generate.py --checkpoint_path checkpoints/stories110M/stories110M.pt --prompt "Once upon a time" --device mps --compile
[...]
Average tokens/sec: 367.68
```
stories15M (compile after change)
```
(gptfast) mcandales@mcandales-mbp gpt-fast % python generate.py --checkpoint_path checkpoints/stories110M/stories110M.pt --prompt "Once upon a time" --device mps --compile
[...]
Average tokens/sec: 582.96
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,966,791,485
|
PropagateUnbackedSymInts does not know about shape checks in guards
|
angelayi
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: aotdispatch",
"module: dynamo",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
def test_runtime_asserts(self):
class M(torch.nn.Module):
def forward(self, x, y):
b = x.item()
torch._check_is_size(b)
torch._check(b < y.shape[0])
return y[:b]
ep = torch.export.export(M(), (torch.tensor(4), torch.randn(10)), dynamic_shapes=(None, {0: Dim.DYNAMIC}), strict=True)
print(ep.module()(torch.tensor(4), torch.ones(10)))
# print(ep.module()(torch.tensor(4), torch.ones(3))) # errors, expected
torch._dynamo.config.capture_scalar_outputs = True
print(torch.compile(M(), fullgraph=True)(torch.tensor(4), torch.ones(10))) # fails w/ DDE
```
The torch.compile call fails with a guard on data dependent error:
```
Traceback (most recent call last):
...
File "/data/users/angelayi/pytorch/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/data/users/angelayi/pytorch/torch/_functorch/aot_autograd.py", line 675, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/data/users/angelayi/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 198, in inner
flat_f_outs = f(*flat_f_args)
File "/data/users/angelayi/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 899, in functional_call
out = PropagateUnbackedSymInts(mod).run(
File "/data/users/angelayi/pytorch/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
File "/data/users/angelayi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 7274, in run_node
result = super().run_node(n)
File "/data/users/angelayi/pytorch/torch/fx/interpreter.py", line 240, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/data/users/angelayi/pytorch/torch/fx/interpreter.py", line 320, in call_function
return target(*args, **kwargs)
File "/data/users/angelayi/pytorch/torch/_subclasses/functional_tensor.py", line 525, in __torch_dispatch__
outs_unwrapped = func._op_dk(
File "/data/users/angelayi/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/data/users/angelayi/pytorch/torch/_subclasses/fake_tensor.py", line 1295, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/angelayi/pytorch/torch/_subclasses/fake_tensor.py", line 1915, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/angelayi/pytorch/torch/_subclasses/fake_tensor.py", line 1398, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/data/users/angelayi/pytorch/torch/_subclasses/fake_tensor.py", line 2444, in _dispatch_impl
decomposition_table[func](*args, **kwargs)
File "/data/users/angelayi/pytorch/torch/_decomp/decompositions.py", line 744, in slice_forward
elif statically_known_true(end_val == sys.maxsize) or guard_size_oblivious(
File "/data/users/angelayi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 410, in guard_size_oblivious
return expr.node.guard_size_oblivious("", 0)
File "/data/users/angelayi/pytorch/torch/fx/experimental/sym_node.py", line 588, in guard_size_oblivious
r = self.evaluate(size_oblivious=True)
File "/data/users/angelayi/pytorch/torch/fx/experimental/sym_node.py", line 510, in evaluate
return self.shape_env.evaluate_sym_node(self, size_oblivious)
File "/data/users/angelayi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6700, in evaluate_sym_node
return self.evaluate_expr(
File "/data/users/angelayi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
File "/data/users/angelayi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6716, in evaluate_expr
return self._evaluate_expr(
File "/data/users/angelayi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6985, in _evaluate_expr
raise self._make_data_dependent_error(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
GuardOnDataDependentSymNode: Could not guard on data-dependent expression u0 > 10 (unhinted: u0 > 10). (Size-like symbols: u0)
```
This is because the dynamo graph from torch.compile fails to insert the `torch._check(u0 < s0)` into the graph.
Dynamo graph from export:
```
class GraphModule(torch.nn.Module):
def forward(self, L_x_: "i64[][]cpu", L_y_: "f32[s0][1]cpu"):
l_x_ = L_x_
l_y_ = L_y_
# File: /data/users/angelayi/pytorch/moo.py:512 in forward, code: b = x.item()
item: "Sym(u0)" = l_x_.item(); l_x_ = None
# File: /data/users/angelayi/pytorch/moo.py:513 in forward, code: torch._check_is_size(b)
_check_is_size = torch._check_is_size(item); _check_is_size = None
# File: /data/users/angelayi/pytorch/moo.py:514 in forward, code: torch._check(b < y.shape[0])
size = l_y_.size()
getitem: "Sym(s0)" = size[0]; size = None
lt: "Sym(u0 < s0)" = item < getitem; getitem = None
_check = torch._check(lt); lt = _check = None
# File: /data/users/angelayi/pytorch/moo.py:515 in forward, code: return y[:b]
getitem_1: "f32[u0][1]cpu" = l_y_[slice(None, item, None)]; l_y_ = item = None
return (getitem_1,)
```
Dynamo graph from torch.compile call:
```
class GraphModule(torch.nn.Module):
def forward(self, L_x_: "i64[][]cpu", L_y_: "f32[10][1]cpu"):
l_x_ = L_x_
l_y_ = L_y_
# File: /data/users/angelayi/pytorch/moo.py:512 in forward, code: b = x.item()
item: "Sym(s0)" = l_x_.item(); l_x_ = None
# File: /data/users/angelayi/pytorch/moo.py:513 in forward, code: torch._check_is_size(b)
_check_is_size = torch._check_is_size(item); _check_is_size = None
# File: /data/users/angelayi/pytorch/moo.py:515 in forward, code: return y[:b]
getitem: "f32[s0][1]cpu" = l_y_[slice(None, item, None)]; l_y_ = item = None
return (getitem,)
```
### Versions
main
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @bdhirsh
| true
|
2,966,768,048
|
AOTI drops runtime asserts
|
angelayi
|
closed
|
[
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
AOTI drops runtime asserts that export adds, which is bad if draft-export makes some assumptions and adds asserts into the graph to ensure soundness.
```python
def test_aoti_runtime_asserts(self):
class M(torch.nn.Module):
def forward(self, x, y):
b = x.item()
torch._check_is_size(b)
torch._check(b < y.shape[0])
return y[:b]
ep = torch.export.export(M(), (torch.tensor(4), torch.randn(10)), dynamic_shapes=(None, {0: Dim.DYNAMIC}), strict=True)
print(ep)
path = torch._inductor.aoti_compile_and_package(ep)
compiled_m = torch._inductor.aoti_load_package(path)
print(compiled_m(torch.tensor(4), torch.ones(10)))
print(compiled_m(torch.tensor(4), torch.ones(3))) # should error, but instead returns tensor([1, 1, 1, 0])
print(M()(torch.tensor(4), torch.ones(10)))
print(M()(torch.tensor(4), torch.ones(3))) # errors
```
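As a hedged follow-up sketch (not part of the original test), reusing `ep` from above to confirm the runtime-assert nodes exist in the exported graph before AOTI compilation; the string-based node matching is an assumption and may need adjusting across versions:
```python
# Count runtime-assert ops in the exported graph; AOTI is expected to preserve
# these, and the bug is that the compiled package behaves as if they were dropped.
assert_nodes = [
    n for n in ep.graph.nodes
    if n.op == "call_function" and "assert" in str(n.target)
]
print(f"runtime assert nodes in exported graph: {len(assert_nodes)}")
```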
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
2,966,690,676
|
.exponential_ has different RNG in nightlies
|
felipemello1
|
closed
|
[
"triaged",
"module: random"
] | 1
|
NONE
|
### 🐛 Describe the bug
Updating to the nightlies broke our tests in torchtune. After investigating, we found that the `exponential_` operation produces different RNG output.
```python
import torch
torch.manual_seed(42)
print("torch.rand(5):", torch.rand(5))
print("torch.empty(5).exponential_(1):", torch.empty(5).exponential_(1))
```
In nightly 2.8.0.dev20250401+cu126 (note: pinning to 2.7.0.dev20250201 still works, so [dev20250202](https://github.com/pytorch/pytorch/commit/738ebb45cad7cef7fa0878ce4f835be59d186a94#diff-ecbd0bdf6ee8fc4eab27af1ec0b4369962dd340547451ac2f396b8f12f2f41bc) appears to be the change that introduced this):
```
torch.rand(5): tensor([0.8823, 0.9150, 0.3829, 0.9593, 0.3904])
torch.empty(5).exponential_(1): tensor([0.5809, 0.0436, 2.0548, 1.9680, 0.3315])
```
In stable 2.6.0:
```
torch.rand(5): tensor([0.8823, 0.9150, 0.3829, 0.9593, 0.3904])
torch.empty(5).exponential_(1): tensor([1.6459, 0.4294, 0.0677, 1.3809, 0.7803])
```
### Mitigation
For now, we are replacing it with `rand_like` + `log`, since it is equivalent and has the same RNG behavior in both stable and nightlies:
```python
torch.manual_seed(42)
probs = torch.ones(5)
u = torch.rand_like(probs)
q = -torch.log(1 - u) # Transform to exponential distribution
```
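A hedged sanity check (not from the original report) that the inverse-CDF transform matches Exponential(1) statistically; per-element values will still differ from `exponential_` because the RNG stream is consumed differently:
```python
import torch

torch.manual_seed(0)
u = torch.rand(100_000)
q = -torch.log1p(-u)      # inverse-CDF sample of Exponential(1); log1p avoids log(0) issues
print(q.mean(), q.var())  # both should be close to 1.0
```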
### Versions
stable 2.6.0 and 2.8.0.dev20250401+cu126
cc @pbelevich
| true
|
2,966,672,698
|
[pytorch] add experimental TORCH_LIBRARY_THREAD_UNSAFE_LAZY_INIT
|
rmaz
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: mobile"
] | 7
|
CONTRIBUTOR
|
Summary: Add an experimental feature to defer PyTorch library initialization cost until after startup. As noted, this feature is not thread safe; it requires the client to maintain thread safety at library load time.
Reviewed By: zou3519
Differential Revision: D71917841
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,966,401,236
|
API change for new enum in cusparseltsplitkmode-t for cusparseLT 0.7.0+
|
tinglvv
|
open
|
[
"module: bc-breaking",
"triaged",
"open source",
"release notes: sparse",
"topic: bc breaking"
] | 14
|
COLLABORATOR
|
Changing the bool to an int to express split_k_mode. Before 0.7.0 there were only two cusparseLtSplitKMode_t enum values, ONE_KERNEL and TWO_KERNELS, so a boolean was enough, but since 0.7.0 there are more.
For Blackwell, a minor change to the split_k_one_kernel parameter (https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/sparse/cuda/cuSPARSELtOps.cpp#L103) is required, since new values were introduced to the [cusparseLtSplitKMode_t](https://docs.nvidia.com/cuda/cusparselt/types.html#cusparseltsplitkmode-t) enum and a bool is no longer sufficient to represent it (it has to be replaced with an integer).
Error we see without the change
```
RuntimeError: CUDA error: invalid value when calling `cusparseLtMatmulAlgSetAttribute( &handle, &alg_sel, CUSPARSELT_MATMUL_SPLIT_K_MODE, &splitKMode, sizeof(splitKMode))`
To execute this test, run the following from the base repo dir:
python test/test_sparse_semi_structured.py TestSparseSemiStructuredCUSPARSELTCUDA.test_csrc_cslt_sparse_mm_search_cuda_int8
```
cc @ezyang @gchanan @eqy @ptrblck @malfet @atalman @nWEIdia
| true
|
2,966,323,791
|
[AOTI][dashboard] Update how peak memory is measured
|
desertfire
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150534
Summary: In the dashboard measurement script, AOTI needs to run eager first to register the output pytree, so the peak memory compression ratio on the dashboard is always close to 1. Update the AOTI run to use an extra warmup run, so that the peak memory compression ratio measures the result at run time instead of at compile time.
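A hedged sketch (not the actual dashboard script; the function name is made up for illustration) of how an extra warmup run separates run-time peak memory from compile-time allocations:
```python
import torch

def measure_runtime_peak(compiled_fn, *args):
    compiled_fn(*args)                    # warmup: compilation + output pytree registration
    torch.cuda.synchronize()
    torch.cuda.reset_peak_memory_stats()
    compiled_fn(*args)                    # measured run
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated()
```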
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|