| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
3,029,037,729
|
submodules: point gloo to new home in pytorch/
|
d4l3k
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
MEMBER
|
Gloo moved to the PyTorch GitHub org. This updates PyTorch to point to the new location.
https://github.com/pytorch/gloo
Test plan:
CI
| true
|
3,029,036,649
|
`nn.CrossEntropyLoss` accepts negative target probabilities
|
meilame-tayebjee
|
open
|
[
"module: performance",
"module: nn",
"module: error checking",
"triaged"
] | 1
|
NONE
|
### 📚 The doc issue
The `CrossEntropyLoss` [documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) mentions that:
> **Target**:
> - If containing **class indices**, the shape should be:
> - `()` (scalar),
> - `(N)`, or
> - `(N, d1, d2, ..., dK)` with `K ≥ 1` (for K-dimensional loss),
> where each value should be in the range `[0, C)`.
> The target data type is required to be `long` (i.e., `torch.int64`) when using class indices.
>
> - If containing **class probabilities**, the target must have the **same shape as the input**, and each value should be in the range `[0, 1]`.
> This means the target data type is required to be `float` (e.g., `torch.float32`) when using class probabilities.
In the second case (when passing target as _probabilities_), nothing seems to enforce that each value should be in the range `[0, 1]`. The user can even pass "negative probabilities", for instance:
```
import torch
import torch.nn as nn

num_classes = 5
batch_size = 10

loss = nn.CrossEntropyLoss(reduction='none')
input = 2 * torch.randn(batch_size, num_classes, requires_grad=True)
target = (-2) * torch.ones(batch_size, num_classes)  # "negative probabilities"
output = loss(input, target)
output  # no error or warning is raised
```
Using version 2.6.0, this code runs without throwing any error or warning.
### Suggest a potential alternative/fix
It would be beneficial to either document this non-enforcement (so that users know they have to validate the targets themselves) or to actually enforce it.
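A minimal sketch of the kind of user-side check this currently requires, assuming probability-style targets (the wrapper name is illustrative, not a proposed API):
```python
import torch
import torch.nn as nn

def checked_cross_entropy(logits, target_probs):
    # User-side validation that, per this issue, the loss itself does not perform:
    # probability-style targets are documented to lie in [0, 1].
    if ((target_probs < 0) | (target_probs > 1)).any():
        raise ValueError("target probabilities must be in the range [0, 1]")
    return nn.functional.cross_entropy(logits, target_probs)

logits = torch.randn(10, 5)
bad_target = (-2) * torch.ones(10, 5)
try:
    checked_cross_entropy(logits, bad_target)
except ValueError as e:
    print(e)  # raised here, instead of silently computing a loss
```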
cc @msaroufim @jerryzh168 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| true
|
3,028,999,374
|
[MPSInductor] Make sure sizevars are computed
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152436
* #152430
Make sure sizevars are computed before calling the kernel.
This fixes `GPUTests.test_float_repr_dynamic_shapes_mps`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,028,966,850
|
FX graph cache hit generates guards that do not exist in the original cached program, causing recompilations only on cache hit.
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: inductor",
"module: compile-time",
"compile-cache",
"recompilations"
] | 3
|
CONTRIBUTOR
|
Repro:
run the following program **twice** without a fresh inductor cache
```
import math

import torch


@torch.compile(dynamic=True)
def func(x):
    y = math.ceil((x.numel() // 5) / (math.ceil(math.sqrt(x.numel())))) > 64
    if y:
        return x * 5, y
    else:
        return x * 10, y


# with fresh_inductor_cache():
func(torch.rand(1000000))
func(torch.rand(2000000))
func(torch.rand(3000000))
func(torch.rand(5000000))
func(torch.rand(6000000))
func(torch.rand(7000000))
```
TORCH_LOGS="recompiles" python example8.py
```
V0429 09:47:29.616000 2645098 torch/_dynamo/guards.py:3230] [0/1] [__recompiles] Recompiling function func in /home/lsakka/pytorch/example8.py:428
V0429 09:47:29.616000 2645098 torch/_dynamo/guards.py:3230] [0/1] [__recompiles] triggered by the following guard failure(s):
V0429 09:47:29.616000 2645098 torch/_dynamo/guards.py:3230] [0/1] [__recompiles] - 0/0: torch.sym_float(x.size()[0]) == 1000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [2000000]
done
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [s77]
we are here
done
V0429 09:47:29.721000 2645098 torch/_dynamo/guards.py:3230] [0/2] [__recompiles] Recompiling function func in /home/lsakka/pytorch/example8.py:428
V0429 09:47:29.721000 2645098 torch/_dynamo/guards.py:3230] [0/2] [__recompiles] triggered by the following guard failure(s):
V0429 09:47:29.721000 2645098 torch/_dynamo/guards.py:3230] [0/2] [__recompiles] - 0/1: torch.sym_float(x.size()[0]) == 2000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:29.721000 2645098 torch/_dynamo/guards.py:3230] [0/2] [__recompiles] - 0/0: torch.sym_float(x.size()[0]) == 1000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [3000000]
done
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [s77]
we are here
done
V0429 09:47:29.847000 2645098 torch/_dynamo/guards.py:3230] [0/3] [__recompiles] Recompiling function func in /home/lsakka/pytorch/example8.py:428
V0429 09:47:29.847000 2645098 torch/_dynamo/guards.py:3230] [0/3] [__recompiles] triggered by the following guard failure(s):
V0429 09:47:29.847000 2645098 torch/_dynamo/guards.py:3230] [0/3] [__recompiles] - 0/2: torch.sym_float(x.size()[0]) == 3000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:29.847000 2645098 torch/_dynamo/guards.py:3230] [0/3] [__recompiles] - 0/1: torch.sym_float(x.size()[0]) == 2000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:29.847000 2645098 torch/_dynamo/guards.py:3230] [0/3] [__recompiles] - 0/0: torch.sym_float(x.size()[0]) == 1000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [5000000]
done
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [s77]
we are here
done
V0429 09:47:29.987000 2645098 torch/_dynamo/guards.py:3230] [0/4] [__recompiles] Recompiling function func in /home/lsakka/pytorch/example8.py:428
V0429 09:47:29.987000 2645098 torch/_dynamo/guards.py:3230] [0/4] [__recompiles] triggered by the following guard failure(s):
V0429 09:47:29.987000 2645098 torch/_dynamo/guards.py:3230] [0/4] [__recompiles] - 0/3: torch.sym_float(x.size()[0]) == 5000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:29.987000 2645098 torch/_dynamo/guards.py:3230] [0/4] [__recompiles] - 0/2: torch.sym_float(x.size()[0]) == 3000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:29.987000 2645098 torch/_dynamo/guards.py:3230] [0/4] [__recompiles] - 0/1: torch.sym_float(x.size()[0]) == 2000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:29.987000 2645098 torch/_dynamo/guards.py:3230] [0/4] [__recompiles] - 0/0: torch.sym_float(x.size()[0]) == 1000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [6000000]
done
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [s77]
we are here
done
V0429 09:47:30.139000 2645098 torch/_dynamo/guards.py:3230] [0/5] [__recompiles] Recompiling function func in /home/lsakka/pytorch/example8.py:428
V0429 09:47:30.139000 2645098 torch/_dynamo/guards.py:3230] [0/5] [__recompiles] triggered by the following guard failure(s):
V0429 09:47:30.139000 2645098 torch/_dynamo/guards.py:3230] [0/5] [__recompiles] - 0/4: torch.sym_float(x.size()[0]) == 6000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:30.139000 2645098 torch/_dynamo/guards.py:3230] [0/5] [__recompiles] - 0/3: torch.sym_float(x.size()[0]) == 5000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:30.139000 2645098 torch/_dynamo/guards.py:3230] [0/5] [__recompiles] - 0/2: torch.sym_float(x.size()[0]) == 3000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:30.139000 2645098 torch/_dynamo/guards.py:3230] [0/5] [__recompiles] - 0/1: torch.sym_float(x.size()[0]) == 2000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
V0429 09:47:30.139000 2645098 torch/_dynamo/guards.py:3230] [0/5] [__recompiles] - 0/0: torch.sym_float(x.size()[0]) == 1000000.0 # (_functorch/_aot_autograd/autograd_cache.py:936 in evaluate_guards)
laith code is math.ceil((L['t0'] // 5) / (math.ceil(math.sqrt(torch.sym_float(L['t0']))))) > 64 and 2 <= L['t0']
args are [7000000]
```
Tlparse with caching enabled
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/lsakka/09262c83-e454-4f26-8c5b-09043cdd1451/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
Tlparse with caching disabled
Running without caching ends up in a single graph with no recompilations.
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/lsakka/8f57755c-9f46-4070-83dd-6ce311307b7f/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
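If it helps to reproduce the no-cache behavior explicitly, a minimal sketch (assuming the standard Inductor config knob `fx_graph_cache`; not something stated in this issue) is:
```python
# Sketch: force the FX graph cache off before running the repro above, which
# should match the "caching disabled" single-graph behavior described here.
import torch._inductor.config as inductor_config

inductor_config.fx_graph_cache = False
```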
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @oulgen @jamesjwu @aorenste @anijain2305 @masnesral
| true
|
3,028,923,628
|
[pt2] [AOTAutogradCache] Allow users to specify non torch functions as cacheable
|
jamesjwu
|
open
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Discussion on https://github.com/pytorch/pytorch/pull/152369 shows that users want the ability to add their own cacheable functions to the list of safe torch functions. The functions here need to already be allowed by the dynamo graph. In order for that to be fully safe, we should provide users with a way to register cacheable functions along with a way to generate a cache key out of the function.
How specific the cache key needs to be for a given function is the tricky part. I think an initial version may just need the ability to register a static string cache key per function name, which catches cases where implementations of pure functions change across versions of the code.
There are more advanced cache key generations that we could support, like having a callable that generates a different cache key per graph, but that is more complicated to support. I'm inclined to just punt that for now and say that we do not support different cache keys per graph per custom user function.
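A minimal sketch of what the static-string variant could look like (all names here are hypothetical; none of this exists in PyTorch today):
```python
# Hypothetical registration sketch: map a user function to a static cache-key
# string that would get folded into the AOTAutogradCache key.
_CACHEABLE_USER_FUNCTIONS = {}

def register_cacheable_function(fn, cache_key):
    """Mark `fn` as safe to cache, keyed by a user-chosen version string."""
    _CACHEABLE_USER_FUNCTIONS[fn] = cache_key

def cache_key_for(fn):
    """Return the registered key, or None if `fn` was never registered."""
    return _CACHEABLE_USER_FUNCTIONS.get(fn)

# Usage: bump the string whenever the pure function's implementation changes.
def my_pure_op(x):
    return x * 2

register_cacheable_function(my_pure_op, "my_pure_op_v1")
```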
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @bdhirsh @pytorch/pt2-dispatcher
| true
|
3,028,881,394
|
[conda] Remove conda from lint-autoformat.yml
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/autoformat"
] | 3
|
CONTRIBUTOR
|
Installs setuptools since I get
https://github.com/pytorch/pytorch/actions/runs/14736804186/job/41364832984#step:5:60
```
+ python3 -m tools.generate_torch_version --is_debug=false
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/ec2-user/actions-runner/_work/pytorch/pytorch/tools/generate_torch_version.py", line 9, in <module>
from setuptools import distutils # type: ignore[import]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'setuptools'
```
It should be a no-op in the normal lint workflow since setuptools is in the Docker image.
Switched from using Python 3.10 to the system Python, which should be Python 3.9.
Use a venv to keep the deps out of the base environment?
| true
|
3,028,825,414
|
[ROCm] cpp_extension allow user to override default flags
|
jithunnair-amd
|
open
|
[
"module: rocm",
"open source",
"release notes: rocm",
"ciflow/rocm"
] | 5
|
COLLABORATOR
|
We need `-fgpu-rdc` for projects such as DeepEP + rocSHMEM. The default of `-fno-gpu-rdc` doesn't work for such cases.
As per https://github.com/pytorch/pytorch/pull/152432#issuecomment-2840899088:
"rocshmem shares the same global variable in different files, as deepEP uses CUDAExtention to build the project https://github.com/deepseek-ai/DeepEP/blob/65e2a700f0330f3fb1c26f49a0250d1f9d0ac1e3/setup.py#L51 and depends on rocshmem, this -fgpu-rdc is needed. The current logic in Pytorch prevents users from overriding this flag."
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,028,822,721
|
[conda] Remove conda usage from upload test stats while running workflow
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
The original uses Python 3.10 and the base is Python 3.9, but I think that's OK.
| true
|
3,028,799,573
|
[MPSInductor] Fix type promotion in `_print_Max`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152436
* __->__ #152430
Ran into this problem while re-enabling `test_float_repr_dynamic_shapes`, where `_print_Max` was called with an integer and a long argument, which resulted in the following compilation error:
```
error: call to 'max' is ambiguous
out_ptr0[x0 + x1*metal::max(1, ks0)] = static_cast<float>(tmp26);
^~~~~~~~~~
/System/Library/PrivateFrameworks/GPUCompiler.framework/Versions/32023/Libraries/lib/clang/32023.619/include/metal/metal_integer:2477:16: note: candidate function
METAL_FUNC int max(int x, int y)
^
/System/Library/PrivateFrameworks/GPUCompiler.framework/Versions/32023/Libraries/lib/clang/32023.619/include/metal/metal_integer:3686:17: note: candidate function
METAL_FUNC long max(long x, long y)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,028,661,134
|
Fix shadow local variables
|
dsjohns2
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary: Fixing shadowed local variable errors: P1798875650
Test Plan: CI
Differential Revision: D73853605
| true
|
3,028,625,632
|
Remove unused Manylinux2014 Docker files and builds
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Related to Manylinux 2.28 migration: https://github.com/pytorch/pytorch/issues/123649
Clean up old Docker files and the `manylinuxaarch64-builder:cpu-aarch64` image, which has been replaced by `manylinux2_28_aarch64-builder:cpu-aarch64`.
| true
|
3,028,574,555
|
Add switch to disable truncation to long list print
|
sanshang-nv
|
open
|
[
"oncall: distributed"
] | 3
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
All numbers are needed for uneven all2all bandwidth calculation. We need a way to disable this truncation.

### Alternatives
_No response_
### Additional context
For bandwidth calculation and other post-processing.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,028,473,132
|
[Manylinux 2.28] Migrate Docker container to use gcc 13, CUDA 12.6 and gcc14 CUDA 12.8
|
atalman
|
open
|
[
"module: binaries",
"triaged"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Latest manylinux_2_28 (AlmaLinux 8 based) containers are using GCC 14 : https://github.com/pypa/manylinux#manylinux_2_28-almalinux-8-based
Our Docker build is still using GCC 11: https://github.com/pytorch/pytorch/blob/main/.ci/docker/manywheel/Dockerfile_2_28#L10
Let's migrate the GCC version to GCC 13 for CUDA 12.6 and GCC 14 for CUDA 12.8; see the comments.
### Versions
2.8.0
cc @seemethere @malfet @osalpekar
| true
|
3,028,437,855
|
Silent incorrectness between static torch.compile vs eager
|
bobrenjc93
|
open
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"module: functionalization",
"oncall: pt2",
"module: inductor",
"module: pt2-dispatcher",
"ubn"
] | 5
|
CONTRIBUTOR
|
Similar to #151799, but this time static compile produces the wrong output:
```
import torch


def expand(x, n):
    a = x.expand((n,))
    a[-1] = 3
    return a


def f(n: int, device: str):
    numbers = torch.arange(2, device=device)
    for i in range(len(numbers)):
        expanded = expand(numbers[i], n)
        print(expanded)


f_dynamic = torch.compile(f, dynamic=True)
f_static = torch.compile(f, dynamic=False)

device = "cuda"
print("eager")
f(2, device)
print("dynamic torch.compile")
f_dynamic(2, device)
print("static torch.compile")
f_static(2, device)
```
results in
```
(/home/bobren/local/a/pytorch-env) [6:48] devgpu009:/home/bobren/local/a/pytorch python expand.py
eager
tensor([3, 3], device='cuda:0')
tensor([3, 3], device='cuda:0')
dynamic torch.compile
tensor([3, 3], device='cuda:0')
tensor([3, 3], device='cuda:0')
static torch.compile
tensor([3, 3], device='cuda:0')
tensor([1, 1], device='cuda:0')
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @bdhirsh @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @chauhang @aakhundov @eellison
| true
|
3,028,167,873
|
Relax tolerance for test_quick_baddbmm_cpu_complex64
|
Flamefire
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
On Zen 2 (AMD EPYC) and Intel Sapphire Rapids this fails with small differences when compiled with native targeted optimizations. I.e. it fails with `-march=znver2` but succeeds with `-march=znver1`.
I assume some operator fusing is being used by GCC. Small differences like using `vmovdqa` can be seen in the minimized code of the baddbmm kernel: https://godbolt.org/z/jsxMa91Wb
The greatest differences are consistent and the same on both CPU architectures:
```
Greatest absolute difference: 3.43852152582258e-05 at index (1, 2, 1) (up to 1e-05 allowed)
Greatest relative difference: 3.6034286949870875e-06 at index (1, 2, 1) (up to 1.3e-06 allowed)
```
Hence I assume this is within expected tolerances, especially as `complex128` and all other types pass.
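To make the reported numbers concrete, a small sketch of the comparison (illustrative tolerances only, not the exact values chosen in this PR):
```python
import torch

# Reported worst case: abs diff ~3.44e-05 (1e-05 allowed), rel diff ~3.60e-06 (1.3e-06 allowed).
expected = torch.tensor([1.0], dtype=torch.complex64)
actual = expected + 3.44e-05

# Fails with the test's current complex64 tolerances:
# torch.testing.assert_close(actual, expected, rtol=1.3e-06, atol=1e-05)
# Passes once the tolerances are relaxed past the observed differences:
torch.testing.assert_close(actual, expected, rtol=4e-06, atol=4e-05)
```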
| true
|
3,028,145,626
|
Invalid handling of nans in compiled torch.quantile / torch.nanquantile on cuda
|
RoepStoep
|
open
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: inductor",
"ubn"
] | 1
|
NONE
|
### 🐛 Describe the bug
It seems both torch.quantile and torch.nanquantile don't handle NaNs correctly when compiled on CUDA. Results are consistent without NaNs or on CPU. I've tested this on PyTorch 2.6 and 2.7, both standard pip installs with CUDA 12.6.
Minimal code to reproduce:
```
import torch


def eager_quantile(t):
    return torch.quantile(t, .5)


def eager_nanquantile(t):
    return torch.nanquantile(t, .5)


@torch.compile
def compiled_quantile(t):
    return torch.quantile(t, .5)


@torch.compile
def compiled_nanquantile(t):
    return torch.nanquantile(t, .5)


print('Tensor without nans:')
for device in ['cpu', 'cuda']:
    tensor = torch.tensor(
        [0.7308, 0.7053, 0.3349, -0.7158, 0.1985, 0.1234, 1.0284, -0.6513, -1.8767, -0.4369],
        device=device
    )
    print('{} quantile eager / compiled:\n{:.4f} / {:.4f}'.format(
        device, eager_quantile(tensor), compiled_quantile(tensor)
    ))
    print('{} nanquantile eager / compiled:\n{:.4f} / {:.4f}'.format(
        device, eager_nanquantile(tensor), compiled_nanquantile(tensor)
    ))

print('\nTensor with nans:')
for device in ['cpu', 'cuda']:
    tensor = torch.tensor(
        [0.7308, 0.7053, 0.3349, -0.7158, torch.nan, 0.1234, 1.0284, torch.nan, -1.8767, -0.4369],
        device=device
    )
    print('{} quantile eager / compiled:\n{:.4f} / {:.4f}'.format(
        device, eager_quantile(tensor), compiled_quantile(tensor)
    ))
    print('{} nanquantile eager / compiled:\n{:.4f} / {:.4f}'.format(
        device, eager_nanquantile(tensor), compiled_nanquantile(tensor)
    ))
```
I get the following output. The bottom two lines show that the tensor with NaNs gives invalid output for compiled quantile and nanquantile on CUDA; eager, and compiled on CPU, are fine:
```
Tensor without nans:
cpu quantile eager / compiled:
0.1610 / 0.1610
cpu nanquantile eager / compiled:
0.1610 / 0.1610
cuda quantile eager / compiled:
0.1610 / 0.1610
cuda nanquantile eager / compiled:
0.1610 / 0.1610
Tensor with nans:
cpu quantile eager / compiled:
nan / nan
cpu nanquantile eager / compiled:
0.2291 / 0.2291
cuda quantile eager / compiled:
nan / 1.0284
cuda nanquantile eager / compiled:
0.2291 / 0.4271
```
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Ti
Nvidia driver version: 570.86.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU max MHz: 4100,0000
CPU min MHz: 800,0000
BogoMIPS: 5799.77
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1,5 MiB (6 instances)
L3 cache: 9 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-5
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime-training==1.19.2
[pip3] onnxscript==0.2.4
[pip3] torch==2.7.0+cu126
[pip3] torchaudio==2.7.0+cu126
[pip3] torchrl==0.7.2
[pip3] torchvision==0.22.0+cu126
[pip3] triton==3.3.0
[conda] numpy 2.2.5 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.7.0+cu126 pypi_0 pypi
[conda] torchaudio 2.7.0+cu126 pypi_0 pypi
[conda] torchrl 0.7.2 pypi_0 pypi
[conda] torchvision 0.22.0+cu126 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,028,074,742
|
The test 'test_host_memory_stats' is failing in torch2.7.0+cu118
|
1274085042
|
closed
|
[] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When running test_cuda.py locally, I found an error in test_host_memory_stats:
```
PYTORCH_TESTING_DEVICE_ONLY_FOR="cuda" python test_cuda.py -v
```
The error is similar to https://github.com/pytorch/pytorch/issues/148607
**output**

@ptrblck @msaroufim @eqy @clee2000
### Versions
root@notebook-pytorch-cuda-2-7-0-cuda11-8-cudnn9-1gyp3cn-launcher-0:/workspace# python collect_env.py
Collecting environment information...
PyTorch version: 2.7.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.35
Python version: 3.11.12 | packaged by conda-forge | (main, Apr 10 2025, 22:23:25) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
CPU max MHz: 3100.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] optree==0.15.0
[pip3] torch==2.7.0+cu118
[pip3] torchaudio==2.7.0+cu118
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0+cu118
[pip3] triton==3.3.0
[conda] numpy 2.2.5 py311h5d046bc_0 conda-forge
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] optree 0.15.0 pypi_0 pypi
[conda] torch 2.7.0+cu118 pypi_0 pypi
[conda] torchaudio 2.7.0+cu118 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.22.0+cu118 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
| true
|
3,028,060,180
|
torch.nn.functional.ctc_loss raises cuDNN error in PyTorch versions >=2.5.0
|
zhu-han
|
closed
|
[
"module: cudnn",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
In torch.nn.functional.ctc_loss, when log_probs is on the GPU, if we set the devices of input_lengths, target_lengths, and targets to CPU and their dtypes to torch.int32, it raises the following error:
```
File "[...]/lib/python3.10/site-packages/torch/nn/functional.py", line 3069, in ctc_loss
return torch.ctc_loss(
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
```
The script to reproduce is as follows:
```python
import torch
# batch size
N = 50
# audio length
T = 100
# text dimension
C = 80
# max text length
S = 10
prob_device = torch.device("cuda")
other_device = torch.device("cpu")
other_dtype = torch.int32
log_probs = torch.randn(T, N, C).log_softmax(2).to(prob_device)
input_lengths = torch.full((N,), T, dtype=other_dtype).to(other_device)
target_lengths = torch.randint(low=1, high=S, size=(N,), dtype=other_dtype).to(other_device)
targets = torch.randint(low=0, high=C, size=(sum(target_lengths),), dtype=other_dtype).to(other_device)
ctc_loss = torch.nn.functional.ctc_loss(
log_probs=log_probs,
targets=targets,
input_lengths=input_lengths,
target_lengths=target_lengths,
reduction="sum",
)
print(f"CTC Loss: {ctc_loss.item()}")
```
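As a diagnostic only (my assumption, not a confirmed root cause), the same call can be re-run with cuDNN disabled to check whether just the cuDNN CTC path is affected:
```python
# Continuing from the repro above: re-run the same call with cuDNN disabled to
# see whether the failure is specific to the cuDNN CTC path.
with torch.backends.cudnn.flags(enabled=False):
    ctc_loss = torch.nn.functional.ctc_loss(
        log_probs=log_probs,
        targets=targets,
        input_lengths=input_lengths,
        target_lengths=target_lengths,
        reduction="sum",
    )
    print(f"CTC Loss (cuDNN disabled): {ctc_loss.item()}")
```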
### Versions
I tried multiple versions of PyTorch. This is not an issue in torch 2.4.1 + CUDA 12.1, but it exists in torch 2.5.0 + CUDA 12.1, torch 2.5.1 + CUDA 12.1, and torch 2.6.0 + CUDA 12.4.
The NVIDIA Driver Version: 560.35.03
cc @csarofeen @ptrblck @xwang233 @eqy
| true
|
3,028,008,222
|
Add MTIA memory info into the sharder
|
emasap
|
closed
|
[
"fb-exported"
] | 4
|
CONTRIBUTOR
|
Test Plan: buck2 run //scripts/emmanuelmenage/apf_rec_investigations:planner_investigations now correctly reports OOM if the tables are too big.
Differential Revision: D72565617
| true
|
3,027,965,096
|
[ONNX] scatter_reduce with max reduction not correctly converted to ONNX for 2d input
|
spietz
|
closed
|
[
"module: onnx",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
I discovered, while trying to convert a pytorch-geometric-based model, that torch.scatter_reduce is not correctly converted to ONNX when the input is two-dimensional.
A minimal example showing this is
```python
import torch
import onnxruntime
# 2d scatter_reduce example
# input: 2x2 tensor
input = torch.tensor([[0,0], [0,0]])
src = torch.tensor([[1,2],[3,4]])
index = torch.tensor([[0,1],[0,1]])
# expected output: 2x2 tensor
output = torch.scatter_reduce(input, 0, index, src, reduce="amax")
print("expected output:\n", output)
# wrap the scatter_reduce in a module
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input, index, src):
        return torch.scatter_reduce(input, 0, index, src, reduce="amax")
# export the model to onnx
print("exporting the model:")
input_names = ["input","index", "src"]
output_names = ["output"]
inputs = tuple([input, index, src])
torch.onnx.export(MyModule(),  # model being run
                  inputs,
                  "scatter_reduce.onnx",
                  export_params=True,
                  opset_version=17,
                  do_constant_folding=True,
                  input_names=input_names,
                  output_names=output_names,
                  verbose=True,  # turn on to get debugging info
                  )
# get onnx prediction
print("loading the model:")
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL
ort_session = onnxruntime.InferenceSession("scatter_reduce.onnx", sess_options=sess_options)
# pair names with inputs
input_dic = {"input": input.numpy(),
"index": index.numpy(),
"src": src.numpy()}
# run ort
result = ort_session.run(None, input_dic)[0]
print(result)
```
This results in the following output with errors:
```
expected output:
tensor([[3, 0],
[0, 4]])
exporting the model:
Exported graph: graph(%input : Long(2, 2, strides=[2, 1], requires_grad=0, device=cpu),
%index : Long(2, 2, strides=[2, 1], requires_grad=0, device=cpu),
%src : Long(2, 2, strides=[2, 1], requires_grad=0, device=cpu)):
%/Shape_output_0 : Long(2, strides=[1], device=cpu) = onnx::Shape[onnx_name="/Shape"](%input), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/Size_output_0 : Long(device=cpu) = onnx::Size[onnx_name="/Size"](%/Shape_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/Constant_output_0 : Long(requires_grad=0, device=cpu) = onnx::Constant[value={0}, onnx_name="/Constant"](), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/Equal_output_0 : Bool(device=cpu) = onnx::Equal[onnx_name="/Equal"](%/Size_output_0, %/Constant_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/If_output_0 : Long(4, strides=[1], device=cpu), %/If_output_1 : Long(4, strides=[1], device=cpu), %/If_output_2 : Long(4, strides=[1], device=cpu) = onnx::If[onnx_name="/If"](%/Equal_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
block0():
%/Constant_1_output_0 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={-1}, onnx_name="/Constant_1"](), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/Reshape_output_0 : Long(4, strides=[1], device=cpu) = onnx::Reshape[onnx_name="/Reshape"](%input, %/Constant_1_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/Reshape_1_output_0 : Long(4, strides=[1], device=cpu) = onnx::Reshape[onnx_name="/Reshape_1"](%index, %/Constant_1_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/Reshape_2_output_0 : Long(4, strides=[1], device=cpu) = onnx::Reshape[onnx_name="/Reshape_2"](%src, %/Constant_1_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
-> (%/Reshape_output_0, %/Reshape_1_output_0, %/Reshape_2_output_0)
block1():
%/Identity_output_0 : Long(2, 2, strides=[2, 1], device=cpu) = onnx::Identity[onnx_name="/Identity"](%input), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/Identity_1_output_0 : Long(2, 2, strides=[2, 1], device=cpu) = onnx::Identity[onnx_name="/Identity_1"](%index), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%/Identity_2_output_0 : Long(2, 2, strides=[2, 1], device=cpu) = onnx::Identity[onnx_name="/Identity_2"](%src), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
-> (%/Identity_output_0, %/Identity_1_output_0, %/Identity_2_output_0)
%/ScatterElements_output_0 : Long(4, strides=[1], device=cpu) = onnx::ScatterElements[axis=0, reduction="max", onnx_name="/ScatterElements"](%/If_output_0, %/If_output_1, %/If_output_2), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
%output : Long(4, strides=[1], device=cpu) = onnx::If[onnx_name="/If_1"](%/Equal_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
block0():
%/Squeeze_output_0 : Long(4, strides=[1], device=cpu) = onnx::Squeeze[onnx_name="/Squeeze"](%/ScatterElements_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
-> (%/Squeeze_output_0)
block1():
%/Identity_3_output_0 : Long(4, strides=[1], device=cpu) = onnx::Identity[onnx_name="/Identity_3"](%/ScatterElements_output_0), scope: __main__.MyModule:: # /workspace/test_scatter_reduce.py:21:0
-> (%/Identity_3_output_0)
return (%output)
loading the model:
2025-04-29 11:55:18.280661043 [E:onnxruntime:, sequential_executor.cc:572 ExecuteKernel] Non-zero status code returned while running Identity node. Name:'/Identity_3' Status Message: /onnxruntime_src/onnxruntime/core/framework/execution_frame.cc:171 onnxruntime::common::Status onnxruntime::IExecutionFrame::GetOrCreateNodeOutputMLValue(int, int, const onnxruntime::TensorShape*, OrtValue*&, const onnxruntime::Node&) shape && tensor.Shape() == *shape was false. OrtValue shape verification failed. Current shape:{4} Requested shape:{2,2}
2025-04-29 11:55:18.280698645 [E:onnxruntime:, sequential_executor.cc:572 ExecuteKernel] Non-zero status code returned while running If node. Name:'/If_1' Status Message: Non-zero status code returned while running Identity node. Name:'/Identity_3' Status Message: /onnxruntime_src/onnxruntime/core/framework/execution_frame.cc:171 onnxruntime::common::Status onnxruntime::IExecutionFrame::GetOrCreateNodeOutputMLValue(int, int, const onnxruntime::TensorShape*, OrtValue*&, const onnxruntime::Node&) shape && tensor.Shape() == *shape was false. OrtValue shape verification failed. Current shape:{4} Requested shape:{2,2}
Traceback (most recent call last):
File "/workspace/test_scatter_reduce.py", line 51, in <module>
result = ort_session.run(None, input_dic)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 270, in run
return self._sess.run(output_names, input_feed, run_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running If node. Name:'/If_1' Status Message: Non-zero status code returned while running Identity node. Name:'/Identity_3' Status Message: /onnxruntime_src/onnxruntime/core/framework/execution_frame.cc:171 onnxruntime::common::Status onnxruntime::IExecutionFrame::GetOrCreateNodeOutputMLValue(int, int, const onnxruntime::TensorShape*, OrtValue*&, const onnxruntime::Node&) shape && tensor.Shape() == *shape was false. OrtValue shape verification failed. Current shape:{4} Requested shape:{2,2}
```
Has anyone experienced this or found a workaround? scatter_reduce is used extensively in pytorch-geometric.
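One thing that might be worth trying (untested here, just a suggestion) is the dynamo-based exporter available in recent releases, which lowers the model through a different path:
```python
# Untested suggestion: export the same module via the dynamo-based exporter,
# which may handle scatter_reduce differently than the TorchScript-based path.
torch.onnx.export(
    MyModule(),
    (input, index, src),
    "scatter_reduce_dynamo.onnx",
    dynamo=True,
)
```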
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+7c8ec84dab.nv25.03
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10900 CPU @ 2.80GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 5
BogoMIPS: 5616.02
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 2.5 MiB (10 instances)
L3 cache: 20 MiB (1 instance)
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Unknown: Dependent on hypervisor status
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.10.0
[pip3] nvtx==0.2.11
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.21.1
[pip3] optree==0.14.1
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.2.0+gitb2684bf3b.nvinternal
[pip3] torch==2.7.0a0+7c8ec84dab.nv25.3
[pip3] torch-geometric==2.6.1
[pip3] torch_tensorrt==2.7.0a0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0a0
[conda] Could not collect
| true
|
3,027,744,829
|
[Inductor][CPP] Enable vectorized fp8 quant dequant
|
leslie-fang-intel
|
open
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152418
* #152417
**Summary**
This PR enables vectorized codegen in the Inductor CPP backend for `FP8_E4M3` `quant` from `float32` and `dequant` to `float32`.
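A minimal illustration of the float32 ↔ FP8 round-trip this refers to (a simplified stand-in for the pattern exercised by the test plan below, not the exact lowering):
```python
import torch

def fp8_round_trip(x):
    # float32 -> FP8 E4M3 (quant) and back to float32 (dequant); this PR adds
    # vectorized CPP codegen for this conversion pair.
    q = x.to(torch.float8_e4m3fn)
    return q.to(torch.float32)

compiled = torch.compile(fp8_round_trip)
print(compiled(torch.randn(16)))
```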
**Test Plan**
```
python test/inductor/test_cpu_repro.py -k test_dequant_quant_lowering_fp8_e4m3
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,027,744,691
|
Add Vectorized FP8 E4M3
|
leslie-fang-intel
|
open
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152418
* __->__ #152417
**Summary**
This PR mainly adds the `Vectorized<Float8_e4m3fn>` class to support vectorization of `FP8 E4M3`, with methods to:
- Convert to/from `Vectorized<float>`
- Provide common vectorized operations such as `mul`, `abs`, `eq`, etc.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,027,737,898
|
DISABLED test_comprehensive_index_select_cuda_int32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 14
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_index_select_cuda_int32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41325481121).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_index_select_cuda_int32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 862, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 846, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1460, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1347, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2209, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmplroq8pl8/yn/cynbkbjy6g37b72sqe3kdp6gmjvfx6673ty3ubytjyhpoohblpej.py", line 124, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpa4txb5vb/triton/IELU6ZIDXWXCR4HDNI74BGDGD4RRGBYFRFN4HI4QRWFXD6MT4W4A/triton_poi_fused_index_select_1.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.int32], args=(0,Tensor[size=(1,), device="cuda:0", dtype=torch.int64]), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_index_select_cuda_int32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,027,737,486
|
DISABLED test_input_moved_to_cuda_device_script (__main__.TensorPipeCudaRemoteModuleTest)
|
pytorch-bot[bot]
|
open
|
[
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_input_moved_to_cuda_device_script&suite=TensorPipeCudaRemoteModuleTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41321274671).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_input_moved_to_cuda_device_script`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 899, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 1 terminated or timed out after 300.03948187828064 seconds
```
</details>
Test file path: `distributed/rpc/cuda/test_tensorpipe_agent.py`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @clee2000
| true
|
3,027,590,597
|
[CUDA] Add new architectures
|
Aidyn-A
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
CUDA 12.9 will introduce a couple of new architectures, `sm_103` and `sm_121`. We do not need to build for them, because they are going to be compatible with `sm_100` and `sm_120` respectively (similar to `sm_86` and `sm_89`), but PyTorch must be "aware" of them.
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,027,436,630
|
[wip] use base tensor storage offset in gen_alias_from_base
|
bobrenjc93
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152413
Fixes #151799
We use the aliased base tensor's storage offset because the target_meta_tensor's storage
offset can be incorrect, since we oftentimes clone_preserve_strides to fix alignment. See
copy_misaligned_inputs in _inductor/utils.py.
| true
|
3,027,433,846
|
[Dynamo][Typing] Enable typing hints for `tx` in `misc.py`
|
shink
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: dynamo"
] | 4
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,027,423,356
|
[Quant][X86] add ops to compute uint8 pointwise add/add_relu
|
Xia-Weiwen
|
open
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"release notes: quantization",
"intel"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152811
* __->__ #152411
**Summary**
This PR adds two new ops, `onednn.qadd.tensor` and `onednn.qadd_relu.tensor`, for int8 elementwise add, which accept inputs on the CPU device (instead of QuantizedCPU).
The new ops are implemented with AVX512 instructions and provide similar or better performance, depending on shape, than their QuantizedCPU counterparts `quantized.add` and `quantized.add_relu`.
The new ops also support output dtypes other than uint8 (fp32, fp16 and bf16 are supported).
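For context, here is a minimal reference sketch (plain PyTorch, with per-tensor scale/zero-point assumed purely for illustration) of the dequantize → add → ReLU → requantize math that such a fused uint8 add computes; it is not the schema of the new `onednn.qadd.tensor` / `onednn.qadd_relu.tensor` ops:
```python
import torch

# Hypothetical per-tensor quantization parameters (illustration only).
scale_a, zp_a = 0.05, 10
scale_b, zp_b = 0.08, 3
scale_out, zp_out = 0.10, 0

a_u8 = torch.randint(0, 256, (16, 64), dtype=torch.uint8)
b_u8 = torch.randint(0, 256, (16, 64), dtype=torch.uint8)

# Dequantize, add, apply ReLU (the qadd_relu variant), then requantize to uint8.
a_fp = (a_u8.to(torch.float32) - zp_a) * scale_a
b_fp = (b_u8.to(torch.float32) - zp_b) * scale_b
out_fp = torch.relu(a_fp + b_fp)
out_u8 = torch.clamp(torch.round(out_fp / scale_out) + zp_out, 0, 255).to(torch.uint8)
```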
**Test plan**
```
pytest test/quantization/core/test_quantized_op.py -k test_int8_add_onednn
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,027,328,382
|
[Hierarchical Compile] Add mutation dependencies to topological sorting
|
mlazos
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152589
* #152572
* #152570
* #152506
* __->__ #152410
* #152505
* #152389
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,027,324,032
|
Cleanup DeviceInterface in triton test
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
- Remove inherited functions
- Return valid device_count (1 device: idx=0)
- Remove unused function `triton_supported`
Followup to #144399
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @GeorgeWigley
| true
|
3,027,250,710
|
[Inductor][CPU] bug fix for int8 GEMM compensation epilogue
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Fixes #152398
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,027,229,439
|
Call torch.distributed.destroy_process_group() at the end of the example
|
wangkuiyi
|
open
|
[
"oncall: distributed",
"open source",
"release notes: distributed (dtensor)"
] | 3
|
CONTRIBUTOR
|
Address the comment https://github.com/pytorch/pytorch/pull/152027#pullrequestreview-2800075775
## Test Plan
Running the following command
```shell
torchrun --nproc-per-node=4 torch/distributed/tensor/examples/visualize_sharding_example.py
```
should not print the following warning at the end of the execution:
```
Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
```
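A minimal sketch of the pattern this PR adopts (the hypothetical `main()` stands in for the example's actual body; 4 ranks as in the test plan):
```python
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh

def main():
    # init_device_mesh() initializes the default process group under torchrun.
    mesh = init_device_mesh("cuda", (4,))
    ...  # build and visualize sharded DTensors here

if __name__ == "__main__":
    main()
    # Explicit teardown avoids the destroy_process_group() warning at exit.
    dist.destroy_process_group()
```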
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,027,189,923
|
[DTensor] Calling .item() on DTensor with Partial placement results in local value
|
dest1n1s
|
open
|
[
"oncall: distributed",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
Currently when calling `.item()` method on a 0-dim `DTensor` with a `Partial` placement, it will directly give the local part of the distributed tensor as the result, without calling the reduction method:
```python
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

device_mesh = init_device_mesh("cuda", (2,))
t = torch.arange(8, dtype=torch.float32, device="cuda")
dt = distribute_tensor(t, device_mesh, [Shard(0)])
dt_sum = dt.sum().item() # dt_sum = 6 = sum(0, 1, 2, 3) on rank 0
```
This result is somewhat unintuitive and will lead to hard-to-find bugs, since it is not consistent with the behavior of a normal tensor. It would be better to redistribute to `Replicate` implicitly in `.item()`, or to throw an error when calling `.item()` on tensors with `Partial` placements.
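Until that behavior changes, a workaround is to materialize the reduced value before reading it, e.g. via `full_tensor()` or an explicit redistribute to `Replicate` (a sketch under the same setup as the snippet above):
```python
from torch.distributed.tensor import Replicate

# Reduces the Partial placement across ranks before reading the scalar.
dt_sum = dt.sum().full_tensor().item()  # -> 28 on every rank

# Equivalent, with an explicit redistribute:
dt_sum = dt.sum().redistribute(device_mesh, [Replicate()]).item()
```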
### Versions
```plaintext
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 6 2024, 19:59:28) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] Could not collect
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.0 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.5.39 0 nvidia
[conda] cuda-runtime 12.4.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.2.65 0 nvidia
[conda] libcufft 11.2.0.44 0 nvidia
[conda] libcurand 10.3.6.82 0 nvidia
[conda] libcusolver 11.6.0.99 0 nvidia
[conda] libcusparse 12.3.0.142 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.99 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.26.4 py311h08b1b3b_0
[conda] numpy-base 1.26.4 py311hf175353_0
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch 2.4.0 py3.11_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.0 py311_cu124 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtriton 3.0.0 py311 pytorch
[conda] torchvision 0.19.0 py311_cu124 pytorch
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,027,115,839
|
[Do not merge] poke CI with FX IR always on
|
blaine-rister
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Testing the CI with FX IR conversion always enabled, to find bugs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,027,005,924
|
use cutlass native BroadcastPtrArray in scaled group gemm
|
ngimel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 5
|
COLLABORATOR
|
After cutlass update to 3.9 we can use BroadcastPtrArray instead of a local copy with small changes.
| true
|
3,026,996,578
|
[ROCm][TunableOp] Fix ScaledGEMM rowwise
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 4
|
COLLABORATOR
|
Fixes TunableOp ScaledGEMM regression for rowwise scaling caused by this https://github.com/pytorch/pytorch/pull/147548
Credit goes to @mawong-amd for fix.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
3,026,986,425
|
[PowerPC] Fix vec256 for complex float and double in Power system
|
Tiwari-Avanish
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"topic: build",
"release notes: cpu (x86)"
] | 5
|
CONTRIBUTOR
|
The Power system build is failing with the error below.
It started failing after this commit:
https://github.com/pytorch/pytorch/commit/912102b4ecf776711436f95d2fe62d78e39ad880
This PR fixes the build error along with the test cases that are failing for the complex double and complex float data types.
Build Failure Logs:
```
vec_base.h:790:6: error: use of deleted function ‘at::vec::DEFAULT::ComplexDbl& at::vec::DEFAULT::Vectorized<c10::complex<double> >::operator[](int)’
790 | c[i] = a[i] * b[i];
| ~^
error: use of deleted function ‘at::vec::DEFAULT::ComplexDbl& at::vec::DEFAULT::Vectorized<c10::complex<double> >::operator[](int)’
802 | c[i] = a[i] / b[i];
| ~^
error: use of deleted function ‘at::vec::DEFAULT::ComplexFlt& at::vec::DEFAULT::Vectorized<c10::complex<float> >::operator[](int)’
790 | c[i] = a[i] * b[i];
| ~^
error: use of deleted function ‘at::vec::DEFAULT::ComplexFlt& at::vec::DEFAULT::Vectorized<c10::complex<float> >::operator[](int)’
802 | c[i] = a[i] / b[i];
| ~^
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,026,947,999
|
[NFC] [inductor] [compile async] Warn exception if pickler failed
|
ChuanqiXu9
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 8
|
CONTRIBUTOR
|
An NFC change to help us find issues.
See https://github.com/pytorch/pytorch/issues/151904
CC @aorenste
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,898,940
|
Compilation Issues with sm_129 (RTX 5070 Ti) on WSL - Seeking Advice
|
kaworukevin
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
**Environment:**
- OS: Windows 11 with WSL (Ubuntu 22.04)
- GPU: NVIDIA GeForce RTX 5070 Ti (sm_129)
- CUDA: 12.8
- PyTorch Version: 2.8.0a0+gitc8b4a39 (compiled from source)
**Description:**
Hi PyTorch team,
I’m trying to compile PyTorch from source on WSL (Ubuntu 22.04) to support an NVIDIA GeForce RTX 5070 Ti (sm_129) for a project called FramePack. Over the past three days, I’ve been troubleshooting with the help of Grok (an AI by xAI) and referencing approaches from an engineer named "woctordho." I’d like to share my process and seek advice on whether my approach is correct, as I’ve faced several compilation challenges.
### Steps Taken
1. **Initial Setup and Issues:**
- Installed PyTorch 2.8.0+cu126 (nightly) with CUDA 12.6, but encountered:
- Dependency conflict: `xformers 0.0.29.post1` required `torch<=2.5.1`, incompatible with my version.
- PyTorch didn’t support `sm_129`, only supporting `sm_50` to `sm_90`.
- Tried upgrading PyTorch and reinstalling `xformers` (following woctordho’s dependency debugging approach), but the architecture issue persisted.
2. **Compiling PyTorch from Source:**
- Cloned PyTorch and set `TORCH_CUDA_ARCH_LIST="12.9"`:
```
git clone --recursive https://github.com/pytorch/pytorch.git
export CUDA_HOME=/usr/local/cuda-12.6
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export TORCH_CUDA_ARCH_LIST="12.9"
python setup.py develop
```
- Faced `git` errors (resolved by fixing directories) and `Killed` errors due to memory issues, mitigated by reducing parallel jobs (`MAX_JOBS=4`), a method suggested by woctordho.
3. **Adjusting GPU Architecture:**
- NCCL compilation failed with `nvcc fatal: Unsupported gpu architecture 'compute_129'` as CUDA 12.8 doesn’t support `sm_129`.
- Upgraded to CUDA 12.8 and set `TORCH_CUDA_ARCH_LIST="9.0"` (sm_90, supported by CUDA 12.8), allowing NCCL to compile (per woctordho’s fallback architecture advice).
- Flash Attention compilation failed with `Killed`. Reduced parallel jobs further and fixed syntax issues with Grok’s help.
- Currently running: `python setup.py develop > pytorch_compile_log_sm90_j4_corrected.txt 2>&1` with `TORCH_CUDA_ARCH_LIST="9.0"` and `MAX_JOBS=4`.
### Questions
1. Is using `sm_90` instead of `sm_129` a reasonable approach? Will this significantly impact performance on my RTX 5070 Ti?
2. Should I try a newer CUDA version (e.g., CUDA 12.9 or 13.0) to fully support `sm_129`?
3. Are there better ways to manage memory issues during compilation on WSL (e.g., adjusting WSL memory allocation)?
4. Any suggestions for compiling PyTorch with Flash Attention on WSL without memory issues?
I’ve been saving all outputs to logs for debugging, as suggested by Grok. Any feedback would be greatly appreciated!
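For reference, the architectures a given PyTorch build supports and the GPU's actual compute capability can be checked with standard PyTorch APIs:
```python
import torch

print(torch.__version__, torch.version.cuda)
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))  # e.g. (12, 0) on RTX 50-series GPUs
print(torch.cuda.get_arch_list())           # arch list this build was compiled for
```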
Thanks,
Kevin
### Versions
| true
|
3,026,893,525
|
Remove 3.13 hack when installing TIMM
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default"
] | 3
|
CONTRIBUTOR
|
A Docker build failure showed up at this step, triggered by the landing of https://github.com/pytorch/pytorch/pull/152362. Here are example logs from https://github.com/pytorch/pytorch/actions/runs/14718029881/job/41305891896:
```
#37 29.72 + as_jenkins conda run -n py_3.13 pip install --progress-bar off --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu124
#37 29.72 + sudo -E -H -u jenkins env -u SUDO_UID -u SUDO_GID -u SUDO_COMMAND -u SUDO_USER env PATH=/opt/conda/envs/py_3.13/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 conda run -n py_3.13 pip install --progress-bar off --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu124
#37 49.50 ERROR: Cannot install torch and torchvision==0.22.0.dev20250226+cu124 because these package versions have conflicting dependencies.
```
This happens because we stopped building 12.4 nightlies some time ago. The hack doesn't apply anymore, so let's just remove it.
| true
|
3,026,872,250
|
[CPU][UT] 16 UT of test/inductor/test_cpu_select_algorithm.py failed with PyTorch 2025-04-028 nightly wheel
|
LifengWang
|
closed
|
[
"oncall: cpu inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
16 unit tests in test/inductor/test_cpu_select_algorithm.py failed.
The suspected guilty commit: d70490ecfee849149a05541008c2601487cf0012
```
FAILED [0.2334s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_per_channel_quant_True_reshape_a_False_expand_a_scale_False_dynamic_False_M_1_cpu_bfloat16
FAILED [0.2222s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_per_channel_quant_True_reshape_a_False_expand_a_scale_False_dynamic_True_M_1_cpu_bfloat16
FAILED [0.2302s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_per_channel_quant_True_reshape_a_True_expand_a_scale_False_dynamic_False_M_1_cpu_bfloat16
FAILED [0.2296s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_per_channel_quant_True_reshape_a_True_expand_a_scale_False_dynamic_True_M_1_cpu_bfloat16
FAILED [0.2215s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_per_channel_quant_True_reshape_a_False_expand_a_scale_False_dynamic_False_M_1_cpu_float32
FAILED [0.2186s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_per_channel_quant_True_reshape_a_False_expand_a_scale_False_dynamic_True_M_1_cpu_float32
FAILED [0.2209s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_per_channel_quant_True_reshape_a_True_expand_a_scale_False_dynamic_False_M_1_cpu_float32
FAILED [0.2230s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_per_channel_quant_True_reshape_a_True_expand_a_scale_False_dynamic_True_M_1_cpu_float32
FAILED [0.2319s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_per_channel_quant_True_reshape_a_False_expand_a_scale_False_dynamic_False_M_1_cpu_bfloat16
FAILED [0.2300s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_per_channel_quant_True_reshape_a_False_expand_a_scale_False_dynamic_True_M_1_cpu_bfloat16
FAILED [0.2417s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_per_channel_quant_True_reshape_a_True_expand_a_scale_False_dynamic_False_M_1_cpu_bfloat16
FAILED [0.2435s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_per_channel_quant_True_reshape_a_True_expand_a_scale_False_dynamic_True_M_1_cpu_bfloat16
FAILED [0.2282s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_per_channel_quant_True_reshape_a_False_expand_a_scale_False_dynamic_False_M_1_cpu_float32
FAILED [0.2265s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_per_channel_quant_True_reshape_a_False_expand_a_scale_False_dynamic_True_M_1_cpu_float32
FAILED [0.2409s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_per_channel_quant_True_reshape_a_True_expand_a_scale_False_dynamic_False_M_1_cpu_float32
FAILED [0.2403s] test/inductor/test_cpu_select_algorithm.py::TestSelectAlgorithmCPU::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_per_channel_quant_True_reshape_a_True_expand_a_scale_False_dynamic_True_M_1_cpu_float32
```
Detailed Error log:
[gemm_ut.log](https://github.com/user-attachments/files/19950963/gemm_ut.log)
cc @chuanqi129 @leslie-fang-intel
### Versions
PyTorch version: 2.8.0a0+git0c03652
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.17 | packaged by conda-forge | (main, Apr 10 2025, 22:19:12) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-138-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) 6972P
CPU family: 6
Model: 173
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 9 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 384 MiB (192 instances)
L3 cache: 960 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.15.0
[pip3] mypy_extensions==1.1.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] torch==2.8.0a0+git0c03652
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0a0+d60ce09
[pip3] torchdata==0.7.0a0+11bb5b8
[pip3] torchmultimodal==0.1.0b0
[pip3] torchtext==0.16.0a0+b0ebddc
[pip3] torchvision==0.19.0a0+d23a6e1
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] mkl-include 2025.1.0 hf2ce2f3_809 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.8.0a0+git0c03652 dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+d60ce09 pypi_0 pypi
[conda] torchdata 0.7.0a0+11bb5b8 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchtext 0.16.0a0+b0ebddc pypi_0 pypi
[conda] torchvision 0.19.0a0+d23a6e1 pypi_0 pypi
| true
|
3,026,787,590
|
Build Issue for power issue related to vec complex double and float
|
Tiwari-Avanish
|
closed
|
[
"module: cpu",
"open source"
] | 4
|
CONTRIBUTOR
|
The Power system build is failing with the error below.
It started failing after this commit:
912102b4ecf776711436f95d2fe62d78e39ad880
This PR fixes the build error along with the test cases that are failing for the complex double and complex float data types.
**Build Failure Logs:**
vec_base.h:790:6: error: use of deleted function ‘at::vec::DEFAULT::ComplexDbl& at::vec::DEFAULT::Vectorized<c10::complex<double> >::operator[](int)’
790 | c[i] = a[i] * b[i];
| ~^
error: use of deleted function ‘at::vec::DEFAULT::ComplexDbl& at::vec::DEFAULT::Vectorized<c10::complex<double> >::oper
ator[](int)’
802 | c[i] = a[i] / b[i];
| ~^
error: use of deleted function ‘at::vec::DEFAULT::ComplexFlt& at::vec::DEFAULT::Vectorized<c10::complex<float> >::opera
tor[](int)’
790 | c[i] = a[i] * b[i];
| ~^
error: use of deleted function ‘at::vec::DEFAULT::ComplexFlt& at::vec::DEFAULT::Vectorized<c10::complex<float> >::opera
tor[](int)’
802 | c[i] = a[i] / b[i];
| ~^
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,026,753,818
|
set thread_work_size to 4 for unrolled kernel
|
ngimel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 16
|
COLLABORATOR
|
Previous PRs enabling 8-vectorization inadvertently regressed unrolled kernel perf.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,656,726
|
[dynamo] Relax guard introduced when tracing `__call__` on user defined object
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152395
* #152369
This relaxes the guard introduced in #100444 (which aggressively guards
on the object id, even though Dynamo is just tracing its `__call__` method).
This allows users to bypass the high compilation time issue in #150706
by compiling transformer blocks only. Without this patch, we'd get lots
of unnecessary recompilation, as the blocks have different attention
processor instances.
Compiling blocks only _significantly_ speeds up the compilation process
(from ~310s to ~32s), and even speeds up e2e performance for some reason
(7.83s to 7.67s).
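A minimal, self-contained sketch of the "compile blocks only" pattern mentioned above (the toy `Block` module is a stand-in for real attention/transformer blocks):
```python
import torch
from torch import nn

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(64, 64)

    def forward(self, x):
        return torch.relu(self.proj(x))

model = nn.Sequential(*[Block() for _ in range(4)])

# Compile each block in place instead of wrapping the whole model in torch.compile.
for block in model:
    block.compile()

out = model(torch.randn(2, 64))
```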
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,026,652,990
|
[Accelerator] Fix Python typing in accelerator
|
cyyever
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
There are some changes:
1. Some accelerator APIs require an accelerator device (that is, without `cpu`). In such cases, Optional device typing causes confusion. Therefore, `ExplicitDevice` is introduced and used in `set_device_index`.
2. Use keywords for arguments if possible.
3. `__exit__` of `device_index` is changed to return None.
| true
|
3,026,652,517
|
[cutlass backend] Add (limited) bmm dynamic shape support
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Differential Revision: D73626732
In this PR, we add support for bmm dynamic shapes, provided that the batch stride is the largest of the strides for A, B, and D. For example, for A of size `(B, M, K)`, we support strides `(M*K, K, 1)` and `(M*K, 1, M)`. With this assumption, we can infer the batch stride from existing arguments.
The reason is we don't want to add 2-3 more runtime params. The concerns are complexity and possible perf regression, though we didn't verify the latter.
We can revisit this if there is a need for that.
We also remove `B = 1` for normal mm and addmm. We tested it and didn't see perf regression. But open to revisiting this as well.
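For illustration, the two stride layouts described above can be produced with plain `torch` calls (sizes are hypothetical):
```python
import torch

B, M, K = 8, 64, 32

a_row_major = torch.randn(B, M, K)                   # stride (M*K, K, 1)
a_col_major = torch.randn(B, K, M).transpose(1, 2)   # shape (B, M, K), stride (K*M, 1, M)

assert a_row_major.stride() == (M * K, K, 1)
assert a_col_major.stride() == (K * M, 1, M)
# In both layouts the batch stride (M*K) is the largest, so it can be inferred
# from M, K and the inner strides without passing extra runtime params.
```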
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,644,650
|
[Inductor] Use `torch._dynamo.utils.same` in block pointer tests, adding atol/rtol kwargs to it.
|
blaine-rister
|
open
|
[
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
This refactor consolidates test utilities by calling `torch._dynamo.utils.same` in Inductor's block pointer tests. To facilitate this, it also adds `atol` and `rtol` kwargs to the function, which previously supported only a single `tol` kwarg that assigned the same value to both.
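Usage after this change would look roughly like the following (the separate tolerances are per this PR; the values shown are an assumption):
```python
import torch
from torch._dynamo.utils import same

expected = torch.ones(8)
actual = expected + 1e-6

# Separate absolute/relative tolerances instead of a single shared `tol`.
assert same(expected, actual, atol=1e-4, rtol=1e-4)
```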
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,637,236
|
[Inductor] Wrapper code refactors to prepare for FX codegen
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
This PR contains some refactors from https://github.com/pytorch/pytorch/pull/146942, which help to enable Wrapper FX codegen:
1. Remove `OutputLine`, which is unused.
2. Add an attribute to the backend classes specifying whether they support caching.
3. Before compiling a graph, query the registered backends and check whether caching is supported.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,605,874
|
[Inductor] Fix typing in cuda_template.py
|
mlazos
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* __->__ #152390
* #150909
* #150907
* #151406
* #150906
| true
|
3,026,562,141
|
[Hierarchical Compilation] Track node mutations
|
mlazos
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152589
* #152572
* #152570
* #152506
* #152410
* #152505
* __->__ #152389
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,026,553,818
|
Add vec_reduce_all specialization for std::plus on AArch64
|
swolchok
|
open
|
[
"module: cpu",
"fb-exported",
"ciflow/trunk"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152388
* #152366
* #152365
* #152364
AArch64 has an instruction for this.
Differential Revision: [D73817183](https://our.internmc.facebook.com/intern/diff/D73817183/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,026,551,461
|
[easy] Fix test_dynamo_timed
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152387
Summary: I'm just trying to fix the test again. It's out of date because it's disabled and some dynamo_timed-related fields are gone now.
Test Plan: `python test/dynamo/test_utils.py -k dynamo_timed`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,026,549,331
|
[PT2]: fix add_passes and remove_passes naming issue
|
kqfu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
Pre_grad passes are initially defined as empty functions, then overridden in [customized_triton_kernel_passes.py](https://www.internalfb.com/code/fbsource/[b4eea3dcd7f22421e68a3c1533fd09a4281bc291]/fbcode/caffe2/torch/_inductor/fx_passes/fb/customized_triton_kernel_passes.py?lines=71-73). This causes issues for add_passes and remove_passes because `p.__name__` may now be prefixed by _.
This diff removes the leading _ to match the pass name.
Test Plan: Tested together with the next diff in the stack.
Reviewed By: oniononion36
Differential Revision: D73809937
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,538,621
|
Illegal Instruction Caused by `grid_sample` Under Windows
|
ericspod
|
open
|
[
"high priority",
"module: build",
"module: windows",
"module: cpu",
"triaged",
"module: regression"
] | 15
|
NONE
|
### 🐛 Describe the bug
In Windows 10, Python 3.12.9, Pytorch 2.7.0+cu118, CUDA 12.2, the following code produces an "illegal instruction" causing an immediate crash:
```python
import torch
import torch.nn as nn
src = torch.rand((1, 1, 128, 64), dtype=torch.float64)
grid = torch.rand((1, 256, 256, 2), dtype=torch.float64)
dst = nn.functional.grid_sample(
input=src.contiguous(),
grid=grid,
mode="bilinear",
padding_mode="border",
align_corners=False
)
```
This is specific to float64 tensors; using float32 for both src and grid allows the function to execute correctly.
This issue with the current version of PyTorch is the source of CI/CD failures on GitHub Windows runners, as seen in [this PR](https://github.com/Project-MONAI/MONAI/pull/8429). The tests fail with PyTorch 2.7 specifically; previous versions do not exhibit this issue.
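Until this is fixed, one workaround consistent with the observation above is to run the sampling in float32 and cast back (a sketch, not a fix):
```python
import torch
import torch.nn.functional as F

src = torch.rand((1, 1, 128, 64), dtype=torch.float64)
grid = torch.rand((1, 256, 256, 2), dtype=torch.float64)

# float32 grid_sample does not hit the illegal instruction; cast back afterwards.
dst = F.grid_sample(
    src.float(), grid.float(),
    mode="bilinear", padding_mode="border", align_corners=False,
).to(src.dtype)
```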
Output from `/proc/cpuinfo` in case any more detail is relevant:
```
processor : 0
vendor_id : AuthenticAMD
cpu family : 25
model : 33
model name : AMD Ryzen 9 5900X 12-Core Processor
stepping : 2
microcode : 0xA20120A
cpu MHz : 3700.000
cache size : 65536 KB
physical id : 0
siblings : 24
core id : 0
cpu cores : 24
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 17
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmuldq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw wdt topoext cpb hw_pstate ibrs ibpb stibp fsgsbase bmi1 avx2 smep bmi2 erms cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr overflow_recov succor smca
bogomips : 7400.00
TLB size : 2560 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp hwpstate cpb eff_freq_ro
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro (10.0.19045 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:49:16) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 536.23
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 9 5900X 12-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3701
MaxClockSpeed: 3701
L2CacheSize: 6144
L2CacheSpeed: None
Revision: 8450
Versions of relevant libraries:
[pip3] flake8==7.2.0
[pip3] flake8-bugbear==24.2.6
[pip3] flake8-comprehensions==3.16.0
[pip3] mypy==1.11.2
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.2.5
[pip3] onnx==1.17.0
[pip3] onnx_graphsurgeon==0.5.8
[pip3] pytorch-ignite==0.4.11
[pip3] torch==2.7.0+cu118
[pip3] torchio==0.20.7
[pip3] torchvision==0.22.0
[conda] numpy 2.2.5 pypi_0 pypi
[conda] pytorch-ignite 0.4.11 pypi_0 pypi
[conda] torch 2.7.0+cu118 pypi_0 pypi
[conda] torchio 0.20.7 pypi_0 pypi
[conda] torchvision 0.22.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,026,536,528
|
[inductor][invoke_subgraph] Remove assertion checks for outputs of invoke_subgraph
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152494
* #152490
* #152383
* __->__ #152384
* #152581
* #152547
For invoke_subgraph, input assertions are good, but we don't need output assertions. Here is the tlparse:
Before

After

https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmppQg3F8/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,536,446
|
[inductor][subgraph] Simplify the resulting output code for subgraph
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152675
* #152494
* #152490
* __->__ #152383
Check out the output code.
Before this PR - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp3iXDVs/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000

After this PR - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpRgUJvq/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,517,901
|
fix: outdated contents in dynamo overview
|
huijjj
|
open
|
[
"triaged",
"open source"
] | 4
|
NONE
|
Fixes #152381
| true
|
3,026,517,542
|
Outdated contents in dynamo overview
|
huijjj
|
open
|
[
"module: docs",
"triaged"
] | 1
|
NONE
|
### 📚 The doc issue
Contents of the Dynamo overview [document](https://pytorch.org/docs/stable/torch.compiler_dynamo_overview.html) are outdated, especially the parts regarding guards.

`cache_entry` does not have `check_fn` anymore; instead it has `guard_manager` to manage and check guards.

The printed code for the guards is also quite outdated.
### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
3,026,510,940
|
[aten] Enable vectorized 8byte copy for fp16/bf16 for index select kernel
|
jeetkanjani7
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: performance"
] | 4
|
CONTRIBUTOR
|
## Summary
Enable aligned vector loading for 2-byte data types in index select. Specifically:
- **4-element fp16/bf16 packing**: added 8-byte vector load/store to move 4 half values at once.
- **Warp-wide predicate (__all_sync)**: decides fast vs. fallback path per warp, eliminating lane-level divergence.
- **Alignment guard**: the fast/vectorized path only executes when src and dst are 8-byte aligned, preventing misaligned address faults.
- **Safe for-loop fallback**: for misaligned, stride > 1, or tail elements we recompute offsets per element to avoid memory corruption.
- **Bounds checks**: the fast/vectorized path is skipped when fewer than 4 elements remain, guaranteeing bounded access.
- **Stride remapping**: redirect copies to the inner contiguous dim (stride = 1) so they occur along memory-coalesced axes.
- **AMD support**: Ensured portability and correctness across CUDA and HIP platforms.
## Perf testing
We note a 2.5x improvement in memory bandwidth after this change when the tensor dim is a multiple of 4 for 2 byte data types (fp16/bf16).
<img width="625" alt="image" src="https://github.com/user-attachments/assets/909b04a3-98f2-4c30-8c29-c36e1beeea0f" />
With input tensor dimension not being a multiple of 4, we see a smaller improvement (~1.2x) due to warp divergence.
<img width="624" alt="image" src="https://github.com/user-attachments/assets/f3ed16f4-b091-48bd-9889-093f6a90688d" />
## Perf testing code
```
# pyre-strict
from typing import List, Optional, Tuple
import click
import pandas as pd
import torch
# @manual=//triton:triton
import triton
@click.command()
@click.option("--data-type", type=str, default="bf16")
@click.option("--return-result", type=bool, default=False)
def main(
data_type: str,
return_result: bool,
) -> Optional[Tuple[List[triton.testing.Benchmark], List[pd.DataFrame]]]:
torch.backends.cudnn.allow_tf32 = True
torch.backends.cuda.matmul.allow_tf32 = True
data_types = {"fp32", "fp16", "bf16"}
if data_type not in data_types:
raise ValueError(f"Unsupported data type: {data_type}.")
dtype = {
"fp32": torch.float32,
"fp16": torch.float16,
"bf16": torch.bfloat16
}[data_type]
D1 = 192
D2 = 156
configs: List[triton.testing.Benchmark] = [
triton.testing.Benchmark(
x_names=["B"],
x_vals=[24],
line_arg="provider",
line_vals=[
"repeat_interleave",
"repeat_interleave_int32",
],
line_names=["repeat_interleave", "repeat_interleave_int32"],
styles=[("red", "-"), ("purple", "-")],
ylabel="ms",
plot_name=f"torch-repeat_interleave-D1-{D1}-D2-{D2}-dtype-{dtype}",
args={
"D1": D1,
"D2": D2,
"dtype": dtype,
},
)
]
@triton.testing.perf_report(configs)
def bench_repeat_interleave(
B: int,
D1: int,
D2: int,
dtype: torch.dtype,
provider: str,
) -> float:
warmup = 20
rep = 100
torch.manual_seed(42)
torch.cuda.manual_seed(42)
a = torch.randn(24, D1, D2)
a = a.to(dtype).to("cuda")
input_bytes = a.numel() * a.element_size()
repeats = torch.randint(low=100, high=1600, size=(24,), device="cuda")
output_bytes = (
repeats.sum() * a.shape[1] * a.shape[2] * repeats.element_size()
)
total_bytes = input_bytes + output_bytes
def torch_repeat_interleave(
input_tensor: torch.Tensor, repeats: torch.Tensor
) -> torch.Tensor:
res = input_tensor.repeat_interleave(repeats, dim=0)
return res
def torch_repeat_interleave_int32(
input_tensor: torch.Tensor, repeats: torch.Tensor
) -> torch.Tensor:
dim = 0
if torch.is_tensor(repeats):
idx64 = torch.repeat_interleave(
torch.arange(
0,
input_tensor.shape[dim or 0],
device=input_tensor.device,
),
repeats,
dim=0,
)
else:
idx64 = (
torch.arange(
input_tensor.shape[dim or 0] * repeats,
device=input_tensor.device,
)
.reshape(-1, repeats)
.flatten()
)
idx32 = idx64.to(torch.int32)
res = torch.index_select(input_tensor, 0, idx32)
return res
def expand_flatten(input_tensor: torch.Tensor) -> torch.Tensor:
return input_tensor[:, None].expand(-1, 4, -1).flatten(0, 1)
if provider == "repeat_interleave":
fn = lambda: torch_repeat_interleave(a, repeats) # noqa E731
ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
bw = total_bytes / (ms * 1e6)
# print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
return bw.item()
if provider == "repeat_interleave_int32":
fn = lambda: torch_repeat_interleave_int32(a, repeats)
ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
bw = total_bytes / (ms * 1e6)
# print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
return bw.item()
elif provider == "expand_flatten":
fn = lambda: expand_flatten(a)
ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
bw = total_bytes / (ms * 1e6)
# print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
return bw.item()
else:
raise ValueError(f"unsupported provider: {provider}")
df = bench_repeat_interleave.run(print_data=True, return_df=True)
if return_result:
return configs, df
if __name__ == "__main__":
main()
```
| true
|
3,026,506,465
|
[inductor] if unbacked symint in old-size or new-size skip mark_reuse check
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 18
|
CONTRIBUTOR
|
We could probably make the `mark_reuse` check work with unbacked sizes under certain conditions,
e.g. `x.repeat(u0, 2).repeat(2, u0)`.
But I think cases like those are rare, so we skip the check for now.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152379
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,504,555
|
[FP8][CUTLASS] xFail `honor_sm_carveout` on `sm100`
|
eqy
|
open
|
[
"module: cuda",
"triaged",
"open source",
"topic: not user facing",
"matrix multiplication",
"module: float8"
] | 2
|
COLLABORATOR
|
CUTLASS only supports SM carveout via green contexts on `sm100`
cc @ptrblck @msaroufim @jerryzh168 @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
3,026,502,071
|
Run link linters on modified files only or on everything when scheduled
|
shoumikhin
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
| null | true
|
3,026,477,854
|
[aten] Enable vectorized 8byte copy for fp16/bf16 for index select kernel
|
jeetkanjani7
|
closed
|
[
"release notes: cuda"
] | 2
|
CONTRIBUTOR
|
## Summary
Enable aligned vector loading for 2-byte data types in index select. Specifically:
- **4-element fp16/bf16 packing**: added 8-byte vector load/store to move 4 half values at once.
- **Warp-wide predicate (__all_sync)**: decides fast vs. fallback path per warp, eliminating lane-level divergence.
- **Alignment guard**: the fast/vectorized path only executes when src and dst are 8-byte aligned, preventing misaligned address faults.
- **Safe for-loop fallback**: for misaligned, stride > 1, or tail elements we recompute offsets per element to avoid memory corruption.
- **Bounds checks**: the fast/vectorized path is skipped when fewer than 4 elements remain, guaranteeing bounded access.
- **Stride remapping**: redirect copies to the inner contiguous dim (stride = 1) so they occur along memory-coalesced axes.
- **AMD support**: Ensured portability and correctness across CUDA and HIP platforms.
## Perf testing
We note a 2.5x improvement in memory bandwidth after this change when the tensor dim is a multiple of 4 for 2 byte data types (fp16/bf16).
<img width="625" alt="image" src="https://github.com/user-attachments/assets/909b04a3-98f2-4c30-8c29-c36e1beeea0f" />
With input tensor dimension not being a multiple of 4, we see a smaller improvement (~1.2x) due to warp divergence.
<img width="624" alt="image" src="https://github.com/user-attachments/assets/f3ed16f4-b091-48bd-9889-093f6a90688d" />
## Perf testing code
```
# pyre-strict
from typing import List, Optional, Tuple
import click
import pandas as pd
import torch
# @manual=//triton:triton
import triton
@click.command()
@click.option("--data-type", type=str, default="bf16")
@click.option("--return-result", type=bool, default=False)
def main(
data_type: str,
return_result: bool,
) -> Optional[Tuple[List[triton.testing.Benchmark], List[pd.DataFrame]]]:
torch.backends.cudnn.allow_tf32 = True
torch.backends.cuda.matmul.allow_tf32 = True
data_types = {"fp32", "fp16", "bf16"}
if data_type not in data_types:
raise ValueError(f"Unsupported data type: {data_type}.")
dtype = {
"fp32": torch.float32,
"fp16": torch.float16,
"bf16": torch.bfloat16
}[data_type]
D1 = 192
D2 = 156
configs: List[triton.testing.Benchmark] = [
triton.testing.Benchmark(
x_names=["B"],
x_vals=[24],
line_arg="provider",
line_vals=[
"repeat_interleave",
"repeat_interleave_int32",
],
line_names=["repeat_interleave", "repeat_interleave_int32"],
styles=[("red", "-"), ("purple", "-")],
ylabel="ms",
plot_name=f"torch-repeat_interleave-D1-{D1}-D2-{D2}-dtype-{dtype}",
args={
"D1": D1,
"D2": D2,
"dtype": dtype,
},
)
]
@triton.testing.perf_report(configs)
def bench_repeat_interleave(
B: int,
D1: int,
D2: int,
dtype: torch.dtype,
provider: str,
) -> float:
warmup = 20
rep = 100
torch.manual_seed(42)
torch.cuda.manual_seed(42)
a = torch.randn(24, D1, D2)
a = a.to(dtype).to("cuda")
input_bytes = a.numel() * a.element_size()
repeats = torch.randint(low=100, high=1600, size=(24,), device="cuda")
output_bytes = (
repeats.sum() * a.shape[1] * a.shape[2] * repeats.element_size()
)
total_bytes = input_bytes + output_bytes
def torch_repeat_interleave(
input_tensor: torch.Tensor, repeats: torch.Tensor
) -> torch.Tensor:
res = input_tensor.repeat_interleave(repeats, dim=0)
return res
def torch_repeat_interleave_int32(
input_tensor: torch.Tensor, repeats: torch.Tensor
) -> torch.Tensor:
dim = 0
if torch.is_tensor(repeats):
idx64 = torch.repeat_interleave(
torch.arange(
0,
input_tensor.shape[dim or 0],
device=input_tensor.device,
),
repeats,
dim=0,
)
else:
idx64 = (
torch.arange(
input_tensor.shape[dim or 0] * repeats,
device=input_tensor.device,
)
.reshape(-1, repeats)
.flatten()
)
idx32 = idx64.to(torch.int32)
res = torch.index_select(input_tensor, 0, idx32)
return res
def expand_flatten(input_tensor: torch.Tensor) -> torch.Tensor:
return input_tensor[:, None].expand(-1, 4, -1).flatten(0, 1)
if provider == "repeat_interleave":
fn = lambda: torch_repeat_interleave(a, repeats) # noqa E731
ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
bw = total_bytes / (ms * 1e6)
# print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
return bw.item()
if provider == "repeat_interleave_int32":
fn = lambda: torch_repeat_interleave_int32(a, repeats)
ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
bw = total_bytes / (ms * 1e6)
# print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
return bw.item()
elif provider == "expand_flatten":
fn = lambda: expand_flatten(a)
ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
bw = total_bytes / (ms * 1e6)
# print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
return bw.item()
else:
raise ValueError(f"unsupported provider: {provider}")
df = bench_repeat_interleave.run(print_data=True, return_df=True)
if return_result:
return configs, df
if __name__ == "__main__":
main()
```
| true
|
3,026,457,608
|
Feature/enable 8 byte vector loading
|
jeetkanjani7
|
closed
|
[
"release notes: cuda"
] | 3
|
CONTRIBUTOR
|
## Summary
Enable aligned vector loading for 2-byte data types in index select. Specifically:
- **4-element fp16/bf16 packing**: added 8-byte vector load/store to move 4 half values at once.
- **Warp-wide predicate (__all_sync)**: decides fast vs. fallback path per warp, eliminating lane-level divergence.
- **Alignment guard**: the fast/vectorized path only executes when src and dst are 8-byte aligned, preventing misaligned address faults.
- **Safe for-loop fallback**: for misaligned, stride > 1, or tail elements we recompute offsets per element to avoid memory corruption.
- **Bounds checks**: the fast/vectorized path is skipped when fewer than 4 elements remain, guaranteeing bounded access.
- **Stride remapping**: redirect copies to the inner contiguous dim (stride = 1) so they occur along memory-coalesced axes.
- **AMD support**: Ensured portability and correctness across CUDA and HIP platforms.
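The change is internal to the CUDA/HIP kernel, so it is exercised through the regular op; here is a minimal sketch of an input that should satisfy the fast-path conditions above (2-byte dtype, contiguous inner dim whose row size is 8-byte aligned). The shapes and names are illustrative, not taken from the PR:
```python
import torch

# bf16 elements are 2 bytes; an inner dim that is a multiple of 4 keeps each
# contiguous row a multiple of 8 bytes, so the vectorized path can be taken.
src = torch.randn(1024, 192, 156, dtype=torch.bfloat16, device="cuda").contiguous()
idx = torch.randint(0, src.size(0), (4096,), device="cuda")

out = torch.index_select(src, 0, idx)  # expected to use the 8-byte (4 x half) loads
```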
## Perf testing
We note a 2.5x improvement in memory bandwidth after this change when the inner tensor dim is a multiple of 4 for 2-byte data types (fp16/bf16).
<img width="625" alt="image" src="https://github.com/user-attachments/assets/909b04a3-98f2-4c30-8c29-c36e1beeea0f" />
When the input tensor dimension is not a multiple of 4, we see a smaller improvement (~1.2x) due to warp divergence.
<img width="624" alt="image" src="https://github.com/user-attachments/assets/f3ed16f4-b091-48bd-9889-093f6a90688d" />
## Perf testing code
```
# pyre-strict
from typing import List, Optional, Tuple
import click
import pandas as pd
import torch
# @manual=//triton:triton
import triton
@click.command()
@click.option("--data-type", type=str, default="bf16")
@click.option("--return-result", type=bool, default=False)
def main(
    data_type: str,
    return_result: bool,
) -> Optional[Tuple[List[triton.testing.Benchmark], List[pd.DataFrame]]]:
    torch.backends.cudnn.allow_tf32 = True
    torch.backends.cuda.matmul.allow_tf32 = True

    data_types = {"fp32", "fp16", "bf16"}
    if data_type not in data_types:
        raise ValueError(f"Unsupported data type: {data_type}.")
    dtype = {
        "fp32": torch.float32,
        "fp16": torch.float16,
        "bf16": torch.bfloat16
    }[data_type]

    D1 = 192
    D2 = 156

    configs: List[triton.testing.Benchmark] = [
        triton.testing.Benchmark(
            x_names=["B"],
            x_vals=[24],
            line_arg="provider",
            line_vals=[
                "repeat_interleave",
                "repeat_interleave_int32",
            ],
            line_names=["repeat_interleave", "repeat_interleave_int32"],
            styles=[("red", "-"), ("purple", "-")],
            ylabel="ms",
            plot_name=f"torch-repeat_interleave-D1-{D1}-D2-{D2}-dtype-{dtype}",
            args={
                "D1": D1,
                "D2": D2,
                "dtype": dtype,
            },
        )
    ]

    @triton.testing.perf_report(configs)
    def bench_repeat_interleave(
        B: int,
        D1: int,
        D2: int,
        dtype: torch.dtype,
        provider: str,
    ) -> float:
        warmup = 20
        rep = 100
        torch.manual_seed(42)
        torch.cuda.manual_seed(42)

        a = torch.randn(24, D1, D2)
        a = a.to(dtype).to("cuda")
        input_bytes = a.numel() * a.element_size()
        repeats = torch.randint(low=100, high=1600, size=(24,), device="cuda")
        output_bytes = (
            repeats.sum() * a.shape[1] * a.shape[2] * repeats.element_size()
        )
        total_bytes = input_bytes + output_bytes

        def torch_repeat_interleave(
            input_tensor: torch.Tensor, repeats: torch.Tensor
        ) -> torch.Tensor:
            res = input_tensor.repeat_interleave(repeats, dim=0)
            return res

        def torch_repeat_interleave_int32(
            input_tensor: torch.Tensor, repeats: torch.Tensor
        ) -> torch.Tensor:
            dim = 0
            if torch.is_tensor(repeats):
                idx64 = torch.repeat_interleave(
                    torch.arange(
                        0,
                        input_tensor.shape[dim or 0],
                        device=input_tensor.device,
                    ),
                    repeats,
                    dim=0,
                )
            else:
                idx64 = (
                    torch.arange(
                        input_tensor.shape[dim or 0] * repeats,
                        device=input_tensor.device,
                    )
                    .reshape(-1, repeats)
                    .flatten()
                )
            idx32 = idx64.to(torch.int32)
            res = torch.index_select(input_tensor, 0, idx32)
            return res

        def expand_flatten(input_tensor: torch.Tensor) -> torch.Tensor:
            return input_tensor[:, None].expand(-1, 4, -1).flatten(0, 1)

        if provider == "repeat_interleave":
            fn = lambda: torch_repeat_interleave(a, repeats)  # noqa E731
            ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
            bw = total_bytes / (ms * 1e6)
            # print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
            return bw.item()
        if provider == "repeat_interleave_int32":
            fn = lambda: torch_repeat_interleave_int32(a, repeats)
            ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
            bw = total_bytes / (ms * 1e6)
            # print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
            return bw.item()
        elif provider == "expand_flatten":
            fn = lambda: expand_flatten(a)
            ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
            bw = total_bytes / (ms * 1e6)
            # print("Bandwidth[GB/s]: ", total_bytes / (ms * 1e6))
            return bw.item()
        else:
            raise ValueError(f"unsupported provider: {provider}")

    df = bench_repeat_interleave.run(print_data=True, return_df=True)
    if return_result:
        return configs, df


if __name__ == "__main__":
    main()
```
| true
|
3,026,419,012
|
TORCH_COMPILE_DEBUG=1 does not consistently generate debug logs
|
xuxalan
|
open
|
[
"module: logging",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
NONE
|
### 🐛 Describe the bug
I am trying to collect log files generated during `torch.compile` execution for debugging purposes, but the files do not always appear.
I created the following simple test script:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
device = torch.device("cuda")
class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.proj1 = nn.Linear(10, 10)
        self.proj2 = nn.Linear(10, 10)

    def forward(self, x):
        x = F.relu(self.proj1(x))
        x = self.proj2(x)
        return x
torch.compiler.reset()
model = SimpleModel()
model.to(device)
compiled_model = torch.compile(model)
x = torch.randn(1, 10).to(device)
y = compiled_model(x)
```
**Observed behavior:**
After a reboot, the debug log files are correctly generated under `torch_compile_debug/<log_dir>/torchinductor/model__0_forward_1.0/`. However, no new log files are generated when running the script again without rebooting. This might be related to a cache issue.
**Expected behavior:**
`TORCH_COMPILE_DEBUG=1` would consistently generate debug logs, without requiring a system reboot.
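One way to test the cache hypothesis (reusing `model` and `x` from the script above) is to force a cold compile; a minimal sketch, assuming the `fresh_inductor_cache` helper and the `TORCHINDUCTOR_FORCE_DISABLE_CACHES` knob available in recent builds:
```python
import torch
from torch._inductor.utils import fresh_inductor_cache

# Re-run the compile inside a temporary inductor cache dir; if the debug
# artifacts reappear, the missing logs on warm runs are a cache-hit side
# effect rather than a logging bug.
with fresh_inductor_cache():
    torch.compiler.reset()
    compiled_model = torch.compile(model)
    y = compiled_model(x)
```
Alternatively, setting `TORCHINDUCTOR_FORCE_DISABLE_CACHES=1` alongside `TORCH_COMPILE_DEBUG=1` should also force regeneration on every run (treat this as a suggestion, not a confirmed fix).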
### Error logs
No debug files generated.
### Versions
[env.txt](https://github.com/user-attachments/files/19949677/env.txt)
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,026,374,148
|
complex.pow(2) on GPU by replacing with complex * complex to avoid numerical instability
|
Raman-Kumar
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Fixes #150951
Summary:
For `complex.pow(2)` on GPU, this change:
- Uses `complex * complex` directly.
- Produces results consistent with the CPU implementation.
- Eliminates spurious imaginary components for real inputs (illustrated below).
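A minimal sketch of the behavior this targets (assuming a CUDA device is available; the exact magnitude of any spurious imaginary parts before the fix varies by GPU):
```python
import torch

x = torch.full((4,), 3.0 + 0.0j, dtype=torch.complex64, device="cuda")

print(torch.pow(x, 2))        # before this change: GPU result may carry tiny imaginary parts
print(x * x)                  # the replacement computation: exactly (9+0j)
print(torch.pow(x.cpu(), 2))  # CPU result used as the consistency baseline
```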
🧪 Tests
Added unit tests to verify correctness of the new kernel path.
Verified numerical consistency with CPU results.
This change is backward-compatible and only affects the specific case of pow(2) on complex tensors on GPU.
| true
|
3,026,362,260
|
[Relandx2] Rewrite the guts of torch::jit::Lexer to speed it up
|
swolchok
|
open
|
[
"oncall: jit",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: jit",
"ci-no-td",
"ciflow/s390"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152372
Reapplying with fix for linux-manylinux-2_28-py3-cpu-s390x / build
failure
(https://github.com/pytorch/pytorch/actions/runs/14716285820/job/41300304223#logs),
which is to just update a pair of static_assert constants I got wrong.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,026,350,175
|
[MPS] fix memory leak in sdpa float32
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 5
|
COLLABORATOR
|
Fixes #152344
The leak appears to be on the MPSGraph side: the existing identity tensor is no longer enough to bypass the SDPA sequence, and it is that sequence which leaks memory.
Even adding 0.0f gets optimized away and still takes the SDPA sequence (that's the reason for adding 1e-20 instead).
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,026,336,931
|
DISABLED test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41293298598).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 323, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,228,048
|
[AOTAutogradCache] Allow `torch.Tensor` and a non-torch op from einops
|
StrongerXi
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152395
* __->__ #152369
This addresses part of #150706.
Specifically, it reduces the warm start `torch.compile` overhead by
40~50% for GGUF models on
1. HuggingFace diffusers: [tlparse before, 224s](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpqgbdva/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) v.s. [tlparse after, 126s](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp950PFy/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000)
2. ComfyUI: [tlparse before, 93s](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp7SeJb4/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) v.s. [tlparse after, 51s](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpRwGNqA/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000)
The improvements should generalize to all other GGUF models on these
platforms, because the cache miss was induced by framework code, which
will be hit by every GGUF model.
| true
|
3,026,221,694
|
Remove conda refs in tools
|
Camyll
|
closed
|
[
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: dataloader"
] | 3
|
CONTRIBUTOR
|
Fixes #152126
Did not find references in the two .ipynb files
| true
|
3,026,210,520
|
DISABLED test_reduce_stress_cuda (__main__.ProcessGroupGlooTest)
|
jithunnair-amd
|
open
|
[
"module: rocm",
"triaged",
"skipped"
] | 1
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/actions/runs/14713239294/job/41293256180
The stress_cuda tests seem to be flaky.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,026,165,767
|
vec::map: directly process reduced-precision floats when reasonable
|
swolchok
|
open
|
[
"module: cpu",
"fb-exported"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152388
* __->__ #152366
* #152365
* #152364
The immediate motivation is to make map support match
ExecuTorch so we can delete ExecuTorch-specific mapping functions, but
this should also straightforwardly improve performance.
Testing: there is existing coverage for this in
vec_test_all_types.cpp. Verified that it really does cover the newly
enabled "don't convert through float" paths by temporarily adding a
TORCH_INTERNAL_ASSERT(false).
Differential Revision: [D73802126](https://our.internmc.facebook.com/intern/diff/D73802126/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,026,165,374
|
add is_vec_specialized_for
|
swolchok
|
open
|
[
"module: cpu",
"fb-exported",
"ciflow/trunk",
"release notes: cpp"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152388
* #152366
* __->__ #152365
* #152364
Let people detect at compile time whether Vectorized is specialized for a given type. See vec_base.h.
Differential Revision: [D73802129](https://our.internmc.facebook.com/intern/diff/D73802129/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,026,165,241
|
Format all headers under ATen/cpu/vec, not just top-level
|
swolchok
|
open
|
[
"module: cpu",
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152388
* #152366
* #152365
* __->__ #152364
Not formatting these seems like an oversight. I had to add a few clang-format suppressions to keep includes in the same order and avoid breaking builds.
This PR was generated using `lintrunner --paths-cmd "rg --files -g '*.h' aten/src/ATen/cpu/vec/" format`
Differential Revision: [D73802128](https://our.internmc.facebook.com/intern/diff/D73802128/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @albanD
| true
|
3,026,133,365
|
[MPSInductor][BE] Make all reductions cacheable
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152363
By moving the actual implementation to `_reduction_nocache` and making `reduction` a caching wrapper.
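A rough sketch of the caching-wrapper pattern being described (illustrative only; the method name `_reduction_nocache` comes from this PR, but the signature and cache layout below are assumptions, not the actual MPSInductor code):
```python
class ReductionHandlerSketch:
    def __init__(self):
        self._reduction_cache = {}

    def reduction(self, dtype, src_dtype, reduction_type, value):
        # Identical reductions within one kernel hit the cache instead of
        # re-emitting the reduction body.
        key = (dtype, src_dtype, reduction_type, value)
        if key not in self._reduction_cache:
            self._reduction_cache[key] = self._reduction_nocache(
                dtype, src_dtype, reduction_type, value
            )
        return self._reduction_cache[key]

    def _reduction_nocache(self, dtype, src_dtype, reduction_type, value):
        # Stand-in for the real codegen of the reduction.
        return f"{reduction_type}<{src_dtype}->{dtype}>({value})"
```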
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,086,804
|
Add CUDA 12.8 almalinux image, remove CUDA 12.4 almalinux
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This is a general-purpose image located at: https://hub.docker.com/r/pytorch/almalinux-builder
Updating it to match our supported CUDA matrix.
Adding this build to serve as a general-purpose image and to be used for the Magma build.
| true
|
3,026,077,531
|
[Will This Work?] Build libgomp (gcc-11) from src on AArch64
|
fadara01
|
open
|
[
"open source",
"module: arm",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152361
cc @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,026,066,837
|
Cast to unsigned char to avoid UB
|
io-no
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
The standard requires that the argument to functions like `isdigit`, `isalpha`, and similar must be either `EOF` or an `unsigned char`; otherwise, the behavior is undefined (UB).
To avoid out-of-bounds reads, modern implementations of some libraries (such as glibc) deliberately pad their internal tables to guarantee valid memory access even for negative values. However, this is implementation-specific, and other libraries may not do this.
Properly casting the argument to `unsigned char` is good practice to avoid potential issues on some platforms.
| true
|
3,026,050,317
|
fix:Update padding_mode to use Literal for type checking
|
sujeet4010
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
NONE
|
Fixes #152280
| true
|
3,026,039,978
|
Use almalinux docker files for building Magma
|
atalman
|
closed
|
[
"module: cuda",
"Merged",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Resolves https://github.com/pytorch/pytorch/issues/151707 for CUDA (NVIDIA) Magma builds.
Removes the deprecated CUDA 12.4 build.
Using the `pytorch/manylinux2_28-builder` image for the Magma build creates a circular dependency.
For a while the Magma builds used the `conda-builder` image, which does not have that circular dependency:
https://github.com/pytorch/builder/blob/release/2.4/magma/Makefile#L13
However, during the migration to pytorch/pytorch (https://github.com/pytorch/pytorch/pull/139888) we introduced a circular dependency by using the Manylinux 2.28 Docker image.
Hence this switches to the almalinux image, which is supposed to be a general-purpose image.
Please note: Magma builds currently use Docker build directly (https://github.com/pytorch/pytorch/blob/main/.ci/magma/README.md); we can look into migrating them to Docker images as a follow-up BE change if needed.
TODO: Make the same change for ROCm builds. I believe some more work is required for ROCm, since magma-rocm requires the ROCm dev, utils, and lib packages to be installed: https://github.com/pytorch/pytorch/blob/main/.ci/docker/common/install_rocm.sh
cc @ptrblck @msaroufim @eqy @jerryzh168 @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,026,027,616
|
[invoke_subgraph] Cache on tangent metadata and retrace if needed
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152547
* #152494
* #152490
* #152383
* #152384
* __->__ #152357
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,930,546
|
[AOTI] Package lowered with package_constants_in_so=False still uses lots of memory when loaded
|
henrylhtsang
|
closed
|
[
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I am lowering a model with AOTI with `package_constants_in_so=False`, so I expect the output .pt2 archive to be small (which holds) and no extra GPU memory usage when I load it (which does not hold).
The output .pt2 archive is 1.7MB, which is good. But I notice a memory jump when I load the model with `aoti_load_package`.
repro:
```
import os
os.environ["PYTORCH_NO_CUDA_MEMORY_CACHING"] = "1"
import logging
import torch
from torch import nn
class M(torch.nn.Module):
    def __init__(self, n):
        super().__init__()
        self.a = nn.Parameter(
            torch.randn(n, n, device="cuda", dtype=torch.float16), requires_grad=False
        )

    def forward(self, b):
        return self.a @ b


def main():
    n = 1024 * 64
    input1 = (torch.rand(n, device="cuda", dtype=torch.float16),)
    model = M(n).cuda()
    logging.warning(
        f"Memory used by GPU: {torch.cuda.device_memory_used() / 1000 / 1000 / 1000} GB."
    )
    _ = model(*input1)
    ep = torch.export.export(model, input1, strict=False)
    inductor_configs = {
        "aot_inductor.package_constants_in_so": False,
    }
    path = torch._inductor.aoti_compile_and_package(
        ep, inductor_configs=inductor_configs
    )
    logging.warning(f"path: {path}")
    logging.warning(
        f"Memory used by GPU: {torch.cuda.device_memory_used() / 1000 / 1000 / 1000} GB."
    )
    aot_model = torch._inductor.aoti_load_package(path)
    logging.warning(
        f"Memory used by GPU: {torch.cuda.device_memory_used() / 1000 / 1000 / 1000} GB."
    )
    aot_model.load_constants(
        model.state_dict(), check_full_update=True, user_managed=True
    )
    logging.warning(
        f"Memory used by GPU: {torch.cuda.device_memory_used() / 1000 / 1000 / 1000} GB."
    )
    print("done")


if __name__ == "__main__":
    main()
```
logs:
```
# after model instantiation
WARNING:root:Memory used by GPU: 9.787342847999998 GB.
# after aoti lowering
WARNING:root:Memory used by GPU: 9.888006143999998 GB.
# after loading with aoti_load_package
WARNING:root:Memory used by GPU: 18.477940736 GB.
# after load_constants
WARNING:root:Memory used by GPU: 18.477940736 GB.
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1 @muchulee8
### Versions
trunk
| true
|
3,025,922,019
|
Pin setuptools runtime dependency
|
atalman
|
open
|
[
"module: binaries",
"module: build",
"module: cpp-extensions",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This is related to https://github.com/pytorch/pytorch/issues/152276
We would like to pin the setuptools runtime dependency.
Currently the setuptools version is not pinned in the PyTorch wheel METADATA:
```
Requires-Dist: setuptools; python_version >= "3.12"
```
We need to pin setuptools version to less than 80.
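For illustration, the pinned metadata line might look like the following (exact bound still to be confirmed):
```
Requires-Dist: setuptools<80; python_version >= "3.12"
```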
### Versions
2.7.1
cc @seemethere @malfet @osalpekar @zou3519
| true
|
3,025,873,311
|
Add codeowner for merge rules
|
albanD
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
To ensure changes to merge rights are properly reviewed.
Also makes the CODEOWNERS file valid by removing invalid users.
| true
|
3,025,825,393
|
[inductor][dynamo] Include operator name in size/stride/alignment assertion
|
karthickai
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"keep-going",
"skip-url-lint"
] | 26
|
COLLABORATOR
|
Fixes #151930
This PR updates the `assert_size_stride` and `assert_alignment` functions in [guards.cpp](https://github.com/pytorch/pytorch/blob/main/torch/csrc/dynamo/guards.cpp) to accept an optional `op_name` argument and includes it in the error messages.
The corresponding type stubs in [guards.pyi](https://github.com/pytorch/pytorch/blob/main/torch/_C/_dynamo/guards.pyi) are updated to match the new function arg.
In [inductor/ir.py](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/ir.py), the operator name is extracted from the FX graph and passed into the `codegen_size_asserts` and `codegen_alignment_asserts` functions, so that generated assertions in Triton code include the op name for better debugging.
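For illustration only (the buffer name and shapes below are made up, and the commented call shows the new optional argument this PR adds):
```python
import torch
from torch._C._dynamo.guards import assert_size_stride

buf0 = torch.empty(8, 16)

# today: the failure message gives no hint about which op produced buf0
assert_size_stride(buf0, (8, 16), (16, 1))

# after this PR (illustrative): the generated wrapper can pass the op name
# assert_size_stride(buf0, (8, 16), (16, 1), "aten.addmm.default")
```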
Added unit tests inside [test_torchinductor.py](https://github.com/pytorch/pytorch/blob/main/test/inductor/test_torchinductor.py).
- Verified both successful and failing assertion cases include the operator name.
- Verified that generated Triton code contains the op name inside the asserts.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @shunting314 @eellison
| true
|
3,025,782,716
|
Provide list of files to link linters if desired
|
shoumikhin
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
| null | true
|
3,025,711,953
|
Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit
|
anantwag19
|
closed
|
[
"needs reproduction",
"module: mps"
] | 3
|
NONE
|
### 🐛 Describe the bug
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/transformers/trainer.py", line 2560, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/transformers/trainer.py", line 3782, in training_step
self.accelerator.backward(loss, **kwargs)
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/accelerate/accelerator.py", line 1964, in backward
loss.backward(**kwargs)
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/torch/_tensor.py", line 624, in backward
torch.autograd.backward(
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: MPS backend out of memory (MPS allocated: 11.14 GB, other allocations: 6.65 GB, max allowed: 18.13 GB). Tried to allocate 375.40 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
0%| | 1/1020 [00:10<3:01:25, 10.68s/it]
### Versions
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/transformers/trainer.py", line 2560, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/transformers/trainer.py", line 3782, in training_step
self.accelerator.backward(loss, **kwargs)
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/accelerate/accelerator.py", line 1964, in backward
loss.backward(**kwargs)
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/torch/_tensor.py", line 624, in backward
torch.autograd.backward(
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/opt/anaconda3/envs/huggingface/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: MPS backend out of memory (MPS allocated: 11.14 GB, other allocations: 6.65 GB, max allowed: 18.13 GB). Tried to allocate 375.40 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
0%| | 1/1020 [00:10<3:01:25, 10.68s/it]
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,025,685,570
|
Add latex settings
|
svekars
|
closed
|
[
"module: docs",
"Merged",
"ciflow/trunk",
"topic: docs",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
- Fixes #147027
- Only lualatex can build our 3K-page PDF with reasonable quality; xelatex runs out of memory and pdflatex just fails (see the conf.py sketch below).
- Move notes under the same toctree as python-api, which is needed for the PDF but doesn't change how the HTML is generated.
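A minimal sketch of the relevant Sphinx setting (`latex_engine` is a standard Sphinx option; the real conf.py change may include more):
```python
# docs/source/conf.py (sketch)
latex_engine = "lualatex"  # xelatex runs out of memory and pdflatex fails on the ~3K-page build
```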
This is the produced PDF:
[pytorch.pdf](https://github.com/user-attachments/files/19945450/pytorch.pdf)
cc @sekyondaMeta @AlannaBurke
| true
|
3,025,671,776
|
DISABLED test_e2e_compile_True_model_type1 (__main__.TestE2ESaveAndLoad)
|
jithunnair-amd
|
open
|
[
"module: rocm",
"triaged",
"skipped"
] | 1
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it's failing on the [MI300 runners](https://hud.pytorch.org/failure?name=periodic-rocm-mi300%20%2F%20linux-focal-rocm-py3.10%20%2F%20test%20(distributed%2C%201%2C%203%2C%20linux.rocm.gpu.mi300.4.test-2%2C%20module%3Arocm%2C%20oncall%3Adistributed)&jobName=linux-focal-rocm-py3.10%20%2F%20test%20(distributed%2C%201%2C%203%2C%20linux.rocm.gpu.mi300.4.test-2%2C%20module%3Arocm%2C%20oncall%3Adistributed)&failureCaptures=distributed%2Fcheckpoint%2Fe2e%2Ftest_e2e_save_and_load.py%3A%3ATestE2ESaveAndLoad%3A%3Atest_e2e_compile_True_model_type1)
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,025,656,863
|
[WIP] DeadCodeEliminator Mark(block) improvement
|
shinyehtsai
|
open
|
[
"oncall: jit",
"fb-exported",
"release notes: jit"
] | 10
|
NONE
|
Summary:
This diff seeks to optimize the DeadCodeEliminator, specifically the mark(block) function.
The primary idea is to avoid redundant traversals of a fully marked block, particularly in the markLoop scenario: once all nodes within a block are marked, the block itself can be recorded as fully marked and skipped on subsequent passes.
Test Plan: Existing unit tests. Will add new ones soon.
Differential Revision: D73476431
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,025,654,637
|
torch.nonzero_static is not documented on the website
|
albanD
|
open
|
[
"module: docs",
"triaged"
] | 4
|
COLLABORATOR
|
See https://pytorch.org/docs/main/search.html?q=nonzero_static
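For reference, a minimal usage example of the op that is missing from the rendered docs (the expected output follows from the `size`/`fill_value` semantics, so treat it as a sketch):
```python
import torch

x = torch.tensor([0, 1, 0, 2])
# Returns a fixed-size (size, ndim) result, padded with fill_value (default -1).
print(torch.nonzero_static(x, size=4))
# expected: tensor([[ 1], [ 3], [-1], [-1]])
```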
cc @svekars @sekyondaMeta @AlannaBurke @ngimel
| true
|
3,025,653,907
|
compile generates inefficient code for mutations on small slices of inputs
|
bdhirsh
|
open
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
easy repro from slack:
```
import torch
def plus_one(x):
    x[0].add_(1.0)  # in-place add on a slice of the input (add_, not add, which would be a no-op)
    return x
x_og = torch.randn(32 * 1024, 1024, device="cuda", dtype=torch.float32)
x = x_og.clone()
plus_one(x)
plus_one_compiled = torch.compile(plus_one)
x = x_og.clone()
plus_one_compiled(x)
```
If you run the above with `TORCH_LOGS="output_code"`, you will see two sources of inefficiency:
(1) the mutation above happens on a slice of x, but the generated code involves a write to *all* of x (`x.copy_(x_updated)`).
This is due to a limitation of input mutation handling in AOTDispatcher. We have support for input mutations, but AOTDispatcher only has access to the graph inputs themselves. When we detect an input was mutated, we issue an entire `old_inp.copy_(new_inp)` node
(2) there are two kernels in the output, not a single kernel. This is a consequence of (1) that @eellison pointed out. If we can change the graph to only issue a copy on the input slice, then it should be easier for inductor to fuse the `add()` and `copy_()` into a single kernel
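A hedged illustration of the difference, written as plain eager code that approximates the behavior (not the actual generated wrapper):
```python
import torch

x = torch.randn(32 * 1024, 1024, device="cuda")

# (1) roughly what the compiled epilogue does today: compute into a full-size
# buffer, then overwrite the *entire* input.
x_updated = x.clone()
x_updated[0] += 1.0
x.copy_(x_updated)        # full-tensor write: 32K x 1024 elements

# What a slice-aware epilogue could do instead, touching only the mutated row
# (and making it easy to fuse the add and the copy into a single kernel).
x[0].copy_(x[0] + 1.0)
```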
cc @chauhang @penguinwu
| true
|
3,025,649,684
|
Magma build for Docker build
|
atalman
|
closed
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,025,623,679
|
MPS SDPA `float32` memory leak
|
SalmanMohammadi
|
closed
|
[
"module: memory usage",
"triaged",
"module: mps"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
# usage python test_mps_leak.py {dtype}
import sys
import torch
import torch.nn.functional as F
def get_mps_memory_usage():
    current_allocated = torch.mps.current_allocated_memory() / (1024 * 1024)
    driver_allocated = torch.mps.driver_allocated_memory() / (1024 * 1024)
    return current_allocated, driver_allocated


if __name__ == "__main__":
    args = sys.argv[1:]
    if args[0] == "float16":
        dtype = torch.float16
    elif args[0] == "bfloat16":
        dtype = torch.bfloat16
    elif args[0] == "float32":
        dtype = torch.float32

    device = torch.device("mps")
    batch_size = 4
    seq_len = 1024
    num_heads = 8
    head_dim = 512

    print(f"\nInitial memory usage:")
    current_mem, driver_mem = get_mps_memory_usage()
    print(f" Current Allocated: {current_mem:.2f} MB")
    print(f" Driver Allocated: {driver_mem:.2f} MB")
    print("-" * 30)

    query = torch.randn(batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype)
    key = torch.randn(batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype)
    value = torch.randn(batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype)

    print(f"Memory after tensor creation:")
    current_mem, driver_mem = get_mps_memory_usage()
    print(f" Current Allocated: {current_mem:.2f} MB")
    print(f" Driver Allocated: {driver_mem:.2f} MB")
    print("-" * 30)

    iterations = 100
    for i in range(iterations):
        output = F.scaled_dot_product_attention(query, key, value)
        if (i + 1) % 10 == 0:
            current_mem, driver_mem = get_mps_memory_usage()
            print(f"Iteration {i + 1}/{iterations} - Memory Usage:")
            print(f" Current Allocated: {current_mem:.2f} MB")
            print(f" Driver Allocated: {driver_mem:.2f} MB")

    print("\nFinished.")
    print("Final memory usage:")
    current_mem, driver_mem = get_mps_memory_usage()
    print(f" Current Allocated: {current_mem:.2f} MB")
    print(f" Driver Allocated: {driver_mem:.2f} MB")
```
This script demonstrates a memory leak when using SDPA on MPS, which only occurs with `float32` tensors. I found that `torch==2.4.0` was the last stable release in which this issue does not occur.
### Versions
```bash
2025-04-28 18:19:05 (22.2 MB/s) - ‘collect_env.py’ saved [24497/24497]
Collecting environment information...
PyTorch version: 2.8.0.dev20250428
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.8 (main, Jan 5 2025, 06:55:30) [Clang 19.1.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] torch==2.8.0.dev20250428
[pip3] torchao==0.10.0+cpu
[pip3] torchaudio==2.6.0.dev20250428
[pip3] torchdata==0.11.0
[pip3] torchtune==0.0.0
[pip3] torchvision==0.22.0.dev20250428
[conda] No relevant packages
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,025,622,097
|
cudagraphs: `static_input_indices` incorrectly including SymInt graph args when using tensor subclasses + dynamic shapes
|
bdhirsh
|
open
|
[
"triaged",
"module: cuda graphs",
"module: __torch_dispatch__",
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
See the comment here for more details: https://github.com/pytorch/pytorch/pull/152287/files#r2064120003
cc @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @Chillee @zou3519 @albanD @samdow @chauhang @bobrenjc93
| true
|
3,025,611,482
|
[Memento] Enable on-demand mode
|
mzzchy
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Summary:
# Context
Post: https://fb.workplace.com/groups/ai.efficiency.tools.users/permalink/2020094788475989/
On the CUDA side, Memento enables an on-demand mode to trace a remote process without requiring code changes. In this diff, we want to enable the same feature here.
Overall, we follow the same approach and leverage the Kineto-Dyno integration to trigger tracing on the remote process. We just need to implement our own Python tracer and register it based on the available device.
Test Plan:
# Local Test
Check next diff.
P1797938330
```
https://www.internalfb.com/pytorch_memory_visualizer/mtia_traces/tree/gpu_traces/dynocli/0/1745855907/localhost/memory_snapshot_3215421.pickle
```
{F1977499533}
# Remote Test
Differential Revision: D73680135
| true
|
3,025,582,293
|
[ROCm][Inductor][CK] Add ck-tile based universal gemm kernels to torch.mm autotune choices
|
tenpercent
|
open
|
[
"module: rocm",
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 7
|
COLLABORATOR
|
This PR adds code generation for CK-tile based universal gemm kernels to the CK backend for Inductor, and adds these kernels to autotune choices.
Unlike legacy-CK based kernels (which are generated by parsing the CK instances from CK library), we generate the set of instances by manually specifying the tuning parameters.
This PR introduces a new template for code generation, and compilation/autotuning is handled by the existing infrastructure.
Points of discussion:
* For simplicity and reduced coupling with CK, the instance filter checks only data type and layout, and doesn't check the alignment requirement - meaning that more instances will be compiled than necessary - while keeping the code generation independent from internal CK logic which checks the alignment validity at runtime
* CK-tile instances are enabled whenever legacy-CK instances are enabled. A config knob could be introduced to differentiate between the instance types if that's needed
* Whether gemm problem size K is ever dynamic, since whenever it's not a compile-time constant, we need to perform a runtime dispatch between several kernels
**Testing**
Use the existing tests in `test/inductor/test_ck_backend.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
cc @zjing14 @coconutruben @ThomasNing @amd-khushbu
| true
|
3,025,572,654
|
Add a label to skip URL lint if needed
|
shoumikhin
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"skip-url-lint"
] | 6
|
CONTRIBUTOR
|
Some URLs may be down due to server side issues we can't control
| true
|
3,025,508,291
|
[ROCm] Unskipped test_rnn_dropout_state for ROCm
|
iupaikov-amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 9
|
CONTRIBUTOR
|
Unskipping the test, should work fine now.
Related PR: https://github.com/pytorch/pytorch/pull/144572
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|