| id<br>(int64, 2.74B–3.05B) | title<br>(string, 1–255 chars) | user<br>(string, 2–26 chars) | state<br>(string, 2 classes) | labels<br>(list, 0–24 items) | comments<br>(int64, 0–206) | author_association<br>(string, 4 classes) | body<br>(string, 7–62.5k chars, ⌀ = null) | is_title<br>(bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
3,006,122,585
|
[Easy] Fix the compilation warning of BlasKernel.
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 16
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151736
As the title states.
Before the change:
```C++
[2/21] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/BlasKernel.cpp.o
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/BlasKernel.cpp:346:6: warning: ‘void at::native::blas_impl::gemv_fast_path(const char*, const int*, const int*, const scalar_t*, const scalar_t*, const int*, const scalar_t*, const int*, const scalar_t*, scalar_t*, const int*) [with scalar_t = c10::Half]’ defined but not used [-Wunused-function]
346 | void gemv_fast_path<at::Half>(
| ^~~~~~~~~~~~~~~~~~~~~~~~
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/BlasKernel.cpp:329:6: warning: ‘bool at::native::blas_impl::gemv_use_fast_path(char, int64_t, int64_t, scalar_t, int64_t, int64_t, scalar_t, int64_t) [with scalar_t = c10::Half]’ defined but not used [-Wunused-function]
329 | bool gemv_use_fast_path<at::Half>(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/BlasKernel.cpp:301:6: warning: ‘void at::native::blas_impl::gemv_fast_path(const char*, const int*, const int*, const scalar_t*, const scalar_t*, const int*, const scalar_t*, const int*, const scalar_t*, scalar_t*, const int*) [with scalar_t = c10::BFloat16]’ defined but not used [-Wunused-function]
301 | void gemv_fast_path<at::BFloat16>(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/BlasKernel.cpp:273:6: warning: ‘bool at::native::blas_impl::gemv_use_fast_path(char, int64_t, int64_t, scalar_t, int64_t, int64_t, scalar_t, int64_t) [with scalar_t = c10::BFloat16]’ defined but not used [-Wunused-function]
273 | bool gemv_use_fast_path<at::BFloat16>(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
| true
|
3,006,111,862
|
[inductor] [silent incorrectness] `.flatten()-torch.vdot` outputs incorrect results when meeting edge case inputs
|
shaoyuyoung
|
closed
|
[
"oncall: pt2",
"module: inductor"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `.flatten()-torch.vdot` outputs incorrect results for **edge-case inputs** on **cuda**. The **0.1** in `inputs` is the key factor in the incorrect calculation.
**device backend**: only triton
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x_flat = x.flatten()[0:5]
y = torch.ones_like(x_flat)
x = torch.vdot(x_flat, y)
return x
model = Model()
x = torch.tensor([[0.0001, 1000000.0], [-1000000.0, 0.1]]) # 0.1 is the key factor
inputs = [x]
def run_test(model, inputs, device, backend):
torch.manual_seed(0)
model = model.to(device)
inputs = [x.to(device) for x in inputs]
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
device = 'cuda'
output = run_test(model, inputs, device, 'eager')
c_output = run_test(model, inputs, device, 'inductor')
print(output)
print(c_output)
```
### Error logs
CPP
```
tensor(0.1250)
tensor(0.1250)
```
triton
```
tensor(0.1250, device='cuda:0')
tensor(0.1000, device='cuda:0')
```
### Versions
nightly 20250414
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @ezyang @gchanan @zou3519 @msaroufim
| true
|
3,006,104,536
|
[inductor] [cuda] [silent incorrectness] `torch.ormqr-torch.index_add` outputs incorrect calculation when meeting internal `indices` and `values`
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `torch.ormqr-torch.index_add` outputs incorrect results when `indices` and `values` are created **internally** (inside `forward`). If **external** `indices` and `values` are used (i.e., passed in via `inputs`), the calculation is correct.
**device backend**: triton
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input_a, input_tau, input_tensor):
output = torch.ormqr(input_a, input_tau, input_tensor, left=True, transpose=False)
indices = torch.tensor([0, 2], dtype=torch.long).to(output.device)
values = torch.randn(2, output.size(1), output.size(2)).to(output.device)
output = torch.index_add(output.to(input_a.dtype), 0, indices, values.to(input_a.dtype))
return output
model = Model()
input_a = torch.randn(3, 5, 5)
input_tau = torch.randn(3, 5)
input_tensor = torch.randn(3, 5, 2)
inputs = [input_a, input_tau, input_tensor]
def run_test(model, inputs, device, backend):
torch.manual_seed(0)
model = model.to(device)
inputs = [x.to(device) for x in inputs]
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
output = run_test(model, inputs, 'cuda', 'eager')
c_output = run_test(model, inputs, 'cuda', 'inductor')
print(torch.allclose(output, c_output, rtol=1e-3, atol=1e-3))
print(torch.max(torch.abs(c_output - output)))
fp64 = run_test(model.to(dtype=torch.float64), [x.to(dtype=torch.float64) for x in inputs], 'cuda', 'eager')
print(torch._dynamo.utils.same(output, c_output, fp64))
```
### Error logs
CPP
```
True
tensor(0.)
True
```
triton
```
False
tensor(3.1147, device='cuda:0')
E0419 10:29:43.045000 1586594 site-packages/torch/_dynamo/utils.py:2930] RMSE (res-fp64): 1.15523, (ref-fp64): 0.00000 and shape=torch.Size([3, 5, 2]). res.dtype: torch.float32, multiplier: 3.000000, tol: 0.000100, use_larger_multiplier_for_smaller_tensor: 0
False
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
3,006,094,694
|
Add explicit type info in the try-catch for dynamo logging
|
houseroad
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 16
|
MEMBER
|
Differential Revision: D73295871
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,006,079,281
|
[Testing] Make test_add_complex3 run on different devices
|
malfet
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 8
|
CONTRIBUTOR
|
Do this by constructing the tensor on that device, because the test does not call `self.common` but rather executes directly.
Otherwise `test_add_complex3_mps` would test the CPU inductor rather than the MPS one; a sketch of the pattern is shown below.
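A hedged sketch of the pattern (a hypothetical test body assuming the inductor test-class context with `self.device`, not the actual `test_add_complex3`):
```python
# Hypothetical illustration only -- not the real test. Constructing the inputs
# on self.device (instead of defaulting to CPU) is what lets the _mps variant
# exercise the MPS inductor backend.
def test_add_complex3(self):
    x = torch.tensor([1 + 1j, -1 + 1j, -2 - 2j], device=self.device)
    y = torch.tensor([2 - 1j, 1 + 1j, 0 + 2j], device=self.device)
    compiled = torch.compile(lambda a, b: a + b)
    self.assertEqual(compiled(x, y), x + y)
```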
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,006,020,000
|
[ca] mark scalar int sizes as dynamic via tensor wrapping
|
xmfan
|
open
|
[
"Merged",
"Reverted",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compiled autograd",
"ci-no-td"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151860
* #152119
* #151962
* __->__ #151731
This is the only way to support dynamic shapes on scalars right now.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,996,151
|
[Inductor] Update should_decompose_mm condition for CPU
|
hl475
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
Similar to what we did previously in D70033166
Previously, for CPU we decomposed addmm if
```
check_device(mat1, mat2, device="cpu")
and statically_known_true(mat1.shape[0] == 1)
and statically_known_true(mat2.shape[0] <= 64)
and statically_known_true(mat2.shape[1] <= 512)
```
We have a new case where `mat1.shape[0] = 80`, and benchmarks show that it would be beneficial to decompose, so update the condition to
```
check_device(mat1, mat2, device="cpu")
and statically_known_true(mat1.shape[0] == 1)
and statically_known_true(mat2.shape[0] <= 128)
and statically_known_true(mat2.shape[1] <= 512)
```
Differential Revision: D73292985
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,986,284
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 18
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
3,005,986,132
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 21
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
3,005,981,547
|
[ROCm] Maxpool forward NHWC Perf Improvement targeting Resnet scenarios
|
amd-hhashemi
|
open
|
[
"module: rocm",
"triaged",
"open source",
"ciflow/trunk",
"release notes: rocm",
"ciflow/rocm"
] | 10
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,005,961,196
|
Exported Module cannot call train() or eval()
|
supercharleszhu
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hi team, we are trying to load an exported module for continuous training and evaluation, but it seems that we cannot call `module.train()` or `module.eval()`. Do you have any suggestions on the best way to support this, or a workaround for this issue?
To reproduce:
```python
saved_exported_program = torch.export.load(f"{checkpoint_path}/exported_model.pt")
internal_model = saved_exported_program.module()
internal_model = internal_model.to(device)
model = DDP(internal_model, device_ids=[device])
model.eval()
```
stacktrace
```
[rank0]: Traceback (most recent call last):
[rank0]: File "train.py", line 809, in <module>
[rank0]: train(metric_logging_function=expt.log_metrics, **args_dict)
[rank0]: File "train.py", line 499, in train
[rank0]: model.eval()
[rank0]: File "/home/jobuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2865, in eval
[rank0]: return self.train(False)
[rank0]: File "/home/jobuser/.local/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1663, in train
[rank0]: super().train(mode)
[rank0]: File "/home/jobuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2846, in train
[rank0]: module.train(mode)
[rank0]: File "/home/jobuser/.local/lib/python3.10/site-packages/torch/export/exported_program.py", line 965, in _train
[rank0]: raise NotImplementedError("Calling train() is not supported yet.")
[rank0]: NotImplementedError: Calling train() is not supported yet.
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1.20+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: CBL-Mariner/Linux (x86_64)
GCC version: (GCC) 11.2.0
Clang version: Could not collect
CMake version: version 3.21.4
Libc version: glibc-2.35
Python version: 3.10.14 (main, Jul 14 2024, 22:24:12) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.1-1.cm2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.7
/usr/lib/libcudnn_adv_infer.so.8.9.7
/usr/lib/libcudnn_adv_train.so.8.9.7
/usr/lib/libcudnn_cnn_infer.so.8.9.7
/usr/lib/libcudnn_cnn_train.so.8.9.7
/usr/lib/libcudnn_ops_infer.so.8.9.7
/usr/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9554 64-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3762.9880
CPU min MHz: 1500.0000
BogoMIPS: 6200.05
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.5.1.20+cu126
[pip3] torchmetrics==1.7.1
[pip3] torchvision==0.20.1.4+cu126
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,005,958,976
|
run lintrunner for Export d68846308
|
Camyll
|
closed
|
[
"module: lint",
"Merged",
"ciflow/trunk",
"topic: deprecation",
"topic: not user facing",
"oncall: fx",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 3
|
CONTRIBUTOR
|
fixes broken lint tests in https://github.com/pytorch/pytorch/pull/151481
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,955,277
|
[ROCm] AtomicAdd specialization on AMD for fp64.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: not user facing"
] | 14
|
COLLABORATOR
|
Fixes https://github.com/pytorch/pytorch/issues/151039
Improve scatter add performance on MI250X.
Some numbers from the reporter's benchmark:
```
Before: dtype torch.float64 time = 3.577979326248169
After: dtype torch.float64 time = 0.0031385421752929688
```
No perf improvement on MI300 or MI100.
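A hedged sketch of a scatter-add timing loop in the spirit of the reporter's benchmark (the actual benchmark is in #151039; sizes here are assumptions):
```python
import time

import torch

dst = torch.zeros(1_000_000, dtype=torch.float64, device="cuda")
idx = torch.randint(0, dst.numel(), (10_000_000,), device="cuda")
src = torch.rand(idx.shape, dtype=torch.float64, device="cuda")

torch.cuda.synchronize()
start = time.time()
dst.scatter_add_(0, idx, src)  # fp64 atomicAdd is the hot path this PR targets
torch.cuda.synchronize()
print(f"dtype {dst.dtype} time = {time.time() - start}")
```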
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
3,005,942,011
|
Add api to enable/disable NaN detector per-PG
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151723
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k @pavanbalaji
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
3,005,939,919
|
Add maxcount Parameter to torch.unique and torch.unique_consecutive
|
cora-codes
|
open
|
[
"module: performance",
"triaged",
"enhancement",
"needs design",
"module: python frontend"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
My hope is that this feature would prevent a synchronization, let both operations compose more easily with `torch.compile`, and mirror the escape hatch provided by `torch.bincount`.
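A hedged sketch of how the requested argument might be used; `maxcount` is hypothetical (it does not exist today), and the analogy is the statically sized output that `minlength` gives `torch.bincount`:
```python
import torch

x = torch.randint(0, 10, (1000,), device="cuda")

# Today: the number of unique values is data-dependent, so the output shape
# forces a device-to-host sync and dynamic shapes under torch.compile.
vals, counts = torch.unique(x, return_counts=True)

# Proposed (hypothetical API, not implemented): bound the output size up front,
# mirroring the static-shape escape hatch of torch.bincount(..., minlength=10).
# vals, counts = torch.unique(x, return_counts=True, maxcount=10)
```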
### Alternatives
_No response_
### Additional context
_No response_
cc @msaroufim @jerryzh168 @albanD
| true
|
3,005,939,754
|
[Benchmarking] Enable HF_GPT2 benchmarking on Metal
|
malfet
|
closed
|
[
"Merged",
"release notes: releng",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
By building the wheel with USE_DISTRIBUTED=1.
Otherwise an attempt to run
```
python3 benchmarks/dynamo/torchbench.py --performance --only hf_T5 --backend inductor --inference --devices mps
```
will fail with
```
File "/Users/nshulga/Library/Python/3.10/lib/python/site-packages/transformers/modeling_utils.py", line 40, in <module>
import torch.distributed.tensor
File "/Users/nshulga/git/pytorch/pytorch/torch/distributed/tensor/__init__.py", line 4, in <module>
import torch.distributed.tensor._ops # force import all built-in dtensor ops
File "/Users/nshulga/git/pytorch/pytorch/torch/distributed/tensor/_ops/__init__.py", line 2, in <module>
from ._conv_ops import * # noqa: F403
File "/Users/nshulga/git/pytorch/pytorch/torch/distributed/tensor/_ops/_conv_ops.py", line 5, in <module>
from torch.distributed.tensor._dtensor_spec import DTensorSpec, TensorMeta
File "/Users/nshulga/git/pytorch/pytorch/torch/distributed/tensor/_dtensor_spec.py", line 6, in <module>
from torch.distributed.tensor.placement_types import (
File "/Users/nshulga/git/pytorch/pytorch/torch/distributed/tensor/placement_types.py", line 8, in <module>
import torch.distributed._functional_collectives as funcol
File "/Users/nshulga/git/pytorch/pytorch/torch/distributed/_functional_collectives.py", line 9, in <module>
import torch.distributed.distributed_c10d as c10d
File "/Users/nshulga/git/pytorch/pytorch/torch/distributed/distributed_c10d.py", line 23, in <module>
from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
```
| true
|
3,005,939,496
|
Maxpool Perf Improvement targeting resnet scenarios
|
amd-hhashemi
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)",
"release notes: inductor (aoti)"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mingfeima @XiaobingSuper @ashokei @jingxu10 @jerryzh168 @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,919,836
|
flex attention: fix dispatch order for tensor subclasses, avoid hardcoding call to faketensor impl in dynamo
|
bdhirsh
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
This is enough to get @XilunWu 's stack in a state where his flex_attention DTensor implementations worked E2E for me. It also required these changes on the DTensor side, to properly add a DTensor rule for flex backward: P1789852198
There are two problems:
(1) in the normal dispatcher, we have a precedence ordering between modes and subclasses. Modes are dispatched to first, but modes are allowed to return NotImplemented, giving subclasses a chance to run.
This normally happens automatically in `FakeTensorMode.__torch_dispatch__` and `FunctionalTensorMode.__torch_dispatch__`. However, since HOPs implement these two modes themselves, HOPs do not get this benefit. For now, I ended up hardcoding this `NotImplemented` logic directly into the functional/fake rules for flex attention.
Having to do this for every HOP seems a bit painful. If we could plumb every HOP through `Fake[|Functional]TensorMode.__torch_dispatch__` then we would get this support. Another option could be to just assume that most HOP <> mode implementations want the same treatment by default, and hardcode this `NotImplemented` logic into `torch/_ops.py`. I'm not sure if we'd need a way for the HOP to opt out of this though.
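A minimal, self-contained sketch of the mode-declines-to-subclass precedence described in (1) (this is the generic `TorchDispatchMode` mechanism, not the flex attention HOP code):
```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode


class DeclineToSubclassesMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # If a tensor subclass is involved, return NotImplemented so the
        # subclass's __torch_dispatch__ gets a chance to desugar the op first.
        if any(issubclass(t, torch.Tensor) and t is not torch.Tensor for t in types):
            return NotImplemented
        return func(*args, **kwargs)


with DeclineToSubclassesMode():
    torch.add(torch.ones(2), torch.ones(2))  # plain tensors: the mode handles it
```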
(2) We were hardcoding a call to flex attention's fake implementation in dynamo to run fake prop. This is technically wrong for subclasses, because it doesn't give subclasses the chance to interpose on the op and desugar it before fake prop runs. I tweaked dynamo's logic to call the op, and let the dispatcher handle invoking the fake implementation.
**Testing** Xilun is adding some DTensor tests in his PR that will end up testing this logic. If folks would prefer, though, I can try to add a test that uses another subclass instead that is maybe more basic.
This is the tlparse that his DTensor test generated for me: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/hirsheybar/0196c1d3-a9a2-46ea-a46d-aa21618aa060/custom/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151719
* #152688
* #152195
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,005,903,996
|
[WIP][draft_export] suppress pending unbacked for divisibility symbol
|
pianpwk
|
open
|
[
"fb-exported",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D73287751
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,005,874,014
|
[NJT] Dropout with p=0 still drops values
|
imh
|
open
|
[
"triaged",
"module: nestedtensor"
] | 3
|
NONE
|
I'm trying to track down discrepancies where an NJT version of my model diverges in training while the padded version converges nicely.
I found a discrepancy in dropout between NJT and normal tensors that was interfering with reproducibility efforts: Setting dropout to 0 does not prevent dropout from happening with NJTs.
Here's as minimal a repro as I've been able to make:
```python
from collections import Counter
import torch
import torch.nn.functional as F
# set up data and state
values = torch.randn((6598, 384), device='cuda', dtype=torch.float32)
offsets = torch.arange(129, device='cuda', dtype=torch.int64)
offsets[-1] = values.shape[0]
rng_state = torch.tensor([0, 0, 0, 0, 0, 0, 0, 0, 176, 0, 0, 0, 0, 0, 0, 0], dtype=torch.uint8)
torch.cuda.set_rng_state(rng_state)
# before and after dropout
before = torch.nested.nested_tensor_from_jagged(values, offsets)
after = F.dropout(before, p=0.0)
# expect them to be the same, but they aren't
print(f"{(before != after).sum()=}") # 1 discrepancy
idx = (before != after)._values.nonzero().squeeze(0) # at [189, 342], an insignificant location
print(f"{before._values[*idx]=}") # 1.3 before
print(f"{after._values[*idx]=}") # 0 after
```
It happens in about 7% of calls with NJT and in exactly 0% with normal tensors:
```python
# happens about 7% of the runs
discrepancies = Counter()
for i in range(10_000):
after = F.dropout(before, p=0.0)
num_mismatched = (before != after).sum().item()
discrepancies[num_mismatched] += 1
print(f"{discrepancies=}") # {0: 9275, 1: 707, 2: 18}
# but its limited to NJTs
discrepancies = Counter()
for i in range(10_000):
after = F.dropout(before._values, p=0.0)
num_mismatched = (before._values != after).sum().item()
discrepancies[num_mismatched] += 1
print(f"{discrepancies=}") # {0: 10000}
```
I'd be more than happy to help fix it if someone could guide me a bit on the C++ side. I haven't ventured into that side of torch.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 565.57.01
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
CPU family: 6
Model: 94
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 3
CPU(s) scaling MHz: 96%
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7999.96
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] Could not collect
[conda] Could not collect
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
3,005,869,583
|
[symmem] Add some code comments to rendezvous code
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151716
While reading and learning the rendezvous code, I wanted to add some comments to explain it.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
3,005,869,142
|
Use gather in index_select
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151715
| true
|
3,005,859,330
|
[ONNX] Auto-fallback in torch.onnx.export(..., dynamo=True) with deprecation warning
|
titaiwangms
|
open
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 0
|
COLLABORATOR
|
In the current implementation of `torch.onnx.export(..., dynamo=True)`, certain TorchScript-based operations are not supported. To ensure a smooth user experience, we should implement an automatic fallback mechanism with a deprecation warning. This fallback should handle cases where only the TorchScript-based exporter can process the export, such as:
1. When the input model is a `torch.jit.ScriptModule` or `torch.jit.ScriptFunction`, as these are specifically designed for TorchScript.
2. When `custom_opsets` is provided, since `torch.onnx.export(..., dynamo=True)` uses `custom_translation_table` instead.
3. When `dynamic_axes` is provided, because the automatic conversion of `dynamic_axes` to `dynamic_shapes` does not cover all scenarios.
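A hedged sketch of the intended control flow (the wrapper name and exact conditions are illustrative; the real logic would live inside `torch.onnx.export` itself):
```python
import warnings

import torch


def export_with_fallback(model, args, f, **kwargs):
    # Illustrative only: the conditions mirror the three cases listed above.
    needs_torchscript_exporter = (
        isinstance(model, (torch.jit.ScriptModule, torch.jit.ScriptFunction))
        or kwargs.get("custom_opsets") is not None
        or kwargs.get("dynamic_axes") is not None
    )
    if needs_torchscript_exporter:
        warnings.warn(
            "Falling back to the deprecated TorchScript-based exporter",
            DeprecationWarning,
        )
        return torch.onnx.export(model, args, f, dynamo=False, **kwargs)
    return torch.onnx.export(model, args, f, dynamo=True, **kwargs)
```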
| true
|
3,005,820,553
|
[Cutlass] Only run EVT tests on sm90
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #152390
* #150909
* #150908
* #150907
* #151406
* #150906
* __->__ #151713
* #151405
* #150905
* #152306
* #152305
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,811,791
|
DISABLED test_cublas_addmm_reduced_precision_fp16_accumulate_size_100_cuda_float16 (__main__.TestMatmulCudaCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: linear algebra",
"skipped"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cublas_addmm_reduced_precision_fp16_accumulate_size_100_cuda_float16&suite=TestMatmulCudaCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40792474674).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cublas_addmm_reduced_precision_fp16_accumulate_size_100_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_matmul_cuda.py", line 166, in test_cublas_addmm_reduced_precision_fp16_accumulate
self.cublas_addmm(size, dtype, False, True)
File "/var/lib/jenkins/workspace/test/test_matmul_cuda.py", line 128, in cublas_addmm
res_cuda = torch.addmm(m_input, m_1, m_2, beta=m_beta.item())
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Cdesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
To execute this test, run the following from the base repo dir:
python test/test_matmul_cuda.py TestMatmulCudaCUDA.test_cublas_addmm_reduced_precision_fp16_accumulate_size_100_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_matmul_cuda.py`
cc @clee2000 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,005,790,183
|
Remove unnecessary recompile
|
tugsbayasgalan
|
open
|
[
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Summary: Title
Test Plan: CI
Differential Revision: D73277480
| true
|
3,005,783,617
|
[c10] add #pragma once to leftright
|
dolpm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 17
|
CONTRIBUTOR
|
Summary: I am getting duplicate definitions when including this header in a binary that already includes the dispatcher.
Test Plan: CI
Differential Revision: D73237748
| true
|
3,005,782,045
|
[provenance_tracking][reland] Fix UT error and re-land `ExternKernel` support
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 10
|
CONTRIBUTOR
|
Summary:
ATT.
Reverted previous diff: D72572050
Test Plan:
```
TORCH_LOGS="+inductor, output_code" buck2 run -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=h100 @//mode/opt fbcode//caffe2/test/inductor:provenance_tracing -- -r test_triton_kernel_to_post_grad_tracing_extern_kernel
```
Differential Revision: D73281217
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,746,511
|
[DO NOT MERGE] Test new mi300 node capacity.
|
saienduri
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"topic: not user facing",
"keep-going",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300",
"ciflow/periodic-rocm-mi300"
] | 9
|
CONTRIBUTOR
|
This commit merely tests additional MI300 node capacity.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,005,740,744
|
difficulty creating magma tarball when new rocm or cuda versions are deployed
|
jeffdaily
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"better-engineering"
] | 4
|
COLLABORATOR
|
There is a chicken/egg problem with magma tarballs. Building magma for rocm or cuda is done in the manylinux image, for example:
pytorch/manylinux2_28-builder:rocm${DESIRED_CUDA}-main
but this image is built using a Dockerfile that calls install_magma.sh (for CUDA) or install_rocm_magma.sh. These scripts just fetch the tarball. Magma needs the image to exist in order to build the tarball, but for the image to build properly it needs the magma tarball. It's a circular dependency.
The recent ROCm 6.4 upgrade required 3 PRs in sequence to update the magma packages. PR 1 created the new builder image but temporarily allowed the magma tarball fetch to fail with a warning. PR 2 updated the magma workflows to add the new ROCm version. PR 3 reverted the changes from 1 and 2 while also updating the GHA nightly wheel workflows to build rocm 6.4.
1. https://github.com/pytorch/pytorch/pull/151236
2. https://github.com/pytorch/pytorch/pull/151345
3. https://github.com/pytorch/pytorch/pull/151355
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,005,732,745
|
Use folder tagged docker images for binary builds
|
clee2000
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Should be the last part of https://github.com/pytorch/pytorch/pull/150558, except for maybe the s390x stuff, which I'm still not sure about.
For binary builds, do the same thing we do in CI and tag each image with a hash of the .ci/docker folder, to ensure a docker image built from that commit gets used. Previously it would use imagename:arch-main, which could be a version of the image based on an older commit.
After this, changing a docker image and then tagging with ciflow/binaries on the same PR should use the new docker images.
Release and main builds should still pull from docker.io.
Cons:
* if someone rebuilds the image from main or a PR where the hash is the same (e.g. the folder is unchanged but the docker build is retriggered for some reason), the release would use that image instead of one built on the release branch
* spin wait for docker build to finish
| true
|
3,005,727,697
|
Enable TorchInductor to Generate Matmuls Natively via `tl.dot`
|
nullplay
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 8
|
NONE
|
### 🚀 The feature, motivation and pitch
TorchInductor currently relies on hand-written templates for matrix multiply variants, such as:
- [`bmm.py`](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/kernel/bmm.py)
- [`mm.py`](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/kernel/mm.py)
- [`mm_plus_mm.py`](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/kernel/mm_plus_mm.py)
While these templates are effective, they make it difficult to fuse surrounding operations, even though Inductor supports prologue/epilogue fusion (see https://github.com/pytorch/pytorch/issues/142315).
### Proposed Feature
This proposal enables Inductor to generate performant matrix multiplication kernels directly, without relying on the hand-written templates. A prototype implementation is available here:
https://github.com/pytorch/pytorch/compare/main...nullplay:pytorch:jaeyeon_fuse
### Implementation Overview
#### 1. Emit Tensor Core `tl.dot` for Matrix Multiply Patterns
When Inductor is forced to generate a matmul kernel using `(A.unsqueeze(2) * B.unsqueeze(0)).sum(dim=1)`, it currently emits something like the following:
```python
@triton.jit
def triton_v1(A, B, C):
YXBLOCK : tl.constexpr = 32 * 32
RBLOCK : tl.constexpr = 32
yxoffset = tl.program_id(0) * YXBLOCK
yx = yxoffset + tl.arange(0, YXBLOCK)[:, None] # (YX, 1)
r_base = tl.arange(0, RBLOCK)[None, :] # (1, R)
y = yx // 2048
x = yx % 2048
acc = tl.full([YXBLOCK, RBLOCK], 0.0) # (YX, R)
for r_offset in range(0, 2048, RBLOCK):
r = r_offset + r_base # (1, R)
A_yr = tl.load(A + 2048 * y + r) # (YX, R)
B_rx = tl.load(B + 2048 * r + x) # (YX, R)
acc += A_yr * B_rx # (YX, R)
acc = tl.sum(acc, 1)[:, None] # (YX, R) → (YX, 1)
tl.store(C + yx, acc)
```
Here, matrix multiplication is expressed as a loop with elementwise multiplication and sum, without using `tl.dot`.
To address this, a new `ops.dot` node is introduced in Inductor IR to capture the matmul pattern, enabling codegen to emit `tl.dot` instead. The resulting kernel looks like:
```python
@triton.jit
def triton_v2(A, B, C):
YBLOCK : tl.constexpr = 32
XBLOCK : tl.constexpr = 32
RBLOCK : tl.constexpr = 32
yoffset = tl.program_id(1) * YBLOCK
xoffset = tl.program_id(0) * XBLOCK
y = yoffset + tl.arange(0, YBLOCK)[:, None, None] # (Y, 1, 1)
x = xoffset + tl.arange(0, XBLOCK)[None, :, None] # (1, X, 1)
r_base = tl.arange(0, RBLOCK)[None, None, :] # (1, 1, R)
acc = tl.full([YBLOCK, XBLOCK], 0.0) # (Y, X)
for r_offset in range(0, 2048, RBLOCK):
r = r_offset + r_base # (1, 1, R)
A_yxr = tl.load(A + 2048 * y + r) # (Y, 1, R)
B_yxr = tl.load(B + 2048 * r + x) # (1, X, R)
A_yr = tl.view(A_yxr, [YBLOCK, RBLOCK]) # (Y, R)
B_xr = tl.view(B_yxr, [XBLOCK, RBLOCK]) # (X, R)
acc += tl.dot(A_yr, tl.trans(B_xr)) # (Y, R) x (R, X) → (Y, X)
acc = acc[:, :, None] # (Y, X) → (Y, X, 1)
tl.store(C + 2048 * y + x, acc)
```
This version uses `tl.dot` and reshapes inputs appropriately, with the output accumulator remaining output-stationary.
#### 2. Lazy Broadcasting to Avoid Reshape and Transpose
To match the performance of PyTorch’s hand-written Triton templates, it's important to avoid reshapes and transposes. Instead of eagerly broadcasting across all axes (i.e., assigning each loop dimension to a distinct Triton axis), we lazily broadcast only the reduction axis (RBLOCK) to align with `tl.dot` semantics. For example:
```python
@triton.jit
def triton_v3(A, B, C):
YBLOCK : tl.constexpr = 32
XBLOCK : tl.constexpr = 32
RBLOCK : tl.constexpr = 32
yoffset = tl.program_id(1) * YBLOCK
xoffset = tl.program_id(0) * XBLOCK
y = yoffset + tl.arange(0, YBLOCK)[:, None] # (Y, 1) -- eager broadcast
x = xoffset + tl.arange(0, XBLOCK)[None, :] # (1, X) -- eager broadcast
r_base = tl.arange(0, RBLOCK) # (R)
acc = tl.full([YBLOCK, XBLOCK], 0.0) # (Y, X)
for r_offset in range(0, 2048, RBLOCK):
r = r_offset + r_base
A_yr = tl.load(A + 2048 * y + r[None, :]) # (Y, R) — lazy broadcast
B_rx = tl.load(B + 2048 * r[:, None] + x) # (R, X) — lazy broadcast
acc += tl.dot(A_yr, B_rx) # (Y, R) x (R, X) → (Y, X)
tl.store(C + 2048 * y + x, acc)
```
This approach eliminates the need for transposing and reshaping inputs, while still matching the expected layout for `tl.dot`.
A nice aspect of this feature is that it enables automatic fusion of operations around tl.dot, without requiring major changes to Inductor. For instance, consider the following PyTorch program:
```python
# z[w[m],n] += x[w[m],k] * y[k,n] + 3
def f(x,y,z,w):
intm = x[w,:] @ y + 3
return z.index_add_(dim=0, index=w, source=intm)
```
With this feature enabled, TorchInductor generates a fully fused Triton kernel with Tensor Core:
```python
@triton.jit
def triton_red_fused_add_index_add_mm_0(in_ptr0, in_ptr1, in_ptr2, out_ptr1, ynumel, xnumel, r0_numel, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr, R0_BLOCK : tl.constexpr):
ynumel = 128
xnumel = 128
r0_numel = 128
rnumel = r0_numel
RBLOCK: tl.constexpr = R0_BLOCK
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[:,None]
ymask = yindex < ynumel
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[None,:]
xmask = xindex < xnumel
r0_base = tl.arange(0, R0_BLOCK)
rbase = r0_base
y0 = yindex
tmp0 = tl.load(in_ptr0 + (y0), ymask, eviction_policy='evict_last')
x1 = xindex
_tmp10 = tl.full([YBLOCK, XBLOCK], 0, tl.float32)
for r0_offset in range(0, r0_numel, R0_BLOCK):
r0_index = r0_offset + r0_base
r0_mask = r0_index < r0_numel
roffset = r0_offset
rindex = r0_index
r0_2 = r0_index
tmp7 = tl.load(in_ptr2 + (x1 + 128*r0_2[:,None]), r0_mask[:,None] & xmask, eviction_policy='evict_last', other=0.0)
tmp1 = 128
tmp2 = tmp0 + tmp1
tmp3 = tmp0 < 0
tmp4 = tl.where(tmp3, tmp2, tmp0)
tl.device_assert(((0 <= tmp4) & (tmp4 < 128)) | ~(ymask), "index out of bounds: 0 <= tmp4 < 128")
tmp6 = tl.load(in_ptr1 + (r0_2[None,:] + 128*tmp4), ymask & r0_mask[None,:], eviction_policy='evict_first', other=0.0)
tmp8 = tl.dot(tmp6, tmp7, allow_tf32=False)
tmp9 = tl.broadcast_to(tmp8, [YBLOCK, XBLOCK])
tmp11 = _tmp10 + tmp9
_tmp10 = tmp11
tmp10 = _tmp10
tmp12 = 128
tmp13 = tmp0 + tmp12
tmp14 = tmp0 < 0
tmp15 = tl.where(tmp14, tmp13, tmp0)
tl.device_assert(((0 <= tmp15) & (tmp15 < 128)) | ~(ymask), "index out of bounds: 0 <= tmp15 < 128")
tmp17 = 3.0
tmp18 = tmp10 + tmp17
tl.atomic_add(out_ptr1 + (x1 + 128*tmp15), tmp18, ymask & xmask, sem='relaxed')
```
### Performance and Benefits
- Matches performance of hand-written `mm` and `bmm` templates on both `fp16` and `fp32`
- Can generate fused kernels for compound expressions such as `A @ B + C @ D`
- Achieves up to 5–10× speedup on gather–matmul–scatter patterns by eliminating intermediate tensors
- Supports multiple dtypes (`fp16`, `fp32`, `bf16`)—though not exhaustively tested.
- (maybe) more maintainable alternative to hardcoded templates
### How to Enable
You can test this feature by setting:
```python
torch._inductor.config.triton.use_dot_reduction = True
```
Prototype fork: https://github.com/nullplay/pytorch/tree/jaeyeon_fuse
Test cases: https://github.com/nullplay/pytorch/blob/jaeyeon_fuse/test/inductor/test_dot_reduction.py
Since this is a prototype, there are some limitations: 1. the prototype is implemented in a hacky way and needs refactoring,
2. excessive fusion can sometimes reduce performance (better fusion heuristics are needed), and
3. autotuning for these kernels needs to be made more robust.
---
I would appreciate feedback from PyTorch developers on this direction. Do you think enabling native `tl.dot` codegen in Inductor is a reasonable and maintainable path forward for high-performance matmul fusion?
@jansel @eellison @drisspg @blaine-rister
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,005,725,783
|
Added to docs for out_dtype arg in torch gemms
|
PaulZhang12
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151704
| true
|
3,005,699,979
|
[ONNX] Improve and sort out fallback mechanism
|
titaiwangms
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 3
|
COLLABORATOR
|
In `torch.onnx.export(..., dynamo=True)`, when `fallback=True`, the current behavior involves attempting four different strategies within the FX-based exporter (exported program-based exporter) before falling back to the TorchScript-based exporter. These strategies include `TorchExportNonStrictStrategy`, `TorchExportStrategy`, `TorchExportDraftExportStrategy`, and `JitTraceConvertStrategy`:
https://github.com/pytorch/pytorch/blob/1b267a58a1659f272d446c86751a360a2046c8c8/torch/onnx/_internal/exporter/_core.py#L1263-L1271
The original intent was to maximize export coverage by trying all known strategies. However, the additional strategies (`TorchExportStrategy`, `TorchExportDraftExportStrategy`, and `JitTraceConvertStrategy`) do not significantly improve coverage beyond what `TorchExportNonStrictStrategy` already handles. Furthermore, the complex try-catch error messages make it harder for users to understand the root cause of failures.
To simplify the process and reduce confusion, especially in light of upcoming changes https://github.com/pytorch/pytorch/issues/151693, `torch.onnx.export(..., dynamo=True)` should directly use `torch.export.export(..., strict=False)` (`TorchExportNonStrictStrategy`) as the primary strategy. If this fails, it should immediately fall back to `torch.onnx.export(..., dynamo=False)`.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,005,692,414
|
[cutlass] Define GELU_taylor<float> only if CUTLASS version is <= 380
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary:
#buildmore
https://github.com/NVIDIA/cutlass/blame/df8a550d3917b0e97f416b2ed8c2d786f7f686a3/include/cutlass/epilogue/thread/activation.h#L610
was added in v3.9 (not tagged yet)
Test Plan:
Mostly CI.
Logic seems the same.
Reviewed By: drisspg
Differential Revision: D72615240
| true
|
3,005,683,743
|
Non-deterministic alert in histc_cuda for floating types only
|
amjames
|
closed
|
[
"module: cuda",
"module: determinism",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 13
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151701
The note about atomic add only applies to floating-point types. The
implementation is deterministic for integer data types.
fixes: #151610
cc @ptrblck @msaroufim @eqy @jerryzh168 @mruberry @kurtamohler
| true
|
3,005,682,015
|
[test] larger runner for cuda
|
clee2000
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,005,676,688
|
[ONNX] Migrate torchscript-based exporter test to exported program-based exporter
|
titaiwangms
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 1
|
COLLABORATOR
|
Go through https://github.com/pytorch/pytorch/tree/main/test/onnx and keep the important ones. Specifically, [test_pytorch_onnx_onnxruntime.py](https://github.com/pytorch/pytorch/blob/main/test/onnx/test_pytorch_onnx_onnxruntime.py) contains all the bug-fix tests.
| true
|
3,005,661,821
|
Replace perf-nightly-macos with inductor-perf-nightly-macos
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default"
] | 3
|
CONTRIBUTOR
|
The workflow name was updated by https://github.com/pytorch/pytorch/pull/151155; without this change, the benchmark results weren't being updated on the dashboard.
For the PT2 compiler perf benchmark, we are still relying on this old workflow. To get rid of it, we need to update the PT2 benchmark dashboard to use the new benchmark database (cc @yangw-dev)
The results are there on the new database:
```
SELECT
*
FROM
oss_ci_benchmark_v3
WHERE
workflow_id = 14510035576
```
but not on the old database:
```
SELECT
*
FROM
inductor_torch_dynamo_perf_stats
WHERE
workflow_id = 14510035576
```
| true
|
3,005,636,111
|
consolidate ATen/test/dispatch_key_set_test.cpp with rest of DispatchKeySet tests
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151682
* __->__ #151697
* #151630
* #151629
* #151628
* #151627
* #151626
There doesn't seem to be a reason to have two test files for this.
Differential Revision: [D73274020](https://our.internmc.facebook.com/intern/diff/D73274020/)
| true
|
3,005,594,406
|
ci: Ensure runners have a prefix
|
seemethere
|
closed
|
[
"topic: not user facing",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 10
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151696
Ensures that runners that should have a prefix do have a prefix. These
runners were continuously queueing since most of our pool consists of
ephemeral-type runners, so we should switch these over to use the
prefixes as well.
Future work for this could include submitting a linter to check these
and ensure that they're all using the correct prefix
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
3,005,587,509
|
FP8 Support for FlexAttention
|
harveyp123
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
For FlexAttention, is it possible to add FP8 support for inference? FP8 + FlashAttention 3 seems to be a great success; it would be great if FlexAttention could support FP8 inference.
### Alternatives
_No response_
### Additional context
_No response_
cc @clee2000 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,005,582,461
|
[ONNX] Test 1.18 rc ONNX
|
titaiwangms
|
closed
|
[
"open source",
"topic: not user facing"
] | 23
|
COLLABORATOR
|
Test ONNX 1.18rc with the current tests
| true
|
3,005,560,533
|
[ONNX] Flip `dynamo` default to True in torch.onnx.export
|
titaiwangms
|
open
|
[
"module: onnx",
"triaged",
"onnx-triaged",
"oncall: pt2",
"oncall: export"
] | 3
|
COLLABORATOR
|
### NOTE
The task will not commence until the sub-issues outlined below are addressed or resolved.
### Description
The torch.onnx.export function currently defaults the dynamo parameter to False. However, the dynamo=True path is the recommended approach for ONNX export, as it leverages the modern torch.export.export logic introduced in PyTorch 2.0. This task proposes flipping the default value of dynamo to True to align with the latest export pipeline and encourage users to adopt the updated functionality.
Before implementing this change, several sub-issues need to be addressed to ensure a smooth transition for users.
### Proposed Change
Update [torch.onnx.export](https://github.com/pytorch/pytorch/blob/e434a9152e1a749ca45918c47e7a1690e9c21e4a/torch/onnx/__init__.py#L115) to set `dynamo=True` as default value:
```python
def export(
model: torch.nn.Module
| torch.export.ExportedProgram
| torch.jit.ScriptModule
| torch.jit.ScriptFunction,
args: tuple[Any, ...] = (),
f: str | os.PathLike | None = None,
*,
kwargs: dict[str, Any] | None = None,
export_params: bool = True,
verbose: bool | None = None,
input_names: Sequence[str] | None = None,
output_names: Sequence[str] | None = None,
opset_version: int | None = None,
dynamic_axes: Mapping[str, Mapping[int, str]]
| Mapping[str, Sequence[int]]
| None = None,
keep_initializers_as_inputs: bool = False,
dynamo: bool = True, # Flip default to True
...
) -> ONNXProgram | None:
```
### Rationale
1. The dynamo=True path is the recommended approach for ONNX export, as it uses the modern (PyTorch 2.0 ) [torch.export.export](https://pytorch.org/docs/stable/export.html#torch.export.export) logic.
2. Flipping the default will encourage users to adopt the new export logic without requiring explicit configuration.
3. This change aligns with the deprecation of older options and the push towards modernizing the ONNX export pipeline.
### Impact
1. `dynamic_axes` will be replaced by `dynamic_shapes`, as dynamism is now decided in `torch.export.export`. The main difference is that while `dynamic_axes` is defined on the flattened inputs, `dynamic_shapes` requires the nested tree structure of the input arguments. Although auto-conversion will be provided, there are corner cases; a minimal before/after sketch is shown after this list. More detail can be found in https://github.com/pytorch/pytorch/issues/150940 (TODO: summarize the pitfalls)
2. Loops in the models will need to be re-written with torch.scan. https://github.com/pytorch/pytorch/issues/151327 and https://github.com/pytorch/pytorch/issues/151564
3. Control flows (if/else) in the models will need to be re-written with [torch.cond](https://pytorch.org/docs/stable/generated/torch.cond.html). For example, https://github.com/pytorch/pytorch/issues/144691, https://pytorch.org/tutorials/beginner/onnx/export_control_flow_model_to_onnx_tutorial.html
4. torch.jit.ScriptModule and torch.jit.ScriptFunction will automatically fall back to the deprecated TorchScript-based exporter (#151714 )
5. Improve and finalize fallback mechanism (#151703 )
6. Migrate torchscript-based ONNX tests to the new exporter (https://github.com/pytorch/pytorch/issues/129279)
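A minimal before/after sketch of impact 1, assuming a trivial module (exact accepted `dynamic_shapes` forms may vary by release):
```python
import torch


class M(torch.nn.Module):
    def forward(self, x):
        return x * 2


m, x = M(), torch.randn(2, 3)

# Old TorchScript-based path: flat dynamic_axes keyed by input name.
torch.onnx.export(m, (x,), "m_old.onnx", input_names=["x"],
                  dynamic_axes={"x": {0: "batch"}})

# New dynamo=True path: nested dynamic_shapes matching the input tree,
# as consumed by torch.export.export.
batch = torch.export.Dim("batch")
torch.onnx.export(m, (x,), "m_new.onnx", dynamo=True,
                  dynamic_shapes={"x": {0: batch}})
```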
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @xadupre @justinchuby @shubhambhokare1 @gramalingam
| true
|
3,005,554,059
|
The docstring linter should not force overridden methods to be documented
|
rec
|
closed
|
[
"module: docs",
"module: lint",
"triaged",
"actionable"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
The [docstring linter](https://github.com/pytorch/pytorch/blob/28974a1ec3b921809c20a1217178da3792e5c545/tools/linter/adapters/docstring_linter.py) is asking me to document [this method](https://github.com/pytorch/pytorch/blob/28974a1ec3b921809c20a1217178da3792e5c545/torch/_inductor/ir.py#L6058) that I changed, but that method overrides and implements a method on the parent class - no additional documentation is needed.
The solution is to exempt methods that have the `typing_extensions.override` decorator from consideration, and to mention this option in the error message:
Error (DOCSTRING_LINTER) No docstring found for function 'codegen' (82 lines)
If this function is a method override, decorate it with @typing_extensions.override
(I'm on the case.)
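A hedged sketch of the exemption in use (illustrative classes, not the inductor code):
```python
from typing_extensions import override


class Base:
    def codegen(self) -> None:
        """Documented once, on the base class."""


class Child(Base):
    @override  # the linter would skip the docstring requirement here
    def codegen(self) -> None:
        ...
```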
cc @svekars @sekyondaMeta @AlannaBurke @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,005,552,609
|
Turn on static cuda launcher in OSS
|
jamesjwu
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 28
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151691
After a few small bugfixes on tests (to make it so we throw/catch similar exceptions to triton), I think we're ready to flip the switch and use StaticCudaLauncher on by default in OSS.
The initial round of benchmarks looks good, with average compilation time going down by a few percent:
<img width="828" alt="image" src="https://github.com/user-attachments/assets/cad03e09-b4d6-49a7-a9e5-6068d1c0bd5c" />
With no changes to runtime perf:
<img width="823" alt="image" src="https://github.com/user-attachments/assets/3fcd435e-1057-43f4-878b-8d66a3812a10" />
There are a few noisy models I want to double-check, though, so I will run some more tests before accepting review.
Full benchmark results, showing a ~5% compile time improvement across the board:
https://hud.pytorch.org/benchmark/huggingface/inductor_with_cudagraphs?dashboard=torchinductor&startTime=Wed%2C%2016%20Apr%202025%2002%3A31%3A12%20GMT&stopTime=Wed%2C%2023%20Apr%202025%2002%3A31%3A12%20GMT&granularity=hour&mode=training&dtype=amp&deviceName=cuda%20(a100)&lBranch=gh/jamesjwu/139/orig&lCommit=cc45c8667fa23dec16ca50002d9504a34688ca5c&rBranch=main&rCommit=2a9afdae81d0dde98e96d7e3c9ca840e241e5405
<img width="1482" alt="image" src="https://github.com/user-attachments/assets/6e6a7f39-7f44-459f-9845-9a37f084ea82" />
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,549,990
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE_256_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE_256_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40784445042).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE_256_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,549,559
|
DISABLED test_non_equal_head_dims_score_mod2_float16_head_dims1_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod2_float16_head_dims1_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40784309916).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod2_float16_head_dims1_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,549,529
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE2_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE2_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40784445042).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE2_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,549,462
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod3_BLOCK_SIZE_256_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod3_BLOCK_SIZE_256_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40784445042).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod3_BLOCK_SIZE_256_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 588.12 MiB is free. Including non-PyTorch memory, this process has 21.46 GiB memory in use. Of the allocated memory 6.69 GiB is allocated by PyTorch, and 14.50 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod3_BLOCK_SIZE_256_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,549,453
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE_256_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE_256_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40784445042).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE_256_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 864.12 MiB is free. Including non-PyTorch memory, this process has 21.19 GiB memory in use. Of the allocated memory 6.71 GiB is allocated by PyTorch, and 14.22 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod5_BLOCK_SIZE_256_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,484,322
|
DTensor HOP call in TorchDispatchMode
|
XilunWu
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151903
* __->__ #151685
* #151497
* #151507
* #151495
## Test
`pytest test/distributed/tensor/test_attention.py -s -k test_ring_flex_attention`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,005,469,509
|
More fix for aot_export_module name collision during unlifting
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary: Also check the module's named buffers and parameters when resolving name collisions
Test Plan:
```
buck2 run mode/dev-nosan caffe2/test/inductor:test_aot_inductor -- -r aoti_constant_tensor_name_collision
```
Differential Revision: D73264885
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,005,436,049
|
[c10d][fr] Fix a bug when first rank is not zero in the script
|
fduwjj
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: Further testing of the script found that we shouldn't always assume rank 0 is the first rank, so we need to check all entries and see whether any of them is a P2P op for this coalesced group.
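Roughly, the check described above looks like this (an illustrative sketch only; the entry field names are assumptions, not the actual script code):
```python
def coalesced_group_has_p2p(entries):
    # Scan every flight-recorder entry of the coalesced group instead of
    # assuming the first entry belongs to rank 0.
    return any(
        entry.get("profiling_name", "").startswith(("nccl:send", "nccl:recv"))
        for entry in entries
    )
```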
Test Plan: Directly test with corner case.
Differential Revision: D73266257
| true
|
3,005,423,055
|
Speed up OperatorEntry construction by avoiding updateDispatchTableFull_
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: performance",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* #151849
* #151810
* #151807
* #151806
* #151805
* #151804
* #151803
* #151802
* #151801
* #151800
* __->__ #151682
The purpose of the updateDispatchTableFull_ call is, according to the comment, just to pick up fallback kernels if there are any. We can implement that directly more efficiently.
Differential Revision: [D73129447](https://our.internmc.facebook.com/intern/diff/D73129447/)
| true
|
3,005,360,790
|
Integrate with ONNX 1.18.0 release branch
|
ramkrishna2910
|
open
|
[
"module: onnx",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
We are releasing ONNX 1.18.0. A release branch has been created (https://github.com/onnx/onnx/tree/rel-1.18.0). Release candidates are also available from TestPyPI: pip install -i https://test.pypi.org/simple/ --pre onnx
Updates to the op list and other key updates can be found here: https://github.com/onnx/onnx/wiki/Logistics-for-ONNX-Release-1.18.0
It is important to integrate the ONNX release branch ASAP so that any issues and incompatibilities can be detected and resolved before the ONNX release.
In case a bug in ONNX is detected during integration of ONNX 1.18.0, please open a [ONNX Bug Report](https://github.com/onnx/onnx/issues/new?assignees=&labels=bug&projects=&template=bug.md&title=) and tag ONNX Release Manager @ramkrishna2910 so that the bug is fixed in the ONNX release branch.
### Alternatives
None
### Additional context
None
| true
|
3,005,321,053
|
undefined reference to `at::native::blas_impl::fp16_gemv_notrans(int, int, float, c10::Half const*, int, c10::Half const*, int, float, c10::Half*, int)'
|
shink
|
open
|
[
"module: build",
"triaged",
"module: regression",
"module: arm"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Linker error when building PyTorch from source:
```
/usr/bin/ld: /home/devuser/workspace/pytorch/build/lib/libtorch_cpu.so: undefined reference to `at::native::blas_impl::fp16_gemv_notrans(int, int, float, c10::Half const*, int, c10::Half const*, int, float, c10::Half*, int)'
```
Everything builds fine after reverting this commit: https://github.com/pytorch/pytorch/commit/32c79da789af84312a0db2de19211a7c57196ba7
Here's my build.sh:
```bash
#!/bin/bash
git submodule sync
git submodule update --init --recursive
python setup.py clean
export DEBUG=1
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop
```
### Versions
https://github.com/pytorch/pytorch/commit/783be8f93248ca3af24b968bdf84188f5a3257d1
```
$ python torch/utils/collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 17.0.6 (958fd14d28f0)
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:19:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.19.90-vhulk2211.3.0.h1654.eulerosv2r10.aarch64-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: HiSilicon
Model name: Kunpeng-920
Model: 0
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 4
Stepping: 0x1
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
L1d cache: 12 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 96 MiB (192 instances)
L3 cache: 192 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
NUMA node4 CPU(s): 96-119
NUMA node5 CPU(s): 120-143
NUMA node6 CPU(s): 144-167
NUMA node7 CPU(s): 168-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.0
[pip3] optree==0.13.0
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.22.0.dev20250318
[pip3] triton==3.3.0+git3b4a9fbf
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250318 pypi_0 pypi
[conda] triton 3.3.0+git3b4a9fbf dev_0 <develop>
```
cc @malfet @seemethere @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
3,005,313,374
|
[dtensor] op_schema recursive check for symints
|
IvanKobzarev
|
open
|
[
"oncall: distributed",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151679
Fix for https://github.com/pytorch/pytorch/issues/151106
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,005,233,281
|
fix spammy library deinit errors when user passes an invalid TORCH_LOGS argument
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 9
|
CONTRIBUTOR
|
fixes https://github.com/pytorch/pytorch/issues/151055. Thanks @desertfire for the patch that fixed this.
I was a bit careful about the test - I wanted to make sure the test accurately ensures that we don't regress and that our error message is not spammy when users enter an invalid `TORCH_LOGS=...` argument. But I tried to avoid using expecttests, since people occasionally add new logging artifacts and I didn't want to add too much churn by forcing this to fail CI.
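A rough sketch of the kind of check I mean (not the actual test in this PR; the artifact name and exact error wording are assumptions):
```python
import os
import subprocess
import sys

# Run a child interpreter with an invalid TORCH_LOGS value.
env = dict(os.environ, TORCH_LOGS="not_a_real_artifact")
proc = subprocess.run(
    [sys.executable, "-c", "import torch._dynamo"],
    env=env,
    capture_output=True,
    text=True,
)
# The point of the fix: stderr should contain one clear message about the
# invalid TORCH_LOGS setting, not repeated library-deinit errors.
print(proc.stderr)
```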
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151678
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,005,217,244
|
[BE][Easy]: Change typing to DimsType in dim_reduction
|
Skylion007
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 23
|
COLLABORATOR
|
Use prims_common DimsType to reduce duplication of DType
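A small illustrative sketch (mine, not the diff itself): `DimsType` from `torch._prims_common` covers both a single dim and a sequence of dims, so annotations don't need a hand-rolled union.
```python
from torch._prims_common import DimsType


def describe_reduction(dim: DimsType) -> str:
    # DimsType accepts an int or a sequence of ints.
    dims = (dim,) if isinstance(dim, int) else tuple(dim)
    return f"reducing over dims {dims}"
```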
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,005,216,362
|
[aot] Set config partitioner recompute_views True by default
|
IvanKobzarev
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151676
Differential Revision: [D73260370](https://our.internmc.facebook.com/intern/diff/D73260370)
| true
|
3,005,206,921
|
DISABLED test_cublas_addmm_reduced_precision_fp16_accumulate_size_1000_cuda_float16 (__main__.TestMatmulCudaCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: linear algebra",
"skipped"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cublas_addmm_reduced_precision_fp16_accumulate_size_1000_cuda_float16&suite=TestMatmulCudaCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40777328756).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cublas_addmm_reduced_precision_fp16_accumulate_size_1000_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_matmul_cuda.py`
cc @clee2000 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,005,202,986
|
[BE][Easy]: Simplify reversed call in graph matcher
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx",
"release notes: AO frontend"
] | 6
|
COLLABORATOR
|
Removes another `list()` call before `reversed()` that is no longer necessary, since ItemViews support `reversed()` directly.
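For context, a minimal sketch of the simplification (my example, not the patched code): since Python 3.8, dict views are directly reversible, so wrapping them in `list()` first is unnecessary.
```python
d = {"a": 1, "b": 2, "c": 3}

# Before: for key, value in reversed(list(d.items())): ...
# After: the items view supports reversed() directly.
for key, value in reversed(d.items()):
    print(key, value)
```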
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,005,156,421
|
[BE][Easy]: Simplify ModuleList reversed method
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Removes unnecessary list calls now that we are in Python 3.9 and KeyViews implement reversed directly.
| true
|
3,004,939,136
|
[torch] Expose PCI info from CUDA device
|
efiks
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
Summary:
PR #125083 added CUDA device UUID info, but due to the Meta-internal [version of ROCm, the code was excluded](https://github.com/pytorch/pytorch/pull/125083?fbclid=IwY2xjawJvLnNleHRuA2FlbQIxMQABHlY55crrkTqWBWTsr2HVfuqnZ3R1GHR3o9Kf1o3h3uvyawEmCEdhdT48iY1P_aem_8tfrGrWE9SxFYasGfH8kCQ#issuecomment-2103315320).
This change will ensure the Meta-internal code is built and PCI info is available.
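For reference, a hedged sketch of how the exposed fields might be read from Python; `uuid` landed with PR #125083, while the PCI attribute names below are assumptions for illustration, so `getattr` keeps the sketch runnable on builds that do not expose them.
```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.uuid)
    # Attribute names are assumptions for illustration.
    for name in ("pci_bus_id", "pci_device_id", "pci_domain_id"):
        print(name, getattr(props, name, "n/a"))
```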
Test Plan: pass CI
Differential Revision: D73253426
| true
|
3,004,938,510
|
Migrate Windows Arm64 workflows to the new GitHub Actions Runner
|
iremyux
|
open
|
[
"module: windows",
"module: ci",
"triaged",
"module: arm"
] | 1
|
COLLABORATOR
|
Windows Arm64 workflows and jobs should be migrated to use the newly available GitHub Actions runner, as described [here](https://github.com/actions/partner-runner-images/blob/main/images/arm-windows-11-image.md). This update ensures compatibility, improves performance, and leverages officially supported runner images provided by GitHub.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @Blackhex @seemethere @malfet @pytorch/pytorch-dev-infra @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
3,004,901,642
|
[inductor] [cuda] [fake tensor] `torch.ones(x.size(0))` becomes a fake tensor for `torch.diagonal_scatter`
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: dynamo",
"module: pt2-dispatcher"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `torch.diagonal_scatter` throws a fake tensor device-propagation error under `torch.compile`, while eager succeeds.
**device**: only cuda
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
import os
os.environ['TORCHDYNAMO_VERBOSE'] = '1'
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
y = torch.ones(x.size(0))
x = torch.diagonal_scatter(x, y)
return x
model = Model()
x = torch.rand(1, 2)
inputs = [x]
def run_test(model, inputs, device, backend):
torch.manual_seed(0)
model = model.to(device)
inputs = [x.to(device) for x in inputs]
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(f"succeed on {backend}")
except Exception as e:
print(e)
run_test(model, inputs, 'cuda', 'eager')
run_test(model, inputs, 'cuda', 'inductor')
```
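As a side note (my observation, not a confirmed fix), making the device explicit avoids the mixed cpu/cuda call that trips fake tensor device propagation:
```python
import torch

x = torch.rand(1, 2, device="cuda")
# Constructing y on the same device as x keeps both operands on cuda,
# matching what eager does implicitly here.
y = torch.ones(x.size(0), device=x.device)
out = torch.diagonal_scatter(x, y)
```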
### Error logs
```
succeed on eager
Dynamo failed to run FX node with fake tensors: call_function <built-in method diagonal_scatter of type object at 0x7f4ca6e5a2c0>(*(FakeTensor(..., device='cuda:0', size=(1, 2)), FakeTensor(..., size=(1,))), **{}): got RuntimeError('Unhandled FakeTensor Device Propagation for aten.diagonal_scatter.default, found two different devices cuda:0, cpu')
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames @bdhirsh
| true
|
3,004,852,713
|
[Infra] Jobs got frequently cancelled, sometimes mid-checkout
|
malfet
|
open
|
[
"module: ci",
"triaged",
"module: flaky-tests"
] | 13
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I'm looking at the hud now and see some red:
https://hud.pytorch.org/hud/pytorch/pytorch/14293c237729a99440470206c0f791a8e76224ec/1?per_page=50&mergeEphemeralLF=true
But then clicking on those changes shows that the jobs have been cancelled; for example, see the following workflows:
- https://github.com/pytorch/pytorch/actions/runs/14532623857/job/40775614824
- https://github.com/pytorch/pytorch/actions/runs/14535158684/job/40782040158
- https://github.com/pytorch/pytorch/actions/runs/14535138677/job/40781987650
- https://github.com/pytorch/pytorch/actions/runs/14533573687/job/40778262513
- https://github.com/pytorch/pytorch/actions/runs/14532623857/job/40775614821
- https://github.com/pytorch/pytorch/actions/runs/14535158685/job/40782032942
- https://github.com/pytorch/pytorch/actions/runs/14535138648/job/40781979892
Cancellation reason (not sure if valid) is "Cancelled by pytorchmergebot"... Is this a glitch in concurrency rules?
<img width="396" alt="Image" src="https://github.com/user-attachments/assets/6eec52b6-9668-4628-9637-0d446d898843" />
### Versions
CI
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @pytorch/pytorch-dev-infra @clee2000
| true
|
3,004,710,575
|
[autodeps2] Replace third-party/pyyaml with third-party/pypi/pyyaml
|
kkolur76
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary: We should use the pypi version.
Test Plan: CI
Differential Revision: D73211869
| true
|
3,004,660,226
|
[MPS] MultiheadAttention with masks and dropout produces NaNs
|
matkozak
|
open
|
[
"needs reproduction",
"triaged",
"module: macos",
"module: NaNs and Infs",
"module: correctness (silent)",
"module: mps"
] | 6
|
NONE
|
### 🐛 Describe the bug
Summary:
On the MPS backend, combining MultiheadAttention with attention masks and dropout produces NaNs, while CPU execution works correctly.
I tried trimming down my code as much as possible, but I've run into some seriously non-deterministic behavior; this is a minimal snippet I've built which reproduces it every time. I am willing to investigate deeper, I just need some guidance.
A curious quirk is that adding a no-op like `x = x + 0` magically "fixes" the problem (see comments).
Minimal reproduction:
```python
import torch
import torch.nn as nn
def check_tensor(x: torch.Tensor, msg: str):
print(
f"Has NaNs: {torch.isnan(x).any().item()} - Range: {x.min().item():.3f} to {x.max().item():.3f} - {msg}"
)
class Block(nn.Module):
def __init__(
self,
embed_dim: int = 64,
res_dropout: float = 0.1,
attn_dropout: float = 0.1,
) -> None:
super().__init__()
self.attention = nn.MultiheadAttention(
embed_dim, num_heads=1, dropout=attn_dropout, batch_first=True
)
self.residual_dropout = nn.Dropout(res_dropout)
def forward(self, x: torch.Tensor):
check_tensor(x, "input")
seq_len = x.size(1)
attn_mask = torch.triu(
torch.ones((seq_len, seq_len)),
diagonal=1,
).to(x.device, torch.bool)
padding_mask = torch.zeros((x.size(0), seq_len)).to(x.device, torch.bool)
padding_mask[:, seq_len // 2 :] = True # Simulate padding in second half
attn_out, _ = self.attention(
x, x, x, attn_mask=attn_mask, key_padding_mask=padding_mask
)
# Without this, NaNs appear
# x = x + 0 # <--- UNCOMMENT THIS TO "FIX" NAN OUTPUTS
# check_tensor(x, "after attn") # <--- or this
x = x + self.residual_dropout(attn_out)
check_tensor(x, "output")
return x
def test_device(model: nn.Module, x: torch.Tensor, d: str):
device = torch.device(d)
x, model = x.to(device), model.to(device)
print(f"Testing for NaNs on {device.type}...")
model(x)
if __name__ == "__main__":
torch.manual_seed(2137)
batch_size, seq_len, dim = 32, 16, 64
x = torch.randn(batch_size, seq_len, dim)
model = Block(res_dropout=0.1, attn_dropout=0.1)
for d in ("cpu", "mps"):
test_device(model, x, d)
```
Expected output:
MPS and CPU give the same output (preferably no NaNs!).
Output:
```
Testing for NaNs on cpu...
Has NaNs: False - Range: -4.285 to 4.010 - input
Has NaNs: False - Range: -4.340 to 4.066 - output
Testing for NaNs on mps...
Has NaNs: False - Range: -4.285 to 4.010 - input
Has NaNs: True - Range: nan to nan - output
```
### Versions
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.7.5 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.9 (main, Mar 11 2025, 17:41:32) [Clang 20.1.0 ] (64-bit runtime)
Python platform: macOS-14.7.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,004,606,245
|
Encountered NCCL stuck, I have got the flight recorder trace
|
XZLancer
|
closed
|
[
"oncall: distributed",
"module: nccl"
] | 1
|
NONE
|
I encountered an NCCL stuck (hang) error, and below is the flight recorder trace. The error occurs at a random position within an epoch. It seems a size mismatch is causing the all-reduce error. How should I debug further to resolve the issue? Thanks for any reply.
```
Collective sequence number: 1487 has errors
internal record id: 1486
group info: 0:default_pg
collective: nccl:all_reduce
input sizes: [[40]]
output sizes: [[40]]
world size: 2
expected ranks: {0, 1}
collective state: scheduled
error msg: Culprit rank 1; Error type: COLLECTIVE_STATE_MISMATCH, Expected state: 'scheduled' does not match found state: 'completed'
collective stack trace:
all_reduce at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:2501
wrapper at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/distributed/c10d_logger.py:83
backward at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/nn/modules/_functions.py:158
apply at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/autograd/function.py:307
Collective sequence number: 1488 has errors
internal record id: 1487
group info: 0:default_pg
collective: nccl:all_reduce
input sizes: [[16]]
output sizes: [[16]]
world size: 2
expected ranks: {0, 1}
collective state: scheduled
error msg: Culprit rank 1; Error type: SIZE_OR_SYNTAX_MISMATCH, Expected input sizes: '[[16]]' does not match found input sizes: '[[40]]'
collective stack trace:
all_reduce at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:2501
wrapper at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/distributed/c10d_logger.py:83
backward at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/nn/modules/_functions.py:158
apply at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/autograd/function.py:307
_engine_run_backward at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/autograd/graph.py:825
backward at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/autograd/__init__.py:347
backward at /home/user/anaconda3/envs/torch-latest/lib/python3.9/site-packages/torch/_tensor.py:581
train at /home/user/projects/pisca-speckle-detection/train.py:286
main at /home/user/projects/pisca-speckle-detection/train.py:530
<module> at /home/user/projects/pisca-speckle-detection/train.py:584
```
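One way to narrow this down (a suggestion from me, not from the trace): rerun with `TORCH_DISTRIBUTED_DEBUG=DETAIL`, which makes c10d validate collectives across ranks and raise at the first mismatched call instead of hanging. A minimal sketch, set before the process group is initialized:
```python
import os

# Must be set before torch.distributed.init_process_group() is called.
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
os.environ.setdefault("NCCL_DEBUG", "WARN")

import torch.distributed as dist  # noqa: E402

# ... existing init + training code; DETAIL mode wraps the process group
# so mismatched all_reduce shapes fail fast with a readable error.
```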
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,004,577,554
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE_128_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE_128_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40767878528).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE_128_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,004,577,325
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE3_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE3_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40767899147).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE3_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,004,577,246
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE_256_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE_256_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40767899147).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE_256_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 452.12 MiB is free. Including non-PyTorch memory, this process has 21.60 GiB memory in use. Of the allocated memory 6.69 GiB is allocated by PyTorch, and 14.64 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod4_BLOCK_SIZE_256_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,004,577,237
|
DISABLED test_builtin_score_mods_different_block_size_bfloat16_score_mod7_BLOCK_SIZE3_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_bfloat16_score_mod7_BLOCK_SIZE3_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40767648035).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_bfloat16_score_mod7_BLOCK_SIZE3_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,004,576,162
|
DISABLED test_cublas_addmm_reduced_precision_fp16_accumulate_size_10000_cuda_float16 (__main__.TestMatmulCudaCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: linear algebra",
"skipped"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cublas_addmm_reduced_precision_fp16_accumulate_size_10000_cuda_float16&suite=TestMatmulCudaCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40767648033).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cublas_addmm_reduced_precision_fp16_accumulate_size_10000_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_matmul_cuda.py", line 166, in test_cublas_addmm_reduced_precision_fp16_accumulate
self.cublas_addmm(size, dtype, False, True)
File "/var/lib/jenkins/workspace/test/test_matmul_cuda.py", line 128, in cublas_addmm
res_cuda = torch.addmm(m_input, m_1, m_2, beta=m_beta.item())
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Cdesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
To execute this test, run the following from the base repo dir:
python test/test_matmul_cuda.py TestMatmulCudaCUDA.test_cublas_addmm_reduced_precision_fp16_accumulate_size_10000_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_matmul_cuda.py`
cc @clee2000 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,004,576,159
|
DISABLED test_kv_batch_broadcast_float16_batch_dims2_head_dims1_score_mod2_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_kv_batch_broadcast_float16_batch_dims2_head_dims1_score_mod2_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40767675430).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_kv_batch_broadcast_float16_batch_dims2_head_dims1_score_mod2_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1227, in test_kv_batch_broadcast
self.run_test(
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.3034 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 8, in forward
add = torch.ops.aten.add.Tensor(mul_2, mul_1); mul_2 = mul_1 = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 640.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 348.12 MiB is free. Including non-PyTorch memory, this process has 21.70 GiB memory in use. Of the allocated memory 5.03 GiB is allocated by PyTorch, and 16.41 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_kv_batch_broadcast_float16_batch_dims2_head_dims1_score_mod2_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,004,576,158
|
DISABLED test_load_rel_bias_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_load_rel_bias_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40767675430).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_load_rel_bias_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1562, in test_load_rel_bias
self.run_test(bias_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 612.12 MiB is free. Including non-PyTorch memory, this process has 21.44 GiB memory in use. Of the allocated memory 6.93 GiB is allocated by PyTorch, and 14.25 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_load_rel_bias_float16_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,004,574,407
|
nn.Linear same weight and bias running on cuda float16 with input shape [2, 4] vs [1, 4] + [1, 4] got different results.
|
ganlvtech
|
closed
|
[
"module: cuda",
"triaged",
"module: cublas",
"module: half"
] | 3
|
NONE
|
### 🐛 Describe the bug
```python
import torch
for i in [44191, 47224]:
torch.manual_seed(i)
torch.cuda.manual_seed(i)
x = torch.empty(1, 4, device='cuda:0', dtype=torch.float16).uniform_(-1, 1)
x = torch.cat([x, torch.zeros_like(x)], dim=0)
weight = torch.empty(1, 4, device='cuda:0', dtype=torch.float16).uniform_(-1, 1)
bias = torch.zeros(1, device='cuda:0', dtype=torch.float16)
result_1 = torch.nn.functional.linear(x[:1], weight, bias)
result_2 = torch.nn.functional.linear(x, weight, bias)[:1]
equal = torch.all(torch.eq(result_1, result_2)).item()
max_diff = (result_1 - result_2).max()
print('i', i)
print('equal', equal)
print('max_diff', max_diff)
print('weight', weight)
print('bias', bias)
print('x', x)
print('result_1', result_1)
print('result_2', result_2)
```
```plain
i 44191
equal False
max_diff tensor(-1.5259e-05, device='cuda:0', dtype=torch.float16)
weight tensor([[-0.6021, 0.8657, 0.3923, 0.1321]], device='cuda:0',
dtype=torch.float16)
bias tensor([0.], device='cuda:0', dtype=torch.float16)
x tensor([[-0.6870, -0.8218, 0.7827, 0.1174],
[ 0.0000, 0.0000, 0.0000, 0.0000]], device='cuda:0',
dtype=torch.float16)
result_1 tensor([[0.0248]], device='cuda:0', dtype=torch.float16)
result_2 tensor([[0.0248]], device='cuda:0', dtype=torch.float16)
i 47224
equal False
max_diff tensor(-0.0001, device='cuda:0', dtype=torch.float16)
weight tensor([[ 0.7158, 0.3010, -0.0497, -0.9819]], device='cuda:0',
dtype=torch.float16)
bias tensor([0.], device='cuda:0', dtype=torch.float16)
x tensor([[-0.7119, 0.5000, 0.7407, -0.1523],
[ 0.0000, 0.0000, 0.0000, 0.0000]], device='cuda:0',
dtype=torch.float16)
result_1 tensor([[-0.2463]], device='cuda:0', dtype=torch.float16)
result_2 tensor([[-0.2462]], device='cuda:0', dtype=torch.float16)
```
It only happens on CUDA with float16. Maybe it's a bug in cuBLAS?
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.9 (main, Mar 8 2023, 10:47:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
GPU 4: NVIDIA GeForce RTX 4090
GPU 5: NVIDIA GeForce RTX 4090
GPU 6: NVIDIA GeForce RTX 4090
GPU 7: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0+cu124 pypi_0 pypi
[conda] torchvision 0.21.0+cu124 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @ptrblck @msaroufim @eqy @jerryzh168 @csarofeen @xwang233
| true
|
3,004,509,151
|
Device Check in torch.linalg.solve_triangular
|
hegdeadithyak
|
closed
|
[
"open source",
"release notes: linalg_frontend"
] | 2
|
NONE
|
Fixes #142048
- [ ] Added Device Check in `pytorch/torch/testing/_internal/opinfo/definitions`
- [x] Added Test in `pytorch/test/test_linalg.py`
@lezcano I have no CUDA on my device, hence the tests got skipped, and I'm unsure whether this is the right approach.
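Below is a minimal sketch of the kind of device-check test described above; it assumes CUDA is available and that the fix makes a CPU/CUDA device mismatch raise a `RuntimeError` (both are assumptions, not confirmed here).
```python
# Hypothetical device-mismatch test sketch (assumes CUDA is available and that
# mismatched input devices raise RuntimeError after the fix).
import torch

A = torch.randn(3, 3, device="cuda").triu_()  # upper-triangular system on CUDA
B = torch.randn(3, 2)                         # right-hand side deliberately left on CPU
try:
    torch.linalg.solve_triangular(A, B, upper=True)
except RuntimeError as e:
    print("device mismatch rejected:", e)
```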
| true
|
3,004,496,880
|
Add a custom profiler configuration option
|
fwenguang
|
open
|
[
"triaged",
"open source"
] | 3
|
CONTRIBUTOR
|
We aim to pass some configuration options to our custom Kineto backend via ExperimentalConfig, so we added a `custom_profiler_config` parameter.
Requires https://github.com/pytorch/kineto/pull/1077.
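A minimal sketch of how such a setting could be threaded through the existing profiler hooks; `_ExperimentalConfig` and the `experimental_config` argument already exist, while the `custom_profiler_config` field is the addition proposed here and is shown only as a commented-out assumption.
```python
import torch
from torch.profiler import profile, ProfilerActivity
from torch._C._profiler import _ExperimentalConfig

# Existing experimental knobs already flow to the Kineto backend this way.
cfg = _ExperimentalConfig(verbose=True)
# Proposed (not yet merged) knob from this PR, shown purely as an assumption:
# cfg = _ExperimentalConfig(custom_profiler_config="backend=custom;level=2")

with profile(activities=[ProfilerActivity.CPU], experimental_config=cfg) as prof:
    torch.randn(64, 64) @ torch.randn(64, 64)
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```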
| true
|
3,004,400,722
|
[Intel GPU] Use user-friendly err msg in mm
|
ZhiweiYan-96
|
open
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/xpu"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151655
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,004,355,224
|
[sparse][exported-graph] Wrong Exported Graph for the Sparse Tensor
|
VimalWill
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 5
|
NONE
|
The exported graph neither reflects the incoming sparse tensor's layout nor raises an error.
```
import torch
import torch.export
import torch.sparse
class BikNet(torch.nn.Module):
def __init__(self):
super(BikNet, self).__init__()
return
def forward(self, x):
return x.sum()
biknet = BikNet()
biknet.eval()
dense_input = torch.ones(64, 64)
sparse_input = dense_input.to_sparse_csr()
print(sparse_input)
prog2 = torch.export.export(biknet, args=(sparse_input,))
print(prog2)
```
```
UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /pytorch/aten/src/ATen/SparseCsrTensorImpl.cpp:53.)
sparse_input = dense_input.to_sparse_csr()
tensor(crow_indices=tensor([ 0, 64, 128, 192, 256, 320, 384, 448,
512, 576, 640, 704, 768, 832, 896, 960,
1024, 1088, 1152, 1216, 1280, 1344, 1408, 1472,
1536, 1600, 1664, 1728, 1792, 1856, 1920, 1984,
2048, 2112, 2176, 2240, 2304, 2368, 2432, 2496,
2560, 2624, 2688, 2752, 2816, 2880, 2944, 3008,
3072, 3136, 3200, 3264, 3328, 3392, 3456, 3520,
3584, 3648, 3712, 3776, 3840, 3904, 3968, 4032,
4096]),
col_indices=tensor([ 0, 1, 2, ..., 61, 62, 63]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(64, 64), nnz=4096,
layout=torch.sparse_csr)
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, x: "f32[64, 64]"):
# File: /home/vimal/Edge_ai/ss_sparsity/expr_ss_sparsity.py:13 in forward, code: return x.sum()
sum_1: "f32[]" = torch.ops.aten.sum.default(x); x = None
return (sum_1,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='sum_1'), target=None)])
Range constraints: {}
```
### Versions
**2.6.0+cu124** is the torch version I'm using.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,004,290,133
|
[Easy] Optimize container.py typing
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,004,179,348
|
[AMD] Remove fbcode limit for uuid
|
xw285cornell
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: We're now on a later ROCm version, so it's OK to add uuid back.
Test Plan: sandcastle
Differential Revision: D73240086
| true
|
3,004,142,595
|
[Inductor] Keep quiet if compile without CUDA support
|
shink
|
closed
|
[
"topic: not user facing",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
Fixes #151650
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,004,139,950
|
[Inductor] Error getting cuda arch: Torch not compiled with CUDA enabled
|
shink
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
If you run a model with `torch.compile` in a non-CUDA env, you will get an error:
```
Error getting cuda arch: Torch not compiled with CUDA enabled
```
Here's a simple case that can repro this issue:
```python
@torch.compile(backend="inductor")
def fn(x, y):
return x + y
x = torch.randn(10)
y = torch.randn(10)
print(f"cuda is compiled: {torch.cuda._is_compiled()}")
fn(x, y)
```
### Versions
main
| true
|
3,004,071,273
|
Inductor pattern matcher replaces aten.reshape with aten.view in pattern
|
ProExpertProg
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"vllm-compile"
] | 17
|
NONE
|
### 🐛 Describe the bug
Based on my rudimentary debugging, it seems that the pattern matcher records a `torch.ops.aten.reshape.default` in the pattern as an `aten.view.default`. Hence, when a reshape is found in the fx.Graph, it is not matched.
In the example below, we compile the pattern itself, which should obviously match, but it doesn't.
```python
from typing import Callable, List
import torch
from torch._inductor.pattern_matcher import register_replacement, PatternMatcherPass, fwd_only
def pattern(x, y):
c = torch.mm(x, y)
d = torch.ops.aten.reshape.default(c, [-1, 4, 4])
return d.relu()
def replacement(x, y):
c = torch.mm(x, y)
return torch.ops.aten.reshape.default(c.relu(), [-1, 4, 4])
inputs = [
torch.empty(5, 16, device="cuda"),
torch.empty(16, 16, device="cuda"),
]
patterns = PatternMatcherPass()
register_replacement(pattern, replacement, inputs, fwd_only, patterns)
def pg_pass(graph: torch.fx.Graph):
count = patterns.apply(graph)
print(f"Count: {count}")
def custom_backend(graph: torch.fx.GraphModule, example_inputs: List[torch.Tensor]) -> Callable:
from torch._inductor import config
current_config = config.shallow_copy_dict()
from torch._inductor.compile_fx import compile_fx
current_config['post_grad_custom_post_pass'] = pg_pass
return compile_fx(graph, example_inputs, config_patches=current_config)
func = torch.compile(pattern, backend=custom_backend)
# The line below did not work somehow?
# func = torch.compile(pattern, backend="inductor", options={"post_grad_custom_post_pass":pg_pass})
func(*inputs)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+rocm6.2.4
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41134-65d174c3e
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~22.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41134
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 104
On-line CPU(s) list: 0-103
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8470
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51
NUMA node1 CPU(s): 52-103
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] pytorch-triton-rocm==3.2.0
[pip3] torch==2.6.0+rocm6.2.4
[pip3] torchaudio==2.6.0+rocm6.2.4
[pip3] torchvision==0.21.0+rocm6.2.4
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519
| true
|
3,004,058,353
|
UnsupportedOperatorError: Exporting the operator 'aten::searchsorted' to ONNX opset version 20 is not supported.
|
ElinLiu0
|
closed
|
[
"module: onnx",
"triaged"
] | 9
|
NONE
|
### 🚀 The feature, motivation and pitch
When exporting a `torch.nn.Module` that uses `torch.searchsorted()`, it raises:
```python
UnsupportedOperatorError: Exporting the operator 'aten::searchsorted' to ONNX opset version 20 is not supported.Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
```
I really need this function to work so that my model can be exported to ONNX format.
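A minimal repro sketch of the failure described above (the module and shapes are assumptions); the TorchScript-based exporter hits the unsupported-operator error, and the newer dynamo-based exporter may be worth trying as an alternative.
```python
import torch

class Bucketize(torch.nn.Module):
    def forward(self, boundaries, values):
        return torch.searchsorted(boundaries, values)

boundaries = torch.arange(0.0, 10.0)
values = torch.rand(4) * 10
# Raises UnsupportedOperatorError with the legacy TorchScript exporter:
torch.onnx.export(Bucketize(), (boundaries, values), "bucketize.onnx", opset_version=20)
```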
### Alternatives
_No response_
### Additional context
_No response_
| true
|
3,004,055,244
|
Update link to NVIDIA cuDNN Support Matrix
|
radeksm
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
NONE
|
Fortunately or unfortunately, the generic version-agnostic cuDNN link doesn't work, and NVIDIA now publishes supported hardware per cuDNN version.
| true
|
3,004,000,027
|
[Inductor] Test ND block pointers with dynamic shapes
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
With ND tiling, we can get multi-dimensional block pointers with dynamic shapes. This is an important capability, but I couldn't find any CI tests for it. This PR adds a couple of tests checking that we get the expected block pointers with dynamic shapes, both for pointwise and reduction kernels.
Example kernels:
```
@triton.jit
def triton_poi_fused_div_0(in_ptr0, out_ptr0, ks0, ks1, ynumel, xnumel, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
yoffset = (tl.program_id(1) + tl.program_id(2) * tl.num_programs(1)) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[:, None]
ymask = yindex < ynumel
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[None, :]
xmask = xindex < xnumel
x1 = xindex
y0 = yindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[ks0, ks0], strides=[ks1, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), boundary_check=[0, 1])
tmp1 = (tmp0 / tmp0)
tl.store(tl.make_block_ptr(out_ptr0, shape=[ks0, ks0], strides=[ks0, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp1, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
@triton.jit
def triton_red_fused_prod_0(in_ptr0, out_ptr0, ks0, ks1, xnumel, r0_numel, r1_numel, XBLOCK : tl.constexpr, R0_BLOCK : tl.constexpr, R1_BLOCK : tl.constexpr):
xnumel = 1
rnumel = r0_numel * r1_numel
RBLOCK: tl.constexpr = R0_BLOCK*R1_BLOCK
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:, None, None]
xmask = tl.full([XBLOCK, R0_BLOCK, R1_BLOCK], True, tl.int1)
r0_base = tl.arange(0, R0_BLOCK)[None, :, None]
r1_base = tl.arange(0, R1_BLOCK)[None, None, :]
rbase = r1_base + r0_base*r1_numel
block_ptr0 = tl.make_block_ptr(in_ptr0, shape=[ks0, ks0], strides=[ks1, 1], block_shape=[R0_BLOCK, R1_BLOCK], order=[1, 0], offsets=[0, 0])
_tmp2 = tl.full([XBLOCK, R0_BLOCK, R1_BLOCK], 1, tl.float32)
for r0_offset in range(0, r0_numel, R0_BLOCK):
r0_index = r0_offset + r0_base
r0_mask = r0_index < r0_numel
for r1_offset in range(0, r1_numel, R1_BLOCK):
r1_index = r1_offset + r1_base
r1_mask = r1_index < r1_numel
roffset = r1_offset + r0_offset*r1_numel
rindex = r1_index + r0_index*r1_numel
r0_0 = r0_index
r1_1 = r1_index
tmp0 = tl.load(block_ptr0, boundary_check=[0, 1], padding_option='zero', eviction_policy='evict_first')[None, :, :]
tmp1 = tl.broadcast_to(tmp0, [XBLOCK, R0_BLOCK, R1_BLOCK])
tmp3 = _tmp2 * tmp1
_tmp2 = tl.where(r0_mask & r1_mask, tmp3, _tmp2)
block_ptr0 = tl.advance(block_ptr0, [0, R1_BLOCK])
block_ptr0 = tl.advance(block_ptr0, [R0_BLOCK, (-1)*R1_BLOCK*(triton_helpers.div_floor_integer((-1) + ks0 + R1_BLOCK, R1_BLOCK))])
tmp4 = tl.reshape(_tmp2, [XBLOCK, RBLOCK])
tmp2 = triton_helpers.prod(tmp4, 1)[:, None, None]
tl.store(out_ptr0 + (tl.full([XBLOCK, 1, 1], 0, tl.int32)), tmp2, None)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,921,175
|
[demo] Verify test runner integration
|
codeJRV
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 1
|
NONE
|
**Fixes #000. Do NOT Merge, this is just a test PR to test Nvidia CI-CD Runners**
@ZainRizvi PTAL. I'm out tomorrow afternoon, so we can test either tomorrow morning or on Monday afternoon.
cc: @zhe-thoughts
| true
|
3,003,885,233
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40759171802).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,885,213
|
DISABLED test_captured_buffers_all_dims_bfloat16_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_captured_buffers_all_dims_bfloat16_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40761172832).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_captured_buffers_all_dims_bfloat16_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1458, in test_captured_buffers_all_dims
self.run_test(all_bias, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1086 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 8, in forward
add_1 = torch.ops.aten.add.Tensor(add, index_1); add = index_1 = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 272.12 MiB is free. Including non-PyTorch memory, this process has 21.77 GiB memory in use. Of the allocated memory 6.73 GiB is allocated by PyTorch, and 14.78 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_captured_buffers_all_dims_bfloat16_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,885,157
|
DISABLED test_builtin_score_mods_dynamic_float16_score_mask_mod4_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_dynamic_float16_score_mask_mod4_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40759167539).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_dynamic_float16_score_mask_mod4_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,885,156
|
DISABLED test_non_equal_head_dims_score_mod3_bfloat16_head_dims0_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod3_bfloat16_head_dims0_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40761172832).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod3_bfloat16_head_dims0_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,884,896
|
DISABLED test_non_equal_head_dims_score_mod7_float16_head_dims1_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod7_float16_head_dims1_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40759366269).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod7_float16_head_dims1_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,884,869
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE_256_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE_256_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40757687396).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE_256_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,003,866,023
|
Modified Dr. CI so it could detect runner disconnection failures
|
ParamThakkar123
|
closed
|
[
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Fixes #66902
| true
|
3,003,859,775
|
[Intel GPU][Inductor] Fallback embedding_dense_backward on XPU
|
jianyizh
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"keep-going",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 19
|
CONTRIBUTOR
|
Reopen of #146888; the modification now only affects the XPU device. We do not want to decompose embedding_dense_backward for torch.compile. Current XPU devices have hardware limitations on atomic ops, so we fall back to eager, where a sort-based implementation of this op can be used. hf_T5 amp bf16 training in torchbench gets a 2x improvement on Max 1550. I also align with CUDA on the gelu decomposition in _addmm_activation.
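A minimal sketch of the workload in question (shapes are arbitrary assumptions); the backward of `nn.Embedding` lowers to `aten.embedding_dense_backward`, which this PR keeps as an eager fallback on XPU instead of decomposing it.
```python
import torch

emb = torch.nn.Embedding(30000, 512)
compiled = torch.compile(emb)

idx = torch.randint(0, 30000, (8, 128))
out = compiled(idx)
# The backward pass exercises aten.embedding_dense_backward.
out.sum().backward()
print(emb.weight.grad.shape)
```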
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @gujinghui @fengyuan14 @guangyey
| true
|