| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,908,701,098
|
bootcamp task for DTensor
|
XilunWu
|
open
|
[
"oncall: distributed",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148932
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,908,685,839
|
Enable lazy tests
|
cyyever
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,908,674,564
|
[cond] don't trace fw and bw graph in autograd key
|
ydwu4
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 16
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148930
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,668,043
|
[cutlass backend] Add addmm and bmm tests for AOTI
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148929
Still to do:
1. Expand addmm tests to cover all 4 shapes
2. Add dynamic shape support.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,668,007
|
[Codemod][AddExplicitStrictExportArg] caffe2/test/inductor
|
gmagogsfm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Differential Revision: D70908557
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,657,480
|
default cudagraphable policy for custom op
|
BoyuanFeng
|
open
|
[
"triaged",
"module: cuda graphs",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Currently PyTorch assumes a custom op is cudagraphable. Sometimes this is wrong (repro below). Since the details of a custom op are opaque to the compiler, it may contain something that cudagraphs cannot support (e.g., CPU ops), and the compiler cannot detect that. From a correctness perspective, it might be good to default custom ops to non-cudagraphable.
On the other hand, many custom ops contain only CUDA ops, so defaulting to non-cudagraphable may lead to performance regressions for them.
```python
import torch
from torch import Tensor

@torch.library.custom_op("mylib::foo", mutates_args={})
def foo(x: Tensor) -> Tensor:
    # weird clone :P
    # return x.clone()
    # round-trips through CPU, which CUDA graph capture cannot handle
    return x.cpu().cuda()

@foo.register_fake
def _(x):
    return x.cpu().cuda()

@torch.compile(mode="max-autotune")
def f(x):
    return foo(x)

x = torch.tensor([0., 1, 2, 3, 4], device="cuda")
print(f(x))
x = torch.randn(5, device="cuda")
print(f(x))
```
cc @mcarilli @ezyang @eellison @penguinwu @chauhang @zou3519
### Versions
PyTorch version: 2.7.0a0+git4a2173d
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.1.1
/usr/lib64/libcudnn_adv.so.9.1.1
/usr/lib64/libcudnn_cnn.so.9.1.1
/usr/lib64/libcudnn_engines_precompiled.so.9.1.1
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.1
/usr/lib64/libcudnn_graph.so.9.1.1
/usr/lib64/libcudnn_heuristic.so.9.1.1
/usr/lib64/libcudnn_ops.so.9.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 184
On-line CPU(s) list: 0-183
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 184
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 11.5 MiB (184 instances)
L1i cache: 11.5 MiB (184 instances)
L2 cache: 92 MiB (184 instances)
L3 cache: 2.9 GiB (184 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-183
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.7.0a0+git4a2173d
[pip3] torchao==0.7.0
[pip3] torchvision==0.20.0a0+120e22b
[conda] blas 1.0 mkl
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.10 py310h5eee18b_0
[conda] mkl_random 1.2.7 py310h1128e8f_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.7.0a0+git4a2173d dev_0 <develop>
[conda] torchao 0.7.0 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchvision 0.22.0a0+7b2addf dev_0 <develop>
| true
|
2,908,652,276
|
Only create new tensors in `nn.Module.to_empty` if source tensor is not already on target device
|
ringohoffman
|
closed
|
[
"open source"
] | 2
|
CONTRIBUTOR
|
Fixes #148843
Some `Module`s are only partially initialized on the meta device, e.g. with [`accelerate.init_empty_weights()`](https://huggingface.co/docs/accelerate/v0.11.0/en/big_modeling#accelerate.init_empty_weights).
To avoid having to re-initialize non-persistent buffers that would otherwise be destroyed by `nn.Module.to_empty`, it can instead skip creating empty tensors when the source tensor is already on the target device.
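As an illustration, a minimal sketch of the intended behavior (the module, buffer name, and shapes below are made up for the example, and a CUDA device is assumed to be available):
```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.register_buffer("scale", torch.ones(4), persistent=False)

# parameters and buffers start out on the meta device (no real storage)
with torch.device("meta"):
    m = M()

# the non-persistent buffer is materialized on the target device ahead of time
m.scale = torch.ones(4, device="cuda")

# with the behavior described above, to_empty() skips `scale` because it is
# already on the target device, so its values survive; only the meta tensors
# get fresh (uninitialized) storage, to be loaded from a checkpoint afterwards
m = m.to_empty(device="cuda")
print(m.scale)          # still all ones
print(m.linear.weight)  # uninitialized cuda tensor
```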
cc: @awgu
| true
|
2,908,648,115
|
[modefile free][long tail] selectify fbcode/caffe2/defs.bzl
|
jordanzoo
|
closed
|
[
"fb-exported",
"Merged",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Summary:
Replace `read_config` with `select`.
For more info, please refer to the [doc](https://docs.google.com/document/d/1e0Hvht8WEHhcRvlCAodq_R9xnAtKBrAhdyvxcAqQjCw/edit?tab=t.hl8j18gza0cv)
Test Plan: CI
Reviewed By: malfet
Differential Revision: D70267850
| true
|
2,908,640,562
|
[triton 3.3] Forward-fix mm template selection logic
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148924
Follow-up from https://github.com/pytorch/pytorch/pull/148662.
The logic from https://github.com/pytorch/pytorch/pull/148662 is incorrect; what we want is "choose the second template (the AMD-specific template) only if we're on hip AND the triton version is < 3.3". Negating it, the code should be "choose the first template if we're NOT on hip OR the triton version is >= 3.3".
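As a sanity check, a minimal sketch of the corrected predicate in plain Python (the names `is_hip` and `triton_version` are illustrative, not the actual inductor variables):
```python
# De Morgan: not (is_hip and triton < 3.3)  ==  (not is_hip) or (triton >= 3.3)
def use_first_template(is_hip: bool, triton_version: tuple[int, int]) -> bool:
    return (not is_hip) or triton_version >= (3, 3)

assert use_first_template(False, (3, 2))      # not on hip -> first template
assert not use_first_template(True, (3, 2))   # hip + triton < 3.3 -> AMD-specific template
assert use_first_template(True, (3, 3))       # hip + triton >= 3.3 -> first template
```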
Tested locally to verify that it fixes the test.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,628,503
|
[dynamo][guards] Do not ID_MATCH on numpy tensors
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148923
Might help with https://github.com/pytorch/pytorch/issues/148535
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,908,604,878
|
partitioner: treat inputs with static indices as free to save
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/141881
internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1538435030128036/?comment_id=1556782068293332
I tried to make a test case out of the code linked in that github issue. The setup + bad outcome today was as follows:
(1) you have a graph where one of its inputs is a model weight
(2) in the backward, you do some downstream compute on `weight`, `tmp = f(weight)`, where (a) `tmp` is of a smaller size than `weight`, and (b) the compute is trivially fusible into other kernels (so the partitioner thinks it is "free" to recompute)
(3) since `sizeof(tmp) < sizeof(weight)` and the recompute is free, the partitioner decides that it would be strictly better to save `tmp` for backward instead of `weight`
(4) this is bad: `weight` is a static tensor that sits in GPU memory for the duration of your entire training loop, so saving it for backward has no negative impact on peak memory. Since we're saving `tmp` instead, we end up unnecessarily increasing peak memory. In particular, the repro involves an autograd.Function in eager that saves the weight for backward, so we end up hitting higher peak memory in compile (a minimal sketch of this pattern is shown below).
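A minimal sketch of that eager pattern (hypothetical names and dtypes, not the actual test case):
```python
import torch

class LowPrecisionLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight):
        # eager saves the weight itself for backward
        ctx.save_for_backward(x, weight)
        return x @ weight.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        # trivially-fusible downstream compute on `weight` that is only needed
        # in the backward; under torch.compile the partitioner may prefer to
        # save this smaller low-precision tensor instead of `weight`
        tmp = weight.to(torch.bfloat16)
        grad_x = grad_out @ tmp.to(grad_out.dtype)
        grad_w = grad_out.t() @ x
        return grad_x, grad_w
```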
The fix I'm trying out in this PR is to tell the partitioner that graph inputs that we know have static addresses (aka parameters) are "free" to save.
Below is the fw/bw graph before my change, where you can see that instead of `primals_2` being saved for backward, we save `t_8` (which involves some low-precision downstream compute on `primals_2` that is only needed in the backward).
```
===== Forward graph 0 =====
/data/users/hirsheybar/checkout2/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "bf16[64, 64][64, 1]cuda:0", primals_2: "bf16[64, 64][64, 1]cuda:0", primals_3: "bf16[64][1]cuda:0"):
# File: /data/users/hirsheybar/checkout2/pytorch/test/dynamo/test_repros.py:6943 in forward, code: out = Fp8LinearFn.apply(
abs_1: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.abs.default(primals_1)
view: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(abs_1, [64, 1, 64]); abs_1 = None
amax: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.amax.default(view, [-1]); view = None
abs_2: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.abs.default(primals_2)
view_1: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(abs_2, [64, 1, 64]); abs_2 = None
amax_1: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.amax.default(view_1, [-1]); view_1 = None
_to_copy: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(amax, dtype = torch.float32); amax = None
clamp: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy, 1e-12); _to_copy = None
div: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp, 448.0); clamp = None
reciprocal: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.reciprocal.default(div)
view_2: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(primals_1, [64, 1, 64])
view_3: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_2, [64, 1, 1, 64]); view_2 = None
slice_1: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(reciprocal, 0, 0, 9223372036854775807); reciprocal = None
unsqueeze: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_1, 1); slice_1 = None
slice_2: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze, 2, 0, 9223372036854775807); unsqueeze = None
unsqueeze_1: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_2, 3); slice_2 = None
mul: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_3, unsqueeze_1); view_3 = unsqueeze_1 = None
view_4: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul, [64, 1, 64]); mul = None
view_5: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_4, [64, 64]); view_4 = None
_to_copy_1: "f8e4m3fn[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_5, dtype = torch.float8_e4m3fn); view_5 = None
_to_copy_2: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(amax_1, dtype = torch.float32)
clamp_1: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy_2, 1e-12); _to_copy_2 = None
div_1: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp_1, 448.0); clamp_1 = None
reciprocal_1: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.reciprocal.default(div_1)
view_6: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(primals_2, [64, 1, 64])
view_7: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_6, [64, 1, 1, 64]); view_6 = None
slice_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(reciprocal_1, 0, 0, 9223372036854775807); reciprocal_1 = None
unsqueeze_2: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_3, 1); slice_3 = None
slice_4: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_2, 2, 0, 9223372036854775807); unsqueeze_2 = None
unsqueeze_3: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_4, 3); slice_4 = None
mul_1: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_7, unsqueeze_3); view_7 = unsqueeze_3 = None
view_8: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_1, [64, 1, 64]); mul_1 = None
view_9: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_8, [64, 64]); view_8 = None
_to_copy_3: "f8e4m3fn[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_9, dtype = torch.float8_e4m3fn); view_9 = None
t: "f32[1, 64][1, 1]cuda:0" = torch.ops.aten.t.default(div_1); div_1 = None
new_ones: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.new_ones.default(div, [1, 1], pin_memory = False)
new_ones_1: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.new_ones.default(t, [1, 1], pin_memory = False)
t_2: "f8e4m3fn[64, 64][1, 64]cuda:0" = torch.ops.aten.t.default(_to_copy_3); _to_copy_3 = None
t_3: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.t.default(new_ones_1); new_ones_1 = None
_scaled_mm: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten._scaled_mm.default(_to_copy_1, t_2, new_ones, t_3, None, None, torch.bfloat16); _to_copy_1 = t_2 = new_ones = t_3 = None
view_10: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(_scaled_mm, [64, 1, 64]); _scaled_mm = None
view_11: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_10, [64, 1, 1, 64]); view_10 = None
slice_5: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(div, 0, 0, 9223372036854775807); div = None
unsqueeze_4: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_5, 1); slice_5 = None
slice_6: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_4, 2, 0, 9223372036854775807); unsqueeze_4 = None
unsqueeze_5: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_6, 3); slice_6 = None
mul_2: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_11, unsqueeze_5); view_11 = unsqueeze_5 = None
view_12: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_2, [64, 1, 64]); mul_2 = None
view_13: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_12, [64, 64]); view_12 = None
view_14: "f32[1, 64, 64][4096, 64, 1]cuda:0" = torch.ops.aten.view.default(view_13, [1, 64, 64]); view_13 = None
view_15: "f32[1, 64, 64, 1][4096, 64, 1, 1]cuda:0" = torch.ops.aten.view.default(view_14, [1, 64, 64, 1]); view_14 = None
slice_7: "f32[1, 64][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(t, 0, 0, 9223372036854775807); t = None
unsqueeze_6: "f32[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_7, 1); slice_7 = None
slice_8: "f32[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_6, 2, 0, 9223372036854775807); unsqueeze_6 = None
unsqueeze_7: "f32[1, 1, 64, 1][1, 64, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_8, 3); slice_8 = None
mul_3: "f32[1, 64, 64, 1][4096, 64, 1, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_15, unsqueeze_7); view_15 = unsqueeze_7 = None
view_16: "f32[64, 64, 1][64, 1, 1]cuda:0" = torch.ops.aten.view.default(mul_3, [64, 64, 1]); mul_3 = None
view_17: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_16, [64, 64]); view_16 = None
_to_copy_4: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_17, dtype = torch.bfloat16); view_17 = None
add: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.add.Tensor(_to_copy_4, primals_3); _to_copy_4 = primals_3 = None
t_4: "bf16[64, 64][1, 64]cuda:0" = torch.ops.aten.t.default(primals_2); primals_2 = None
clone: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.clone.default(t_4, memory_format = torch.contiguous_format); t_4 = None
t_5: "bf16[1, 64][1, 1]cuda:0" = torch.ops.aten.t.default(amax_1); amax_1 = None
view_21: "bf16[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.view.default(t_5, [1, 1, 64]); t_5 = None
amax_3: "bf16[1, 1][1, 1]cuda:0" = torch.ops.aten.amax.default(view_21, [-1]); view_21 = None
unsqueeze_8: "bf16[1, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(amax_3, 1); amax_3 = None
expand: "bf16[1, 64, 1][1, 0, 1]cuda:0" = torch.ops.aten.expand.default(unsqueeze_8, [1, 64, 1])
clone_1: "bf16[1, 64, 1][64, 1, 1]cuda:0" = torch.ops.aten.clone.default(expand, memory_format = torch.contiguous_format); expand = None
view_22: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.view.default(clone_1, [64, 1]); clone_1 = None
_to_copy_7: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(view_22, dtype = torch.float32); view_22 = None
clamp_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy_7, 1e-12); _to_copy_7 = None
div_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp_3, 448.0); clamp_3 = None
reciprocal_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.reciprocal.default(div_3); div_3 = None
view_27: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(clone, [64, 1, 64]); clone = None
view_28: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_27, [64, 1, 1, 64]); view_27 = None
slice_11: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(reciprocal_3, 0, 0, 9223372036854775807); reciprocal_3 = None
unsqueeze_11: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_11, 1); slice_11 = None
slice_12: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_11, 2, 0, 9223372036854775807); unsqueeze_11 = None
unsqueeze_12: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_12, 3); slice_12 = None
mul_5: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_28, unsqueeze_12); view_28 = unsqueeze_12 = None
view_29: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_5, [64, 1, 64]); mul_5 = None
view_30: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_29, [64, 64]); view_29 = None
_to_copy_8: "f8e4m3fn[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_30, dtype = torch.float8_e4m3fn); view_30 = None
t_8: "f8e4m3fn[64, 64][1, 64]cuda:0" = torch.ops.aten.t.default(_to_copy_8); _to_copy_8 = None
# No stacktrace found for following nodes
view_39: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(add, [64, 64]); add = None
return (view_39, primals_1, unsqueeze_8, t_8)
INFO: TRACED GRAPH
===== Backward graph 0 =====
<eval_with_key>.1 class GraphModule(torch.nn.Module):
def forward(self, primals_1: "bf16[64, 64][64, 1]cuda:0", unsqueeze_8: "bf16[1, 1, 1][1, 1, 1]cuda:0", t_8: "f8e4m3fn[64, 64][1, 64]cuda:0", tangents_1: "bf16[64, 64][64, 1]cuda:0"):
# File: /data/users/hirsheybar/checkout2/pytorch/test/dynamo/test_repros.py:6946 in forward, code: out = out.unflatten(0, input.shape[:-1])
view_19: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(tangents_1, [64, 64]); tangents_1 = None
# File: /data/users/hirsheybar/checkout2/pytorch/test/dynamo/test_repros.py:6943 in forward, code: out = Fp8LinearFn.apply(
abs_3: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.abs.default(view_19)
view_20: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(abs_3, [64, 1, 64]); abs_3 = None
amax_2: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.amax.default(view_20, [-1]); view_20 = None
expand: "bf16[1, 64, 1][1, 0, 1]cuda:0" = torch.ops.aten.expand.default(unsqueeze_8, [1, 64, 1]); unsqueeze_8 = None
clone_1: "bf16[1, 64, 1][64, 1, 1]cuda:0" = torch.ops.aten.clone.default(expand, memory_format = torch.contiguous_format); expand = None
view_22: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.view.default(clone_1, [64, 1]); clone_1 = None
_to_copy_5: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(amax_2, dtype = torch.float32); amax_2 = None
clamp_2: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy_5, 1e-12); _to_copy_5 = None
div_2: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp_2, 448.0); clamp_2 = None
reciprocal_2: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.reciprocal.default(div_2)
view_23: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_19, [64, 1, 64])
view_24: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_23, [64, 1, 1, 64]); view_23 = None
slice_9: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(reciprocal_2, 0, 0, 9223372036854775807); reciprocal_2 = None
unsqueeze_9: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_9, 1); slice_9 = None
slice_10: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_9, 2, 0, 9223372036854775807); unsqueeze_9 = None
unsqueeze_10: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_10, 3); slice_10 = None
mul_4: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_24, unsqueeze_10); view_24 = unsqueeze_10 = None
view_25: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_4, [64, 1, 64]); mul_4 = None
view_26: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_25, [64, 64]); view_25 = None
_to_copy_6: "f8e4m3fn[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_26, dtype = torch.float8_e4m3fn); view_26 = None
_to_copy_7: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(view_22, dtype = torch.float32); view_22 = None
clamp_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy_7, 1e-12); _to_copy_7 = None
div_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp_3, 448.0); clamp_3 = None
t_6: "f32[1, 64][1, 1]cuda:0" = torch.ops.aten.t.default(div_3); div_3 = None
new_ones_2: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.new_ones.default(div_2, [1, 1], pin_memory = False)
new_ones_3: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.new_ones.default(t_6, [1, 1], pin_memory = False)
t_9: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.t.default(new_ones_3); new_ones_3 = None
_scaled_mm_1: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten._scaled_mm.default(_to_copy_6, t_8, new_ones_2, t_9, None, None, torch.bfloat16); _to_copy_6 = t_8 = new_ones_2 = t_9 = None
view_31: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(_scaled_mm_1, [64, 1, 64]); _scaled_mm_1 = None
view_32: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_31, [64, 1, 1, 64]); view_31 = None
slice_13: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(div_2, 0, 0, 9223372036854775807); div_2 = None
unsqueeze_13: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_13, 1); slice_13 = None
slice_14: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_13, 2, 0, 9223372036854775807); unsqueeze_13 = None
unsqueeze_14: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_14, 3); slice_14 = None
mul_6: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_32, unsqueeze_14); view_32 = unsqueeze_14 = None
view_33: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_6, [64, 1, 64]); mul_6 = None
view_34: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_33, [64, 64]); view_33 = None
view_35: "f32[1, 64, 64][4096, 64, 1]cuda:0" = torch.ops.aten.view.default(view_34, [1, 64, 64]); view_34 = None
view_36: "f32[1, 64, 64, 1][4096, 64, 1, 1]cuda:0" = torch.ops.aten.view.default(view_35, [1, 64, 64, 1]); view_35 = None
slice_15: "f32[1, 64][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(t_6, 0, 0, 9223372036854775807); t_6 = None
unsqueeze_15: "f32[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_15, 1); slice_15 = None
slice_16: "f32[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_15, 2, 0, 9223372036854775807); unsqueeze_15 = None
unsqueeze_16: "f32[1, 1, 64, 1][1, 64, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_16, 3); slice_16 = None
mul_7: "f32[1, 64, 64, 1][4096, 64, 1, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_36, unsqueeze_16); view_36 = unsqueeze_16 = None
view_37: "f32[64, 64, 1][64, 1, 1]cuda:0" = torch.ops.aten.view.default(mul_7, [64, 64, 1]); mul_7 = None
view_38: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_37, [64, 64]); view_37 = None
_to_copy_9: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_38, dtype = torch.bfloat16); view_38 = None
t_10: "bf16[64, 64][1, 64]cuda:0" = torch.ops.aten.t.default(view_19)
mm: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.mm.default(t_10, primals_1); t_10 = primals_1 = None
sum_1: "bf16[64][1]cuda:0" = torch.ops.aten.sum.dim_IntList(view_19, [0]); view_19 = None
return (_to_copy_9, mm, sum_1)
```
With the change, we save `primals_2` for backward instead:
```
===== Forward graph 0 =====
/data/users/hirsheybar/checkout2/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "bf16[64, 64][64, 1]cuda:0", primals_2: "bf16[64, 64][64, 1]cuda:0", primals_3: "bf16[64][1]cuda:0"):
# File: /data/users/hirsheybar/checkout2/pytorch/test/dynamo/test_repros.py:6943 in forward, code: out = Fp8LinearFn.apply(
abs_1: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.abs.default(primals_1)
view: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(abs_1, [64, 1, 64]); abs_1 = None
amax: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.amax.default(view, [-1]); view = None
abs_2: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.abs.default(primals_2)
view_1: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(abs_2, [64, 1, 64]); abs_2 = None
amax_1: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.amax.default(view_1, [-1]); view_1 = None
_to_copy: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(amax, dtype = torch.float32); amax = None
clamp: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy, 1e-12); _to_copy = None
div: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp, 448.0); clamp = None
reciprocal: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.reciprocal.default(div)
view_2: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(primals_1, [64, 1, 64])
view_3: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_2, [64, 1, 1, 64]); view_2 = None
slice_1: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(reciprocal, 0, 0, 9223372036854775807); reciprocal = None
unsqueeze: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_1, 1); slice_1 = None
slice_2: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze, 2, 0, 9223372036854775807); unsqueeze = None
unsqueeze_1: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_2, 3); slice_2 = None
mul: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_3, unsqueeze_1); view_3 = unsqueeze_1 = None
view_4: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul, [64, 1, 64]); mul = None
view_5: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_4, [64, 64]); view_4 = None
_to_copy_1: "f8e4m3fn[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_5, dtype = torch.float8_e4m3fn); view_5 = None
_to_copy_2: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(amax_1, dtype = torch.float32)
clamp_1: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy_2, 1e-12); _to_copy_2 = None
div_1: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp_1, 448.0); clamp_1 = None
reciprocal_1: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.reciprocal.default(div_1)
view_6: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(primals_2, [64, 1, 64])
view_7: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_6, [64, 1, 1, 64]); view_6 = None
slice_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(reciprocal_1, 0, 0, 9223372036854775807); reciprocal_1 = None
unsqueeze_2: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_3, 1); slice_3 = None
slice_4: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_2, 2, 0, 9223372036854775807); unsqueeze_2 = None
unsqueeze_3: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_4, 3); slice_4 = None
mul_1: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_7, unsqueeze_3); view_7 = unsqueeze_3 = None
view_8: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_1, [64, 1, 64]); mul_1 = None
view_9: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_8, [64, 64]); view_8 = None
_to_copy_3: "f8e4m3fn[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_9, dtype = torch.float8_e4m3fn); view_9 = None
t: "f32[1, 64][1, 1]cuda:0" = torch.ops.aten.t.default(div_1); div_1 = None
new_ones: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.new_ones.default(div, [1, 1], pin_memory = False)
new_ones_1: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.new_ones.default(t, [1, 1], pin_memory = False)
t_2: "f8e4m3fn[64, 64][1, 64]cuda:0" = torch.ops.aten.t.default(_to_copy_3); _to_copy_3 = None
t_3: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.t.default(new_ones_1); new_ones_1 = None
_scaled_mm: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten._scaled_mm.default(_to_copy_1, t_2, new_ones, t_3, None, None, torch.bfloat16); _to_copy_1 = t_2 = new_ones = t_3 = None
view_10: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(_scaled_mm, [64, 1, 64]); _scaled_mm = None
view_11: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_10, [64, 1, 1, 64]); view_10 = None
slice_5: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(div, 0, 0, 9223372036854775807); div = None
unsqueeze_4: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_5, 1); slice_5 = None
slice_6: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_4, 2, 0, 9223372036854775807); unsqueeze_4 = None
unsqueeze_5: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_6, 3); slice_6 = None
mul_2: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_11, unsqueeze_5); view_11 = unsqueeze_5 = None
view_12: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_2, [64, 1, 64]); mul_2 = None
view_13: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_12, [64, 64]); view_12 = None
view_14: "f32[1, 64, 64][4096, 64, 1]cuda:0" = torch.ops.aten.view.default(view_13, [1, 64, 64]); view_13 = None
view_15: "f32[1, 64, 64, 1][4096, 64, 1, 1]cuda:0" = torch.ops.aten.view.default(view_14, [1, 64, 64, 1]); view_14 = None
slice_7: "f32[1, 64][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(t, 0, 0, 9223372036854775807); t = None
unsqueeze_6: "f32[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_7, 1); slice_7 = None
slice_8: "f32[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_6, 2, 0, 9223372036854775807); unsqueeze_6 = None
unsqueeze_7: "f32[1, 1, 64, 1][1, 64, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_8, 3); slice_8 = None
mul_3: "f32[1, 64, 64, 1][4096, 64, 1, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_15, unsqueeze_7); view_15 = unsqueeze_7 = None
view_16: "f32[64, 64, 1][64, 1, 1]cuda:0" = torch.ops.aten.view.default(mul_3, [64, 64, 1]); mul_3 = None
view_17: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_16, [64, 64]); view_16 = None
_to_copy_4: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_17, dtype = torch.bfloat16); view_17 = None
add: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.add.Tensor(_to_copy_4, primals_3); _to_copy_4 = primals_3 = None
t_5: "bf16[1, 64][1, 1]cuda:0" = torch.ops.aten.t.default(amax_1); amax_1 = None
view_21: "bf16[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.view.default(t_5, [1, 1, 64]); t_5 = None
amax_3: "bf16[1, 1][1, 1]cuda:0" = torch.ops.aten.amax.default(view_21, [-1]); view_21 = None
unsqueeze_8: "bf16[1, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(amax_3, 1); amax_3 = None
# No stacktrace found for following nodes
view_39: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(add, [64, 64]); add = None
return (view_39, primals_1, primals_2, unsqueeze_8)
INFO: TRACED GRAPH
===== Backward graph 0 =====
<eval_with_key>.1 class GraphModule(torch.nn.Module):
def forward(self, primals_1: "bf16[64, 64][64, 1]cuda:0", primals_2: "bf16[64, 64][64, 1]cuda:0", unsqueeze_8: "bf16[1, 1, 1][1, 1, 1]cuda:0", tangents_1: "bf16[64, 64][64, 1]cuda:0"):
# File: /data/users/hirsheybar/checkout2/pytorch/test/dynamo/test_repros.py:6946 in forward, code: out = out.unflatten(0, input.shape[:-1])
view_19: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(tangents_1, [64, 64]); tangents_1 = None
# File: /data/users/hirsheybar/checkout2/pytorch/test/dynamo/test_repros.py:6943 in forward, code: out = Fp8LinearFn.apply(
t_4: "bf16[64, 64][1, 64]cuda:0" = torch.ops.aten.t.default(primals_2); primals_2 = None
clone: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.clone.default(t_4, memory_format = torch.contiguous_format); t_4 = None
abs_3: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.abs.default(view_19)
view_20: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(abs_3, [64, 1, 64]); abs_3 = None
amax_2: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.amax.default(view_20, [-1]); view_20 = None
expand: "bf16[1, 64, 1][1, 0, 1]cuda:0" = torch.ops.aten.expand.default(unsqueeze_8, [1, 64, 1]); unsqueeze_8 = None
clone_1: "bf16[1, 64, 1][64, 1, 1]cuda:0" = torch.ops.aten.clone.default(expand, memory_format = torch.contiguous_format); expand = None
view_22: "bf16[64, 1][1, 1]cuda:0" = torch.ops.aten.view.default(clone_1, [64, 1]); clone_1 = None
_to_copy_5: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(amax_2, dtype = torch.float32); amax_2 = None
clamp_2: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy_5, 1e-12); _to_copy_5 = None
div_2: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp_2, 448.0); clamp_2 = None
reciprocal_2: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.reciprocal.default(div_2)
view_23: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_19, [64, 1, 64])
view_24: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_23, [64, 1, 1, 64]); view_23 = None
slice_9: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(reciprocal_2, 0, 0, 9223372036854775807); reciprocal_2 = None
unsqueeze_9: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_9, 1); slice_9 = None
slice_10: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_9, 2, 0, 9223372036854775807); unsqueeze_9 = None
unsqueeze_10: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_10, 3); slice_10 = None
mul_4: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_24, unsqueeze_10); view_24 = unsqueeze_10 = None
view_25: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_4, [64, 1, 64]); mul_4 = None
view_26: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_25, [64, 64]); view_25 = None
_to_copy_6: "f8e4m3fn[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_26, dtype = torch.float8_e4m3fn); view_26 = None
_to_copy_7: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten._to_copy.default(view_22, dtype = torch.float32); view_22 = None
clamp_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.clamp.default(_to_copy_7, 1e-12); _to_copy_7 = None
div_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.div.Tensor(clamp_3, 448.0); clamp_3 = None
reciprocal_3: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.reciprocal.default(div_3)
view_27: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(clone, [64, 1, 64]); clone = None
view_28: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_27, [64, 1, 1, 64]); view_27 = None
slice_11: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(reciprocal_3, 0, 0, 9223372036854775807); reciprocal_3 = None
unsqueeze_11: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_11, 1); slice_11 = None
slice_12: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_11, 2, 0, 9223372036854775807); unsqueeze_11 = None
unsqueeze_12: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_12, 3); slice_12 = None
mul_5: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_28, unsqueeze_12); view_28 = unsqueeze_12 = None
view_29: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_5, [64, 1, 64]); mul_5 = None
view_30: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_29, [64, 64]); view_29 = None
_to_copy_8: "f8e4m3fn[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_30, dtype = torch.float8_e4m3fn); view_30 = None
t_6: "f32[1, 64][1, 1]cuda:0" = torch.ops.aten.t.default(div_3); div_3 = None
new_ones_2: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.new_ones.default(div_2, [1, 1], pin_memory = False)
new_ones_3: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.new_ones.default(t_6, [1, 1], pin_memory = False)
t_8: "f8e4m3fn[64, 64][1, 64]cuda:0" = torch.ops.aten.t.default(_to_copy_8); _to_copy_8 = None
t_9: "f32[1, 1][1, 1]cuda:0" = torch.ops.aten.t.default(new_ones_3); new_ones_3 = None
_scaled_mm_1: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten._scaled_mm.default(_to_copy_6, t_8, new_ones_2, t_9, None, None, torch.bfloat16); _to_copy_6 = t_8 = new_ones_2 = t_9 = None
view_31: "bf16[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(_scaled_mm_1, [64, 1, 64]); _scaled_mm_1 = None
view_32: "bf16[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.view.default(view_31, [64, 1, 1, 64]); view_31 = None
slice_13: "f32[64, 1][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(div_2, 0, 0, 9223372036854775807); div_2 = None
unsqueeze_13: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_13, 1); slice_13 = None
slice_14: "f32[64, 1, 1][1, 1, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_13, 2, 0, 9223372036854775807); unsqueeze_13 = None
unsqueeze_14: "f32[64, 1, 1, 1][1, 1, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_14, 3); slice_14 = None
mul_6: "f32[64, 1, 1, 64][64, 64, 64, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_32, unsqueeze_14); view_32 = unsqueeze_14 = None
view_33: "f32[64, 1, 64][64, 64, 1]cuda:0" = torch.ops.aten.view.default(mul_6, [64, 1, 64]); mul_6 = None
view_34: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_33, [64, 64]); view_33 = None
view_35: "f32[1, 64, 64][4096, 64, 1]cuda:0" = torch.ops.aten.view.default(view_34, [1, 64, 64]); view_34 = None
view_36: "f32[1, 64, 64, 1][4096, 64, 1, 1]cuda:0" = torch.ops.aten.view.default(view_35, [1, 64, 64, 1]); view_35 = None
slice_15: "f32[1, 64][1, 1]cuda:0" = torch.ops.aten.slice.Tensor(t_6, 0, 0, 9223372036854775807); t_6 = None
unsqueeze_15: "f32[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_15, 1); slice_15 = None
slice_16: "f32[1, 1, 64][1, 64, 1]cuda:0" = torch.ops.aten.slice.Tensor(unsqueeze_15, 2, 0, 9223372036854775807); unsqueeze_15 = None
unsqueeze_16: "f32[1, 1, 64, 1][1, 64, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(slice_16, 3); slice_16 = None
mul_7: "f32[1, 64, 64, 1][4096, 64, 1, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_36, unsqueeze_16); view_36 = unsqueeze_16 = None
view_37: "f32[64, 64, 1][64, 1, 1]cuda:0" = torch.ops.aten.view.default(mul_7, [64, 64, 1]); mul_7 = None
view_38: "f32[64, 64][64, 1]cuda:0" = torch.ops.aten.view.default(view_37, [64, 64]); view_37 = None
_to_copy_9: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten._to_copy.default(view_38, dtype = torch.bfloat16); view_38 = None
t_10: "bf16[64, 64][1, 64]cuda:0" = torch.ops.aten.t.default(view_19)
mm: "bf16[64, 64][64, 1]cuda:0" = torch.ops.aten.mm.default(t_10, primals_1); t_10 = primals_1 = None
sum_1: "bf16[64][1]cuda:0" = torch.ops.aten.sum.dim_IntList(view_19, [0]); view_19 = None
return (_to_copy_9, mm, sum_1)
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149411
* __->__ #148922
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,591,296
|
fix cuDNN SDPA meta registration
|
eqy
|
closed
|
[
"module: cudnn",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: sdpa"
] | 6
|
COLLABORATOR
|
Update `cuDNN SDPA` meta registration to match the memory layout behavior in: https://github.com/pytorch/pytorch/pull/138354
cc @csarofeen @ptrblck @xwang233 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,590,506
|
Rewrite cpp extension tests to not be crazy
|
janeyx99
|
open
|
[
"module: cpp-extensions",
"module: tests",
"triaged",
"better-engineering"
] | 3
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Today, adding a test for a custom extension is painful because dependencies are weirdly tangled, different extensions want to test different things, and the prior attempt to consolidate all the building/installing into run_test.py is just confusing.
We also use this `python setup.py install --root ./install` command, followed by adding those weird `./install` directories to the Python path, to get things to work. I don't know why we do this, so I will chalk it up to ~ historical reasons ~.
Either way, when someone gets the time, we should really refactor how we test our cpp extensions so that the following requirements are met:
A. We should be able to test 1 extension at a time without needing to run and build and install all the other extensions
B. Tests for extensions should live like normal unit tests
### Alternatives
do nothing, i continue to suffer
### Additional context
_No response_
cc @malfet @zou3519 @xmfan @mruberry @ZainRizvi
| true
|
2,908,587,920
|
[testing only] Update torch.utils.checkpoint to stash and restore TLS state
|
soulitzer
|
open
|
[
"ciflow/trunk"
] | 5
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,908,569,293
|
[DSD] Update the document to mention the limitation of set_optimizer_state_dict
|
fegin
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148918
Summary:
Fixes https://github.com/pytorch/pytorch/issues/140898
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,908,532,329
|
[dynamo] Remove L scoping for recompilation messages
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148917
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,517,027
|
Print hostname for ROCm CI runners in GHA logs
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 2
|
COLLABORATOR
|
Will help provide debug info for MI300 nodes when something goes wrong in the GHA run, since currently it only prints the ephemeral pod ID, which cannot be easily traced back to the node after-the-fact.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,908,491,724
|
fix typo
|
not-lain
|
closed
|
[
"module: docs",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
NONE
|
Fixes #ISSUE_NUMBER
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
2,908,491,638
|
DISABLED test_nested_wrap_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nested_wrap_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38507216615).
Over the past 3 hours, it has been determined flaky in 24 workflow(s) with 48 failures and 24 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nested_wrap_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,908,446,043
|
caffe2: gpu_cpp_library for :caffe2_gpu
|
get9
|
open
|
[
"caffe2",
"fb-exported",
"topic: not user facing"
] | 4
|
NONE
|
Test Plan:
#buildmore
CI
Reviewed By: christycylee
Differential Revision: D70892337
| true
|
2,908,434,884
|
Automate stable CUDA update and linter using min Python version
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
1. Fixes https://github.com/pytorch/pytorch/issues/145571. CUDA stable is the same CUDA version that is published to PyPI; it is also used to set the Metadata section in the rest of the whl scripts and to tag the Docker releases with the `latest` tag.
2. Updates the min Python version used in the linter.
| true
|
2,908,433,441
|
[ROCm] testing: enable MEFF/FA unittests for gfx1100
|
xinyazhang
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"ciflow/rocm"
] | 7
|
COLLABORATOR
|
Include gfx1100, and optionally enable gfx1201/gfx950 according to env var TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,908,406,960
|
log cudagraph skip reasons
|
BoyuanFeng
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Add skip reasons to `dynamo_compile` so we can see which skip reasons are most common for cudagraphs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,359,787
|
Skip distributed subprocess tests internally as they don't work
|
albanD
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Follow up from https://github.com/pytorch/pytorch/pull/146098
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,908,308,870
|
Numpy v1 v2 compatibility
|
clee2000
|
closed
|
[
"module: numpy"
] | 1
|
CONTRIBUTOR
|
What's the policy on numpy compatibility in pytorch? I see that requirements-ci.txt pins numpy==1 for <python3.13 and numpy==2 for py3.13, but later in CI numpy gets reinstalled as numpy==2.0.2 for most python versions. Is CI supposed to use v2 or v1? Does being compatible with v2 ensure compatibility with v1?
cc @mruberry @rgommers @malfet
| true
|
2,908,299,346
|
[AOTI] Remove aoti_torch_cpu__weight_int4pack_mm_cpu_tensor
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 19
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148907
Summary: shim.h is only meant for generic tensor util shim functions. We should switch to using the auto fallback generation, but that will need some extra care with the op schema.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,908,290,089
|
[Torchscript] Add a flag to use mangled names instead of demangled
|
RihamSelim
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 12
|
CONTRIBUTOR
|
Summary: Optionally keep mangled names when expanding torchscript stacks
Test Plan:
```
buck2 build mode/opt //scripts/rihams/LearnPyTorch:torch_script_generate --show-full-output
/data/users/rihams/fbsource/buck-out/v2/gen/fbcode/0bd9d136228ad8a7/scripts/rihams/LearnPyTorch/__torch_script_generate__/torch_script_generate.par
buck2 build mode/opt //scripts/rihams/LearnPyTorch:torch_script_execute --show-full-output
```
- With `--torch_jit_expanded_stacks_mangled` Flag:
/data/users/rihams/fbsource/buck-out/v2/gen/fbcode/ef35e45045e8164c/scripts/rihams/LearnPyTorch/__torch_script_execute__/torch_script_execute fbcode/model.pt --torch_jit_expanded_stacks_mangled --torch_jit_enable_expanded_stacks
https://fburl.com/scuba/strobelight_function_tracer/8die4rvm
{F1975933247}
Without Flag:
/data/users/rihams/fbsource/buck-out/v2/gen/fbcode/ef35e45045e8164c/scripts/rihams/LearnPyTorch/__torch_script_execute__/torch_script_execute ./model.pt --torch_jit_enable_expanded_stacks
https://fburl.com/scuba/strobelight_function_tracer/x3nladpf
{F1975933268}
Reviewed By: bbus
Differential Revision: D70905872
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,908,231,257
|
[ONNX] Create onnx_symbolic
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: new features"
] | 7
|
COLLABORATOR
|
In the old exporter we allow users to define a symbolic() method to bypass JIT tracing for a block of logic. We can allow users to do similar things by creating symbolic ops at export.
This PR implements `torch.onnx.ops.symbolic` and `torch.onnx.ops.symbolic_multi_out` to allow users to create ONNX nodes symbolically with pt2 & fx. The custom pytorch ops were designed such that the attributes are encoded to be part of a valid fx op. Users provide shape and dtype for the meta function to produce the correct fake tensor during export.
An example is

| true
|
2,908,145,811
|
[CI] Upgrade numpy?
|
clee2000
|
closed
|
[
"release notes: releng"
] | 1
|
CONTRIBUTOR
|
Gets rid of the mention of py3.8, which is no longer supported.
Upgrades the numpy version used in the build when possible (numpy 2.0.2 is the most recent version that supports py3.9).
As of right now, numpy 2.2.3 is the most recent numpy version.
py3.13 already has numpy 2.1.2 installed, and 2.0.2 doesn't have a release for py3.12 on pypi.
| true
|
2,908,138,103
|
[BE] Remove unused macro ENABLE_NCCL_P2P_SUPPORT
|
kwen2501
|
open
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148903
* #148900
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,908,132,670
|
Remove Direct Arm Compute Library (ACL) Integration for Quantized Matmuls: `qlinear`/`qlinear_dynamic`
|
fadara01
|
open
|
[
"oncall: quantization",
"module: arm"
] | 1
|
COLLABORATOR
|
PR https://github.com/pytorch/pytorch/pull/148585 (temporarily) introduced a direct ACL implementation for `qlinear` and `qlinear_dynamic` for AArch64 when `USE_MKLDNN_ACL` is set.
This direct ACL implementation is a lot faster than the existing implementations that utilized ACL through oneDNN (MKLDNN) due to the (current) API friction between the stateful ACL API and the stateless oneDNN API (see benchmarks and numbers on https://github.com/pytorch/pytorch/pull/148585).
I'm creating this issue to make sure that we end up removing this direct ACL path for `qlinear` and `qlinear_dynamic` once we're done enabling a fast implementation for quantized matmuls through oneDNN+ACL.
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @malfet @snadampal @milpuz01
| true
|
2,908,127,531
|
DISABLED test_train_parity_multi_group_cpu_offload_eager (__main__.TestFullyShard1DTrainingCore)
|
pytorch-bot[bot]
|
open
|
[
"oncall: distributed",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_train_parity_multi_group_cpu_offload_eager&suite=TestFullyShard1DTrainingCore&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38499596698).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 6 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_train_parity_multi_group_cpu_offload_eager`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 894, in _check_return_codes
raise RuntimeError(error)
RuntimeError: Process 1 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 734, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 607, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 204, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 339, in test_train_parity_multi_group_cpu_offload_eager
self.run_subtests(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py", line 1180, in run_subtests
return run_subtests(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1003, in run_subtests
test_fn(*test_args, **test_kwargs, **subtest_kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 466, in _test_train_parity_multi_group
self.assertEqual(losses[0], losses[1])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not close!
Expected -16099.6064453125 but got -16100.7587890625.
Absolute difference: 1.15234375 (up to 1e-05 allowed)
Relative difference: 7.157589559187715e-05 (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/distributed/_composable/fsdp/test_fully_shard_training.py TestFullyShard1DTrainingCore.test_train_parity_multi_group_cpu_offload_eager
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
Process 2 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 734, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 607, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 204, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 339, in test_train_parity_multi_group_cpu_offload_eager
self.run_subtests(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py", line 1180, in run_subtests
return run_subtests(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1003, in run_subtests
test_fn(*test_args, **test_kwargs, **subtest_kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 466, in _test_train_parity_multi_group
self.assertEqual(losses[0], losses[1])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not close!
Expected -14233.7666015625 but got -14235.5712890625.
Absolute difference: 1.8046875 (up to 1e-05 allowed)
Relative difference: 0.00012678917327490054 (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/distributed/_composable/fsdp/test_fully_shard_training.py TestFullyShard1DTrainingCore.test_train_parity_multi_group_cpu_offload_eager
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
Process 4 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 734, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 607, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 204, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 339, in test_train_parity_multi_group_cpu_offload_eager
self.run_subtests(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py", line 1180, in run_subtests
return run_subtests(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1003, in run_subtests
test_fn(*test_args, **test_kwargs, **subtest_kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 466, in _test_train_parity_multi_group
self.assertEqual(losses[0], losses[1])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not close!
Expected -16928.962890625 but got -16930.900390625.
Absolute difference: 1.9375 (up to 1e-05 allowed)
Relative difference: 0.00011444883023950379 (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/distributed/_composable/fsdp/test_fully_shard_training.py TestFullyShard1DTrainingCore.test_train_parity_multi_group_cpu_offload_eager
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
Process 6 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 734, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 607, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 204, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 339, in test_train_parity_multi_group_cpu_offload_eager
self.run_subtests(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py", line 1180, in run_subtests
return run_subtests(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1003, in run_subtests
test_fn(*test_args, **test_kwargs, **subtest_kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 466, in _test_train_parity_multi_group
self.assertEqual(losses[0], losses[1])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not close!
Expected -16458.76953125 but got -16462.013671875.
Absolute difference: 3.244140625 (up to 1e-05 allowed)
Relative difference: 0.00019710711780977324 (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/distributed/_composable/fsdp/test_fully_shard_training.py TestFullyShard1DTrainingCore.test_train_parity_multi_group_cpu_offload_eager
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
Process 7 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 734, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 607, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 204, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 339, in test_train_parity_multi_group_cpu_offload_eager
self.run_subtests(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_fsdp.py", line 1180, in run_subtests
return run_subtests(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1003, in run_subtests
test_fn(*test_args, **test_kwargs, **subtest_kwargs)
File "/var/lib/jenkins/pytorch/test/distributed/_composable/fsdp/test_fully_shard_training.py", line 466, in _test_train_parity_multi_group
self.assertEqual(losses[0], losses[1])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not close!
Expected -14614.646484375 but got -14616.1748046875.
Absolute difference: 1.5283203125 (up to 1e-05 allowed)
Relative difference: 0.00010457456594204845 (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/distributed/_composable/fsdp/test_fully_shard_training.py TestFullyShard1DTrainingCore.test_train_parity_multi_group_cpu_offload_eager
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/_composable/fsdp/test_fully_shard_training.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @chauhang @penguinwu
| true
|
2,908,113,128
|
[RFC][BE] assume error checking is on by default (#141914)
|
kwen2501
|
open
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148903
* __->__ #148900
Summary:
Remove conditional MACRO `ENABLE_NCCL_ERROR_CHECKING` and assume that error checking is always on.
These checks were wrapped in a macro because older NCCL libraries didn't have the pre-requisite functions to do error checks. This check was put in several years ago.
Pull request https://github.com/pytorch/pytorch/issues/142023 adds a static_assert that NCCL version should be 2.7 or above for PyTorch to work.
2.4 was released about 2 years ago, so it's relatively safe to assume that everyone has upgraded.
Assume that the world has all upgraded to a later version of the NCCL library.
The release notes for PyTorch must specify that, going forward, PyTorch will only work with NCCL version 2.7 and above.
Test Plan:
Unit tests.
cc H-Huang awgu kwen2501 wanchaol fegin fduwjj wz337 wconstab d4l3k
Reviewed By: wconstab, fduwjj, kwen2501
Differential Revision: D66672062
Pulled By: c-p-i-o
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,908,112,052
|
[DRAFT] make reshape work for reshaping 1-dim unbacked non-contig to anything
|
laithsakka
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149266
* __->__ #148899
* #148893
* #148872
* #148742
* #148815
* #148809
* #148430
| true
|
2,908,071,698
|
[IR] adding option to enable storing namedtuple fields
|
felixsu2006
|
open
|
[
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Summary:
Adding an option to enable/disable this functionality.
It is set to True by default, so it shouldn't affect any existing use cases unless explicitly set to False.
Test Plan: no functionality changes
Differential Revision: D70905747
| true
|
2,908,065,282
|
Enable experimentation with ephemeral runners on pull.yml
|
jeanschmidt
|
closed
|
[
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
# TLDR
Adds a `get-is-ephemeral` step to the pull.yml workflow and enables experimentation with `ephemeral` runners on the pull.yml workflow.
The status of the experiment can be found in the [test-infra issue](https://github.com/pytorch/test-infra/issues/5132).
# What?
Enable experiment with ephemeral runners in the pull.yml workflow.
# Why?
Those runners are ephemeral; eliminating nonephemeral runners is a follow-up to the recent security incident. Refreshable infrastructure has been something we've been trying to accomplish for a while, but we haven't been successful. The major blocker we face is related to stockouts and unreliability on the GH side. Most of it is because nonephemeral runners can run other jobs and continue clearing the queue in case of a problem; this is not possible for ephemeral runners.
# How?
To remediate stockouts, the [reuse/refresh of ephemeral instances](https://github.com/pytorch/test-infra/pull/6315) have been introduced.
In order to remediate GH side issues, [queue healing mechanism](https://github.com/pytorch/test-infra/pull/6018) is being implemented.
# Next steps
After merging those changes, we intend to move a small percentage of jobs to ephemeral runners, so we can evaluate the impact on queue times, gather statistics on reuse, and tune the noflap cluster behaviour. Once we feel comfortable, the experiment will be shifted to 100% and we'll migrate all workflows to fully ephemeral instances. Eventually, all runners will be ephemeral and the experiment and runner variants will be removed, as we update other workflows like slow, trunk, and nightlies.
| true
|
2,908,036,888
|
[dynamo] fix bug where non-recursive disable modifies the original function
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going"
] | 7
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148896
Fixes https://github.com/pytorch/pytorch/issues/148787.
We fix this by:
- Wrapping the original function instead of directly modifying it
- When we detect that the previous frame is the non-recursive disable wrapper, we skip tracing this frame (the non-recursive disable wrapper will always be skipped, so that frame will be present in the traceback)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,908,029,449
|
Remove 12.4 x86 builds and 12.6 sbsa builds from nightly
|
tinglvv
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/issues/145570
redo https://github.com/pytorch/pytorch/pull/148625
cc @atalman @malfet @nWEIdia @ptrblck
| true
|
2,907,995,378
|
Support uneven sharding for FSDP2 + TP
|
lw
|
closed
|
[
"oncall: distributed",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150393
* #150146
* __->__ #148894
| true
|
2,907,993,177
|
use statically known true instead of guard size oblivious in bmm and mm inductor decompositions .
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148893
This was discussed with @eellison and he recommended using statically_known_true here. The intuition is: we already have 0/1 specializations in place, so if we reach these checks with dynamic shapes that are not already specialized,
then we do not want to specialize them ("a recompilation here is not justified").
These are all non-semantics-changing optimizations.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,907,962,080
|
Introduce TORCH_ABI_VERSION and a runtime aoti_torch_abi_version C shim ABI
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/inductor",
"ci-no-td"
] | 5
|
CONTRIBUTOR
|
Importable https://github.com/pytorch/pytorch/pull/148836
| true
|
2,907,884,008
|
Upgrading FlashAttention to V3
|
drisspg
|
open
|
[
"triaged",
"module: sdpa"
] | 5
|
CONTRIBUTOR
|
# Summary
We are currently building and utilizing FlashAttention2 for torch.nn.functional.scaled_dot_product_attention.
Up until recently, the files we build and our integration were very manual. We recently changed this and made FA a third_party submodule: https://github.com/pytorch/pytorch/pull/146372
This makes it easier to pull in new files (including those for FAv3); however, because third_party extensions do not have a mechanism to be re-integrated into ATen, the build system + flash_api integration is still manual.
### Plan
At a very high level we have a few options. For the sake of argument I will not include the runtime dependency option, so for now let's assume we need to build and ship the kernels in libtorchcuda.so.
1. Replace FAv2 entirely w/ FAv3:
Until recently this seemed like a non-ideal option since we would lose FA support for A100+ machines. This has changed in https://github.com/Dao-AILab/flash-attention/commit/7bc3f031a40ffc7b198930b69cf21a4588b4d2f9, and therefore this now seems like a much more viable option, and the least impactful to binary size. I think the main difference is that FAv3 doesn't support dropout. TBD if this is a large enough blocker.
2. Add FAv3 alongside FAv2
This would require adding another backend to SDPA for FAv3. This would naively have a large impact on binary size; however, we could choose to only build these kernels for H100 machines.
I am personally in favor of 1 since it is easier to maintain and will provide increased perf on A100 machines for the hot path (no dropout).
For both paths, updates to the internal build system will be needed.
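For context, the user-facing dispatch surface this plan affects is the SDPA flash backend; below is a minimal sketch of forcing that backend today (independent of whether FAv2 or FAv3 ends up behind it, and assuming a flash-capable GPU is available):
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q, k, v = (torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16) for _ in range(3))

# Restrict SDPA to the FlashAttention backend; with option 1 above, this same
# call would dispatch to FAv3 kernels (no dropout) instead of FAv2.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0)
```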
| true
|
2,907,782,392
|
Hook StaticCudaLauncher up to torch.compile (cold start)
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149629
* #149442
* #149054
* __->__ #148890
This hooks up the previous PR to torch.compile. Will add a config flag to hide this behind in a bit, but for now it's useful for testing purposes to have it on by default.
Inductor will automatically choose to use StaticCudaLauncher to launch triton kernels if:
- The kernel is a cuda kernel and inductor can find a cubin file associated with it
- The kernel takes less than 50 arguments
- The kernel doesn't use any special features (launch hooks, large amounts of shared memory)
- The kernel is not user defined (to be supported in a later PR)
We split CompileResult into TritonCompileResult and StaticTritonCompileResult, but have them share implementations of how they exec a python launcher. StaticTritonCompileResult's python launcher has the benefit of a simpler def_args/call_args setup, since it always filters out all constexprs before running, no matter the triton version.
Some key features of StaticTritonCompileResult:
- It is fully serializable
- It stores the minimum amount of stuff, so that later it can be cached easily
- It does not depend on any triton specific types (though it does have various triton metadata).
For now, both TritonCompileResult and StaticTritonCompileResult still `exec` custom python launchers, and use GridExpr. We can change that in the future to simplify if we'd like. For now though, this custom python codegen is good for flexibility when it comes to supporting removal of constexprs, so using it for static launching is nice to not have to pay the cost of removing constexprs at kernel runtime.
Hooking everything up to torch.compile lets me run every unit test with StaticCudaLauncher to make sure that we still pass (even if we bypass StaticCudaLauncher itself). It also lets me check for compilation/runtime performance with these changes.
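As a sanity check, a minimal sketch of exercising the path end-to-end (relying on the default-on behavior described above; the launcher choice itself is internal to inductor, so nothing special is needed on the user side):
```python
import torch

@torch.compile
def f(x, y):
    return torch.sin(x) + torch.cos(y)

x = torch.randn(1024, device="cuda")
y = torch.randn(1024, device="cuda")

# Inductor generates Triton kernels for this graph; eligible ones are launched
# via StaticCudaLauncher when the criteria listed above are met.
out = f(x, y)
torch.testing.assert_close(out, torch.sin(x) + torch.cos(y))
```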
Fixes #149448
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,907,697,033
|
DISABLED test_make_closure_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: higher order operators",
"module: pt2-dispatcher"
] | 3
|
NONE
|
Platforms: linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_make_closure_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38476198190).
Over the past 3 hours, it has been determined flaky in 21 workflow(s) with 42 failures and 21 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_make_closure_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @zou3519 @ydwu4 @penguinwu @bdhirsh @clee2000 @chauhang @ezyang @bobrenjc93
| true
|
2,907,693,730
|
Update RELEASE.md with latest changes to release process and release 2.7 information
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
1. Update for Release 2.7 compatibility matrix
2. Remove mention of builder project, the scripts for release management were migrated to test-infra
| true
|
2,907,582,092
|
Vincent/rebase 2.5
|
vincent-tr
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)"
] | 2
|
NONE
|
Rebase `flexai/v2.5.0` from `upstream/release/2.5`
Refs: NOTICKET
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mingfeima @XiaobingSuper @ashokei @jingxu10 @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,907,351,246
|
Unable to export model to ONNX with dynamo and dynamic batch size
|
Fredrik00
|
closed
|
[
"module: onnx",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
I have been trying to export PARSeq, a Transformer based scene text recognition model, to ONNX with torch.onnx.export and dynamo enabled. I have been successful in getting the model exported with a fixed batch size, but unfortunately not with dynamic shapes.
I have created an input tensor matching my max batch size by repeating my original one. After passing it through the model for export, the batch size becomes fixed to the example tensor batch size rather than being dynamic. I have also attempted with dynamic_axes, which from the code looked like it would attempt to convert it to dynamic_shapes internally.
```python
image = get_dummy_input()
image_batch = image.repeat(128, 1, 1, 1)

onnx_model = torch.onnx.export(
    lightning_model,
    image_batch,
    input_names=['input'],
    output_names=['output'],
    dynamo=True,
    dynamic_shapes=[{0: torch.export.Dim('batch_size', min=1, max=128)}],
    optimize=True,
    verbose=True,  # Includes metadata used during quantization
)
onnx_model.save(output_path)
```
When using the exported onnx model for inference, with a batch size of 8 in this example, I get the following error:
`onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Where node. Name:'node_Where_3144' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:540 void onnxruntime::BroadcastIterator::Init(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 8 by 128`
I couldn't find any example of the dynamic_shapes input, and the documentation is very vague. Is there something wrong with how I am specifying the dynamic input shapes?
Full example can be found here (requirements listed in requirements/onnx.txt):
https://github.com/Fredrik00/parseq/blob/tflite-export/tools/export_onnx.py
### Versions
```
[pip3] ai-edge-torch-nightly==0.4.0.dev20250303
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.21.0
[pip3] onnxscript==0.2.1
[pip3] optree==0.14.1
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.6.0+cu126
[pip3] torch_xla2==0.0.1.dev202412041639
[pip3] torchaudio==2.6.0+cu126
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] ai-edge-torch-nightly 0.4.0.dev20250303 pypi_0 pypi
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.14.1 pypi_0 pypi
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torch-xla2 0.0.1.dev202412041639 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
| true
|
2,907,250,810
|
test_memory_profiler_viz failed on cudamallocasync
|
garfield1997
|
open
|
[
"module: cuda",
"triaged",
"module: testing"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Steps to reproduce the bug:
```shell
# step 1
export PYTORCH_CUDA_ALLOC_CONF="backend:cudaMallocAsync"
# step 2 run test case from test_cuda.py
python test_cuda.py -k 'test_memory_profiler_viz'
```
Output:
```
FAIL: test_memory_profiler_viz (__main__.TestCudaMallocAsync)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/projs/framework/xushuo/venv/gpu/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3108, in wrapper
method(*args, **kwargs)
File "/projs/framework/xushuo/venv/gpu/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1846, in wrapper
return fn(*args, **kwargs)
File "/projs/framework/xushuo/pytorch2_6/pytorch/test/test_cuda.py", line 3457, in test_memory_profiler_viz
self.assertTrue("test_cuda.py" in plot)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCudaMallocAsync.test_memory_profiler_viz
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 0.109s
FAILED (failures=1)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 4
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0
[pip3] torchdump==0.5.0
[pip3] triton==3.2.0
[conda] numpy 2.0.0 pypi_0 pypi
[conda] torch-mlu-ci-overrides 0.0.2 pypi_0 pypi
cc @ptrblck @msaroufim @eqy
| true
|
2,907,206,658
|
CrossEntropy with label smoothing does not apply the correct label smoothing
|
adrien-grl
|
closed
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
The `label_smoothing` parameter of the `nn.CrossEntropyLoss` does not match the expected behavior. Instead, it seems that the label smoothing that is applied is half of the correct value.
Indeed, in the [original paper](https://arxiv.org/pdf/1512.00567) they specify that the loss is:
$$\mathcal{L}(p) = D_{KL}(q' \lVert p) + H(q').$$
As a result, the minimum value of $p$ is $q'$ and $q'$ is given as:
$$q'(k) = (1 - \epsilon) \delta_{k,y} + \frac{\epsilon}{K}.$$
In the example below we have $K=2$, $\epsilon=0.1$, and as a result the softmax of `p` should be equal to $[1 - 0.1, 0.1]$.
## Minimum example to reproduce
```python
import math
import torch

label_smoothing = 0.1
K = 2

label = torch.zeros(1, dtype=torch.long)
p = torch.randn(1, K).requires_grad_(True)
optim = torch.optim.SGD((p,), lr=1e-1)
loss_fn = torch.nn.CrossEntropyLoss(label_smoothing=label_smoothing)

for _ in range(10000):
    loss = loss_fn(p, label)
    loss.backward()
    optim.step()
    optim.zero_grad()

def min_theoretical_value(label_smoothing):
    return (
        -(1. - label_smoothing) * math.log(1. - label_smoothing)
        - label_smoothing * (math.log(label_smoothing) - math.log(K - 1))
    )

print(loss - min_theoretical_value(label_smoothing))
print(loss - min_theoretical_value(label_smoothing / 2))
print(p.softmax(1))
```
Note that you can check that the loss has converged by the end of the 10,000 SGD iterations.
## Expected output
```
tensor(0.)
tensor(0.1266)
tensor([[0.9000, 0.1000]])
```
## Actual output
```
tensor(-0.1266)
tensor(0.)
tensor([[0.9500, 0.0500]])
```
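For reference (and consistent with the closing edit below), plugging $K=2$ and $\epsilon=0.1$ into the paper's own formula for $q'$ reproduces the observed output, since the true class also receives its share of the uniform mass:
$$q'(y) = (1 - \epsilon) + \frac{\epsilon}{K} = 0.9 + 0.05 = 0.95, \qquad q'(k \neq y) = \frac{\epsilon}{K} = 0.05.$$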
### Versions
PyTorch version: 2.7.0.dev20250224+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 3500 Ada Generation Laptop GPU
Nvidia driver version: 570.124.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13950HX
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5500,0000
CPU min MHz: 800,0000
BogoMIPS: 4838.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1,3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-gpu==1.20.1
[pip3] optree==0.14.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250224+cu128
[pip3] torchaudio==2.6.0.dev20250224+cu128
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.22.0.dev20250224+cu128
[conda] Could not collect
Edit: Closed because my math is wrong...
| true
|
2,907,187,294
|
Pytorch2.7+ROCm6.3 is 34.55% slower than Pytorch2.6+ROCm6.2.4
|
testbug5577
|
closed
|
[
"module: performance",
"module: rocm",
"triaged"
] | 6
|
NONE
|
The same hardware and software environment, only the versions of PyTorch+ROCm are different.
Use ComfyUI to run Hunyuan text to video:
ComfyUI:v0.3.24
ComfyUI plugin: teacache
49frames
480x960
20steps
CPU:i5-7500
GPU:AMD 7900XT 20GB
RAM:32GB
PyTorch 2.6 + ROCm 6.2.4 time taken: 348 seconds, 14.7 s/it
The VAE Decode Tiled node (parameters: 128 64 32 8) takes 55 seconds
PyTorch 2.7 + ROCm 6.3 time taken: 387 seconds, 15.66 s/it **(11.21% slower)**
The VAE Decode Tiled node (parameters: 128 64 32 8) takes 74 seconds **(34.55% slower)**
In addition, if the VAE node parameters are set to 256 64 64 8 (the default parameters for NVIDIA graphics cards), it will take a very long time and seem to be stuck, but the program will not crash. The same situation occurs in both PyTorch 2.6 and 2.7.
I'm sorry I don't know what error message to submit for this discrepancy, but I can cooperate with the test and upload the specified information.
Thank you.
[ComfyUI_running_.json](https://github.com/user-attachments/files/19162936/ComfyUI_running_.json)
cc @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,907,029,068
|
[Inductor][CPP] Fix expr issue in loop split
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148882
**Summary**
Fix issue: https://github.com/pytorch/pytorch/issues/148058. In this case, there is an `indexing_expr` that is a plain integer, which doesn't have a `find` method.
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_repro.py -k test_issue_148058
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,906,788,379
|
Update torch-xpu-ops commit pin
|
chunhuanMeng
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 3
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [026b2c8c7c92a7b2cec5d26334006e3423251cc6](https://github.com/intel/torch-xpu-ops/commit/026b2c8c7c92a7b2cec5d26334006e3423251cc6), includes:
- Enable AOT for LNL
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,906,681,453
|
Refactor to use torch.accelerator.device_index instead of torch.cuda.device for generic device context manager
|
guangyey
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm",
"ciflow/xpu"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148880
* #148864
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,906,586,191
|
setuptools pinning
|
ozanMSFT
|
closed
|
[
"module: windows",
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Fixes #148877
---
On 9 March 2025, [setuptools](https://pypi.org/project/setuptools/#history) published a new version and it is causing an issue on `pytorch` with the following error:
```
AttributeError: module 'distutils' has no attribute '_msvccompiler'. Did you mean: 'ccompiler'?
```
Last known working version is [75.8.2](https://pypi.org/project/setuptools/75.8.2/)
Currently it is affecting the Windows ARM64 nightly build; however, soon it might also affect Windows x64 builds (the conda version is not updated yet: [setuptools conda](https://anaconda.org/anaconda/setuptools)).
Locally, both `Windows ARM64` and `Windows x64` hit the same problem with the latest `setuptools` (>75.8.2).
---
This PR is pinning `setuptools` version.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,906,577,125
|
Add Half support for weight_norm on CPU
|
CaoE
|
closed
|
[
"module: cpu",
"open source",
"module: half",
"Merged",
"ciflow/trunk",
"release notes: nn",
"ciflow/inductor"
] | 10
|
COLLABORATOR
|
Fixes #148867.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,906,496,492
|
setuptools error - Windows - 'distutils' has no attribute '_msvccompiler'
|
ozanMSFT
|
closed
|
[
"module: build",
"module: windows",
"triaged",
"module: regression"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
`setuptools` was updated on 9 March 2025 (the last working version is `75.8.2`).
https://pypi.org/project/setuptools/#history
With this update, Windows nightly builds started to fail with `AttributeError: module 'distutils' has no attribute '_msvccompiler'`
---
Currently `x64` builds are not affected since they're still using the `conda` version, which is not updated yet (`75.8.0` as of today). However, once the `conda` version is updated, they will also be affected. (https://anaconda.org/anaconda/setuptools)
NOTE: We tested locally on Windows x64 with a recent `setuptools` version (`>75.8.2`); it also hit the same error.
---
**Logs:**
[https://ossci-raw-job-status.s3.amazonaws.com/log/38441317550](https://ossci-raw-job-status.s3.amazonaws.com/log/38441317550)
```
2025-03-09T08:11:01.4358834Z Traceback (most recent call last):
2025-03-09T08:11:01.4359408Z -- Building version 2.7.0.dev20250309+cpu
2025-03-09T08:11:01.4360139Z File "C:\a\pytorch\pytorch\pytorch\tools\build_pytorch_libs.py", line 21, in _get_vc_env
2025-03-09T08:11:01.4361713Z return distutils._msvccompiler._get_vc_env(vc_arch) # type: ignore[no-any-return]
2025-03-09T08:11:01.4362368Z ^^^^^^^^^^^^^^^^^^^^^^^
2025-03-09T08:11:01.4363073Z AttributeError: module 'distutils' has no attribute '_msvccompiler'. Did you mean: 'ccompiler'?
2025-03-09T08:11:01.4363687Z
2025-03-09T08:11:01.4363996Z During handling of the above exception, another exception occurred:
2025-03-09T08:11:01.4364443Z
2025-03-09T08:11:01.4364610Z Traceback (most recent call last):
2025-03-09T08:11:01.4365176Z File "C:\a\pytorch\pytorch\pytorch\setup.py", line 1502, in <module>
2025-03-09T08:11:01.4373104Z main()
2025-03-09T08:11:01.4373545Z File "C:\a\pytorch\pytorch\pytorch\setup.py", line 1170, in main
2025-03-09T08:11:01.4382421Z build_deps()
2025-03-09T08:11:01.4382970Z File "C:\a\pytorch\pytorch\pytorch\setup.py", line 490, in build_deps
2025-03-09T08:11:01.4389171Z build_pytorch(
2025-03-09T08:11:01.4389796Z File "C:\a\pytorch\pytorch\pytorch\tools\build_pytorch_libs.py", line 121, in build_pytorch
2025-03-09T08:11:01.4391983Z my_env = _create_build_env()
2025-03-09T08:11:01.4392637Z ^^^^^^^^^^^^^^^^^^^
2025-03-09T08:11:01.4393297Z File "C:\a\pytorch\pytorch\pytorch\tools\build_pytorch_libs.py", line 82, in _create_build_env
2025-03-09T08:11:01.4395306Z my_env = _overlay_windows_vcvars(my_env)
2025-03-09T08:11:01.4395815Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-09T08:11:01.4396547Z File "C:\a\pytorch\pytorch\pytorch\tools\build_pytorch_libs.py", line 51, in _overlay_windows_vcvars
2025-03-09T08:11:01.4398054Z vc_env = _get_vc_env(vc_arch)
2025-03-09T08:11:01.4398830Z ^^^^^^^^^^^^^^^^^^^^
2025-03-09T08:11:01.4399562Z File "C:\a\pytorch\pytorch\pytorch\tools\build_pytorch_libs.py", line 25, in _get_vc_env
2025-03-09T08:11:01.4400909Z return _msvccompiler._get_vc_env(vc_arch) # type: ignore[no-any-return]
2025-03-09T08:11:01.4402146Z ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-09T08:11:01.4402831Z AttributeError: module 'distutils._msvccompiler' has no attribute '_get_vc_env'
```
### Versions
torch 2.7.0
**working version:**
setuptools [75.8.2](https://pypi.org/project/setuptools/75.8.2/)
**failed versions (all published on 9 March 2025):**
setuptools [75.9.0](https://pypi.org/project/setuptools/75.9.0/)
setuptools [75.9.1](https://pypi.org/project/setuptools/75.9.1/)
setuptools [76.0.0](https://pypi.org/project/setuptools/76.0.0/)
created issue on `setuptools`:
https://github.com/pypa/setuptools/issues/4874
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,906,488,406
|
Use device agnostic APIs and variable names for dtensor
|
amathewc
|
closed
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"module: mkldnn",
"open source",
"module: amp (automated mixed precision)",
"NNC",
"release notes: quantization",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)",
"module: compiled autograd"
] | 23
|
CONTRIBUTOR
|
## MOTIVATION
To generalize DTensor test cases for non-CUDA devices, we are replacing certain APIs with device-agnostic alternatives. Additionally, we are refactoring the code to improve modularity.
Please refer to this RFC as well: https://github.com/pytorch/rfcs/pull/66
## CHANGES
### common_dtensor.py
- Use APIs like torch.get_device_module and dist.get_default_backend_for_device to dynamically determine the device and backend based on the environment.
- Replace hardcoded device names with generic identifiers such as self.device_type.
- In the wrapper function, use DEVICE_COUNT, which is set via DEVICE_MODULE.device_count, instead of torch.accelerator.device_count(), as the latter does not support out-of-tree devices.
### test_random_ops.py & test_dtensor_config.py
- Replace hardcoded device names with self.device_type.
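A minimal sketch of the device-agnostic lookups described above (illustrative only; in the tests the device type comes from `self.device_type` rather than being detected here, and `get_default_backend_for_device` is the API referenced in the first bullet):
```python
import torch
import torch.distributed as dist

# Illustrative device-type selection; the test base class supplies self.device_type.
device_type = "cuda" if torch.cuda.is_available() else "cpu"

device_module = torch.get_device_module(device_type)        # e.g. torch.cuda
backend = dist.get_default_backend_for_device(device_type)  # e.g. "nccl" or "gloo"
device_count = device_module.device_count()                 # DEVICE_MODULE.device_count()

print(device_type, backend, device_count)
```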
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @mcarilli @ptrblck @leslie-fang-intel @EikanWang @voznesenskym @penguinwu @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan @kwen2501 @c-p-i-o @ankurneog
| true
|
2,906,394,490
|
Refactor `test/test_torch.py` by moving testcase to `test_indexing.py`
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Fix `FIXME` in `test_torch.py` by moving test-cases to `test_indexing.py`
```python
# FIXME: move to test indexing
# FIXME: move to indexing test suite
```
- Move tests in `test/test_torch.py` to `test_indexing.py`
- Remove `FIXME` comments
## TestResult
```bash
pytest test/test_torch.py -k TestTorchDeviceType -vv
pytest test/test_indexing.py -k TestIndexing -vv
```


| true
|
2,906,382,344
|
`torch.device.__enter__` does not affect `get_default_device` despite taking precedence over `set_default_device`
|
ringohoffman
|
open
|
[
"triaged",
"module: python frontend"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Using a `torch.device` as a context manager takes precedence over `set_default_device`, but this isn't reflected by the return value of `get_default_device`.
```python
import torch
import torch.utils._device

torch.set_default_device("cuda:1")

with torch.device("cuda:0"):
    print(f"get_default_device(): {torch.get_default_device()}")
    print(f"CURRENT_DEVICE: {torch.utils._device.CURRENT_DEVICE}")
    print(f"actual current device: {torch.tensor(()).device}")
```
```
get_default_device(): cuda:1
CURRENT_DEVICE: cuda:1
actual current device: cuda:0
```
I feel like a solution might be to call `__enter__` on the `DeviceContext` from `torch.device`'s C++ `__enter__` implementation, and `__exit__` from the C++ `__exit__` implementation.
https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/torch/csrc/Device.cpp#L179-L197
https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/torch/utils/_device.py#L100-L104
https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/torch/__init__.py#L1134-L1147
cc: @ezyang
### Versions
torch==2.6.0
cc @albanD
| true
|
2,906,358,061
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,906,293,895
|
convert guard_size_oblivious to runtime check in infer_size_impl
|
laithsakka
|
open
|
[
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152722
* __->__ #148872
It's OK to check the requirement `numel == newsize` at runtime in the unbacked case, instead of checking it at compile time and assuming that it's true.
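A rough Python-level illustration of the idea (the real check lives in the C++ `infer_size_impl`; the helper name here is made up for illustration):
```python
import torch

def reshape_with_runtime_check(x, shape):
    # Instead of proving numel == prod(shape) at compile time (which would
    # require guarding on unbacked symbols), defer the equality to runtime.
    new_numel = 1
    for s in shape:
        new_numel *= s
    torch._check(x.numel() == new_numel, lambda: "shape is invalid for input size")
    return x.reshape(shape)

print(reshape_with_runtime_check(torch.arange(12), (3, 4)))
```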
| true
|
2,906,283,113
|
Make `torch._check` support bool tensor as `cond` param
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes #148349
## Test Result
```python
pytest test/test_torch.py -k test_check -vv
```

| true
|
2,906,243,333
|
DISABLED test_lift_tensors_with_shared_symbols_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_lift_tensors_with_shared_symbols_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38461240129).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_lift_tensors_with_shared_symbols_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,906,200,092
|
Optimize `MaxPool1d` param `ceil_mode` description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Fixes #148123
Add output shape formula based on `ceil_mode` value, according to
https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/aten/src/ATen/native/Pool.h#L61-L75
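For reference, the formula being documented is, roughly, the standard pooling output-shape computation:
$$L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1 \right\rfloor$$
with the floor replaced by a ceiling when `ceil_mode=True`, and (per the linked `Pool.h` logic) the result reduced by one if the last pooling window would otherwise start beyond the input plus left padding.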
## Test Result
### Before

### After

| true
|
2,906,148,635
|
Optimize shard_dim_alltoall to use alltoall_single
|
wanchaol
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"ciflow/periodic",
"release notes: distributed (dtensor)"
] | 6
|
COLLABORATOR
|
As titled: previously shard_dim_alltoall used `all_to_all`, which could incur lots of copies if the tensor becomes non-contiguous during splits, and `all_to_all` itself also incurs copies.
This PR uses `all_to_all_single` instead, so that we minimize tensor copies.
tested on all the shard dim change tests and it works properly:
```
pytest test/distributed/tensor/test_redistribute.py -s -k shard_dim_alltoall
```
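For illustration, a minimal sketch of the two collectives outside of DTensor (assuming an NCCL process group has already been initialized and there is one GPU per rank):
```python
import torch
import torch.distributed as dist

# Assumes dist.init_process_group("nccl", ...) has already been called.
world = dist.get_world_size()
device = torch.device("cuda", torch.cuda.current_device())

# List-based collective: per-rank tensor lists, which may require
# chunk-by-chunk staging/copies (especially for non-contiguous chunks).
in_chunks = list(torch.arange(world * 4, dtype=torch.float32, device=device).chunk(world))
out_chunks = [torch.empty_like(c) for c in in_chunks]
dist.all_to_all(out_chunks, in_chunks)

# Single-tensor collective: one contiguous input/output buffer, fewer copies.
inp = torch.arange(world * 4, dtype=torch.float32, device=device)
out = torch.empty_like(inp)
dist.all_to_all_single(out, inp)
```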
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,906,143,010
|
FP16 of weight norm is slower than BF16 on CPU
|
jiqing-feng
|
closed
|
[
"module: nn",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
To reproduce it.
CMD: `numactl -C 0-31 -m 0 python test.py`
```python
import time
import torch

weight_norm = torch.nn.utils.parametrizations.weight_norm

conv_layer = torch.nn.Conv1d(in_channels=192, out_channels=383, kernel_size=5, dilation=1, padding=2, dtype=torch.bfloat16)
in_layer = weight_norm(conv_layer)
input_tensor = torch.rand(1, 192, 178).to(conv_layer.weight.dtype) - 0.5

with torch.no_grad():
    for i in range(100):
        start = time.time()
        out = in_layer(input_tensor)
        end = time.time()
        print(f"time costs: {(end-start)*1000000} us")
```
You can see the overall latency ratio is bf16 : fp16 = 1 : 2.
I profiled it and found the overhead is from weight norm.
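For reference, a hedged variant of the script above that times both dtypes side by side (same layer shapes; absolute numbers will vary by machine):
```python
import time
import torch

def bench(dtype, iters=100):
    conv = torch.nn.Conv1d(192, 383, kernel_size=5, dilation=1, padding=2, dtype=dtype)
    layer = torch.nn.utils.parametrizations.weight_norm(conv)
    x = (torch.rand(1, 192, 178) - 0.5).to(dtype)
    with torch.no_grad():
        layer(x)  # warm-up
        start = time.time()
        for _ in range(iters):
            layer(x)
    return (time.time() - start) / iters * 1e6  # average microseconds per call

for dtype in (torch.bfloat16, torch.float16):
    print(dtype, f"{bench(dtype):.1f} us")
```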
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250309+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.11.0-13-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) 6972P
BIOS Model name: Intel(R) Xeon(R) 6972P
CPU family: 6
Model: 173
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 9 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 384 MiB (192 instances)
L3 cache: 960 MiB (2 instances)
NUMA node(s): 6
NUMA node0 CPU(s): 0-31,192-223
NUMA node1 CPU(s): 32-63,224-255
NUMA node2 CPU(s): 64-95,256-287
NUMA node3 CPU(s): 96-127,288-319
NUMA node4 CPU(s): 128-159,320-351
NUMA node5 CPU(s): 160-191,352-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-metric-learning==2.8.1
[pip3] pytorchvideo==0.1.5
[pip3] torch==2.7.0.dev20250309+cpu
[pip3] torch-audiomentations==0.11.1
[pip3] torch_pitch_shift==1.2.5
[pip3] torchaudio==2.6.0.dev20250309+cpu
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.22.0.dev20250309+cpu
[conda] Could not collect
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,906,066,198
|
[Inductor] Core dumped due to invalid next size
|
Cookiee235
|
open
|
[
"module: crash",
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class TestModel(torch.nn.Module):
def __init__(self):
super(TestModel, self).__init__()
self.linear = torch.nn.Linear(10, 10)
def forward(self, x):
mean = torch.zeros(10, 10)
std = torch.ones(10, 10)
random_data = torch.normal(mean, std)
LD = torch.randn(10, 10)
pivots = torch.randint(0, 10, (10,))
B = torch.randn(10, 10)
ldl_solution = torch.linalg.ldl_solve(LD, pivots, B)
input_unpool = torch.randn(1, 1, 10, 10, 10)
indices = torch.randint(0, 10, (1, 1, 10, 10, 10))
unpooled = torch.nn.functional.max_unpool3d(input_unpool, indices, kernel_size=2)
combined = self.linear(x) + random_data + ldl_solution + unpooled.mean()
return combined
model = TestModel()
inputs = torch.randn(1, 10)
res = model(inputs)
compiled_model = torch.compile(model, backend='inductor')
compiled_out = compiled_model(inputs) # core dump
```
### StackTrace
```
free(): invalid next size (fast)
Aborted (core dumped)
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,906,017,307
|
Create and send `full_tensor` on `ProcessGroup`-supported device in `_broadcast_tensors`
|
ringohoffman
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)"
] | 7
|
CONTRIBUTOR
|
Fixes #138842
`device` is always the device of the `local_state_dict`, which may or may not be CPU; CPU is not supported by the NCCL backend.
Instead, create broadcasted tensors on one of `pg._device_types` and then move the tensors back if `local_state_dict`'s `device` was not supported by the `ProcessGroup`.
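Roughly, the idea (a simplified sketch, not the exact patch; `_device_types` is a private attribute and the helper name is made up):
```python
import torch
import torch.distributed as dist

def _broadcast_on_supported_device(full_tensor: torch.Tensor, pg, src: int = 0):
    original_device = full_tensor.device
    # Pick a device type the ProcessGroup can communicate on (e.g. cuda for NCCL).
    pg_device = torch.device(next(iter(pg._device_types)))
    send = full_tensor if original_device.type == pg_device.type else full_tensor.to(pg_device)
    dist.broadcast(send, src=src, group=pg)
    # Move the result back if the state dict lived on an unsupported device (e.g. CPU).
    return send if send.device == original_device else send.to(original_device)
```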
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,905,997,322
|
Add torch.accelerator.device_index as accelerator's device switch context
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: not user facing",
"ciflow/rocm",
"ciflow/xpu",
"module: accelerator"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148880
* __->__ #148864
# Motivation
We propose adding support for the Python with statement on `torch.accelerator.device_index` to enable device switching functionality. This enhancement would simplify writing device-agnostic code and provide benefits across all accelerators. Its device-specific counterparts include [`torch.cuda.device`](https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/torch/cuda/__init__.py#L482) and [`torch.cuda._DeviceGuard`](https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/torch/cuda/__init__.py#L469).
**Design Philosophy**
It accepts either an `Int` or `None` as input. When `None` is passed, no device switch is performed. Supporting `None` is important for compatibility, as it's possible to encounter `None` values from `torch.device.index`.
Therefore, with this PR, we can do the following:
```python
src = 0
dst = 1
# Set src to current device
torch.accelerator.set_device_index(src)
with torch.accelerator.device_index(dst):
# Inside with statement, we set dst to current device
assert torch.accelerator.get_device_index() == dst
# Here the current device should be src
assert torch.accelerator.get_device_index() == src
```
cc @albanD @EikanWang
| true
|
2,905,985,840
|
[Doc] Update CMAKE_PREFIX_PATH for XPU windows README
|
Stonepia
|
closed
|
[
"module: docs",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: xpu"
] | 11
|
CONTRIBUTOR
|
We found that `pip install cmake` and `conda install cmake` behave differently.
The reason is that the pip-installed cmake doesn't find the corresponding libs under the conda env, so we need to set `CMAKE_PREFIX_PATH` to align them.
cc @svekars @sekyondaMeta @AlannaBurke @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,905,948,283
|
[Inductor] Compiled model crashed when execute inference
|
Cookiee235
|
open
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class SimpleModel(torch.nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.linear = torch.nn.Linear(10, 10)
def forward(self, x):
x = self.linear(x)
eigvals = torch.linalg.eigvals(x)
eigvals_not = torch.bitwise_not(eigvals.to(torch.int32))
loss = torch.nn.functional.margin_ranking_loss(
eigvals.to(torch.float32),
eigvals_not.to(torch.float32),
torch.ones_like(eigvals).to(torch.float32)
)
return loss
model = SimpleModel()
inputs = torch.randn(10, 10)
res = model(inputs)
compiled_model = torch.compile(model, backend='inductor')
compiled_out = compiled_model(inputs)
```
### Traceback
```
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/torch_tests/zero_signature_3apis_random/torch.nn.functional.margin_ranking_loss.py", line 26, in <module>
compiled_out = compiled_model(inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/remote_pc/LLM4Converter/torch_tests/zero_signature_3apis_random/torch.nn.functional.margin_ranking_loss.py", line 8, in forward
def forward(self, x):
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
return compiled_fn(full_args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 315, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
compiled_fn, args_, disable_amp=disable_amp, steal_args=True
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
~^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/utils.py", line 100, in g
return f(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1937, in forward
fw_outs = call_func_at_runtime_with_args(
CompiledFunction.compiled_fw,
args,
disable_amp=disable_amp,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
~^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper
return compiled_fn(runtime_args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 689, in inner_fn
outs = compiled_fn(args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "/tmp/torchinductor_qshenaf/3o/c3o7f3a7u7ehdg6lyrtuhqpuznmezaiii6mubdya4xmkaexqettr.py", line 166, in call
assert_size_stride(buf3, (10, 10), (10, 1))
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: expected size 10==10, stride 1==10 at dim=0; expected size 10==10, stride 10==1 at dim=1
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,905,947,765
|
[CPU]DNNL does not support bf16 backward on Lunar lake
|
gaopengff
|
closed
|
[
"triaged",
"module: mkldnn",
"module: regression",
"module: intel",
"bug"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I tested my unit test on an Intel Lunar Lake CPU (Intel® Core™ Ultra processors). It failed with the error message: “**RuntimeError: DNNL does not support bf16/f16 backward on the platform with avx2_vnni_2**”. Here is the reproducer:
```python
import torch
x = torch.ones([2, 3, 8, 6], dtype=torch.float, requires_grad=True)
conv1 = torch.nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3, bias=False)
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
y = conv1(x)
loss = y.sum()
loss.backward()
```
I think it may be caused by the CPU's compatibility with oneDNN (DNNL). Could you help with it?
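One way to confirm that the oneDNN path is the trigger (an untested sketch; disabling oneDNN is a diagnostic, not a fix):
```python
import torch

x = torch.ones([2, 3, 8, 6], dtype=torch.float, requires_grad=True)
conv1 = torch.nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3, bias=False)
with torch.backends.mkldnn.flags(enabled=False):  # fall back to the native CPU kernels
    with torch.autocast("cpu", dtype=torch.bfloat16):
        y = conv1(x)
    loss = y.sum()
    loss.backward()
```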
### Versions
Pytorch 2.6
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @frank-wei
| true
|
2,905,932,644
|
AttributeError: module 'torch.compiler' has no attribute 'save_cache_artifacts'
|
janak2
|
closed
|
[
"triaged",
"oncall: pt2",
"compile-cache"
] | 4
|
NONE
|
### 🐛 Describe the bug
The documentation says you need PyTorch > 2.4: https://pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html
I have tried with torch 2.6 but am getting the following error:
### Error logs
```
Traceback (most recent call last):
File "/pkg/modal/_runtime/container_io_manager.py", line 703, in handle_user_exception
yield
File "/pkg/modal/_container_entrypoint.py", line 384, in call_lifecycle_functions
event_loop.run(res)
File "/pkg/modal/_container_entrypoint.py", line 168, in run
return self.loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/minicpm_inference_engine.py", line 390, in load_to_gpu
artifacts = torch.compiler.save_cache_artifacts()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'torch.compiler' has no attribute 'save_cache_artifacts'
```
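In the meantime, a minimal guard sketch (assuming you just want to skip cache saving when the API is absent):
```python
import torch

if hasattr(torch.compiler, "save_cache_artifacts"):
    artifacts = torch.compiler.save_cache_artifacts()
else:
    artifacts = None
    print(f"torch.compiler.save_cache_artifacts is not available in torch {torch.__version__}")
```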
### Versions
GPU - H100
Cuda - 12.4
Torch - 2.6.0
cc @chauhang @penguinwu
| true
|
2,905,930,106
|
[Inductor] Output mismatch shape after compilation
|
Cookiee235
|
open
|
[
"triaged",
"oncall: pt2",
"module: pt2 accuracy"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class SimpleModel(torch.nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.conv = torch.nn.Conv2d(1, 3, kernel_size=3, stride=1, padding=1)
self.upsample = torch.nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
def forward(self, x):
x = self.conv(x)
x = torch.floor(x)
x = self.upsample(x)
x = torch.unique_consecutive(x)
return x
model = SimpleModel()
inputs = torch.randn(1, 1, 16, 16)
res = model(inputs)
compiled_model = torch.compile(model, backend='inductor')
compiled_out = compiled_model(inputs)
torch.testing.assert_close(res, compiled_out, rtol=1e-3, atol=1e-3)
```
### Traceback
```
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/torch_tests/zero_signature_3apis_random/torch.nn.functional.upsample_bilinear.py", line 20, in <module>
torch.testing.assert_close(res, compiled_out, rtol=1e-3, atol=1e-3)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: The values for attribute 'shape' do not match: torch.Size([2649]) != torch.Size([2638]).
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,905,929,913
|
RuntimeError: OffsetBasedRNGTracker instantiation requires the presence of CUDA/CUDA-like device
|
zqwenn
|
open
|
[
"oncall: distributed",
"triaged"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This [PR](https://github.com/pytorch/pytorch/pull/147025) causes a RuntimeError for third-party backends when using the `torch.distributed.tensor._random.manual_seed` function.
Here is the error stack:
```bash
Root Cause (first observed failure):
[0]:
time : 2025-03-10_09:33:40
host : localhost
rank : 7 (local_rank: 7)
exitcode : 1 (pid: 1951609)
error_file: /tmp/torchelastic_vxbheltp/none_pixn22hf/attempt_0/7/error.json
traceback : Traceback (most recent call last):
File "/root/anaconda3/envs/zqwtitan/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 354, in wrapper
return f(*args, **kwargs)
File "/dl/z00659619/torchtitan/torchtitan0301/torchtitan/torchtitan/train.py", line 99, in main
dist_utils.set_determinism(
File "/dl/z00659619/torchtitan/torchtitan0301/torchtitan/torchtitan/distributed/utils.py", line 110, in set_determinism
torch.distributed.tensor._random.manual_seed(seed, spmd_mesh)
File "/root/anaconda3/envs/zqwtitan/lib/python3.10/site-packages/torch/distributed/tensor/_random.py", line 82, in manual_seed
_rng_tracker = OffsetBasedRNGTracker(device_mesh, run_state_sync=False)
File "/root/anaconda3/envs/zqwtitan/lib/python3.10/site-packages/torch/distributed/tensor/_random.py", line 174, in __init__
raise RuntimeError(
RuntimeError: OffsetBasedRNGTracker instantiation requires the presence of CUDA/CUDA-like device. Got npu instead.
```
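For context, a hypothetical minimal trigger (it assumes an `npu` backend is registered; it is not runnable on stock builds):
```python
import torch
import torch.distributed.tensor._random as dtensor_random
from torch.distributed.device_mesh import init_device_mesh

mesh = init_device_mesh("npu", (8,))   # third-party device type
dtensor_random.manual_seed(42, mesh)   # raises: requires a CUDA/CUDA-like device
```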
### Versions
2.7.0
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,905,884,556
|
[Flex Attention] support num_heads > 1 in block_mask
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 4
|
CONTRIBUTOR
|
Previously, flex decoding raised an error when the block mask had num_heads > 1, so users had to use num_heads = 1 or explicitly set `kernel_options={"FORCE_USE_FLEX_ATTENTION": True}`.
This PR fixes that. When not using grouped query attention (GQA, i.e., Hq == Hkv), we support block masks with num_heads = 1 and num_heads = num_query_heads (i.e., Hq). This is the same setting as the flex attention kernel.
When using GQA (i.e., Hq != Hkv), we support block masks with num_heads = 1. When num_heads = Hq, we fall back to the flex attention kernel, so users no longer need to explicitly set `kernel_options={"FORCE_USE_FLEX_ATTENTION": True}`.
Why fall back? In the current flex decoding Triton kernel, grouped query heads for the same kv head are handled by the same thread block. Supporting num_heads = Hq with GQA would require supporting different kv num-blocks for different query heads within the same thread block, leading to lots of redundant work. So we are better off using the main flex_attention kernel, where each query head is handled by a separate block.
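A hedged sketch of the non-GQA configurations this enables (shapes and mask are arbitrary; requires a CUDA device):
```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

B, Hq, S_q, S_kv, D = 2, 8, 1, 1024, 64  # single-token decode, Hq == Hkv

def causal(b, h, q_idx, kv_idx):
    # the decode query token sits at position S_kv - 1 of the sequence
    return q_idx + (S_kv - S_q) >= kv_idx

q = torch.randn(B, Hq, S_q, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, Hq, S_kv, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, Hq, S_kv, D, device="cuda", dtype=torch.float16)

# Both a head-broadcast mask (H=1) and a per-head mask (H=Hq) are accepted now.
mask_shared = create_block_mask(causal, B, 1, S_q, S_kv)
mask_per_head = create_block_mask(causal, B, Hq, S_q, S_kv)

compiled = torch.compile(flex_attention)
out1 = compiled(q, k, v, block_mask=mask_shared)
out2 = compiled(q, k, v, block_mask=mask_per_head)
```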
Fixes #148527
Fixes #147267
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @drisspg @yanboliang
| true
|
2,905,884,349
|
[Inductor] RuntimeError: derivative for aten::heaviside is not implemented
|
Cookiee235
|
closed
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
### Reproducible script
```
import torch
class SimpleModel(torch.nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.linear = torch.nn.Linear(10, 10)
def forward(self, x):
x = self.linear(x)
x = torch.heaviside(x, torch.tensor([0.0]))
return x
model = SimpleModel()
inputs = torch.randn(10, 10)
torch.set_num_interop_threads(4)
num_threads = torch.get_num_threads()
with torch.no_grad():
res = model(inputs)
compiled_model = torch.compile(model, backend='inductor')
compiled_out = compiled_model(inputs) # failed
```
### StackTrace
```
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0310/torch.set_num_interop_threads.py", line 27, in <module>
compiled_out = compiled_model(inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1429, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1210, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 597, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1056, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 758, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 794, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 712, in transform
tracer.run()
~~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3315, in run
super().run()
~~~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3511, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3496, in _return
self.output.compile_subgraph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self,
^^^^^
...<2 lines>...
),
^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1141, in compile_subgraph
self.compile_and_call_fx_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
tx, list(reversed(stack_values)), root, output_replacements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1434, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1484, in call_user_compiler
return self._call_user_compiler(gm)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1541, in _call_user_compiler
raise BackendCompilerFailed(
self.compiler_fn, e, inspect.currentframe()
).with_traceback(e.__traceback__) from None
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1516, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in__call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/__init__.py", line 2349, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2087, in compile_fx
return aot_autograd(
~~~~~~~~~~~~~
...<6 lines>...
cudagraphs=cudagraphs,
~~~~~~~~~~~~~~~~~~~~~~
)(model_, example_inputs_)
~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
dispatch_and_compile,
...<5 lines>...
remote,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py",line 779, in load
compiled_fn = dispatch_and_compile()
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
functional_call,
^^^^^^^^^^^^^^^^
...<3 lines>...
shape_env,
^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
~~~~~~~~~~~^
flat_fn,
^^^^^^^^
...<2 lines>...
fw_metadata=fw_metadata,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 783, in aot_dispatch_autograd
fx_g, joint_inputs, maybe_subclass_meta = aot_dispatch_autograd_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~^
flat_fn, flat_args, aot_config, fw_metadata=fw_metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 318, in aot_dispatch_autograd_graph
fx_g = _create_graph(joint_fn_to_trace, updated_joint_inputs, aot_config=aot_config)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 55, in _create_graph
fx_g = make_fx(
...<3 lines>...
pre_dispatch=aot_config.pre_dispatch,
)(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2240,in wrapped
return make_fx_tracer.trace(f, *args)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2178,in trace
return self._trace_inner(f, *args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2149,in _trace_inner
t = dispatch_trace(
wrap_key(func, args, self.fx_tracer, self.pre_dispatch),
tracer=self.fx_tracer,
concrete_args=tuple(phs),
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1174,in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/_symbolic_trace.py", line 838, in trace
(self.create_arg(fn(*args)),),
~~^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/_symbolic_trace.py", line 692, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1229,in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 717, in inner_fn
outs = fn(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 668, in joint_helper
return _functionalized_f_helper(primals, tangents)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 416, in _functionalized_f_helper
f_outs = fn(*f_args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 283, in inner_fn_with_anomaly
return inner_fn(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 268, in inner_fn
backward_out = torch.autograd.grad(
needed_outs,
...<2 lines>...
allow_unused=True,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/__init__.py", line 451, in grad
return handle_torch_function(
grad,
...<9 lines>...
materialize_grads=materialize_grads,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1277,in __torch_function__
return func(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/__init__.py", line 502, in grad
result = _engine_run_backward(
outputs,
...<5 lines>...
accumulate_grad=False,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
t_outputs, *args, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^
) # Calls into the C++ engine to run the backward pass
^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: derivative for aten::heaviside is not implemented
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,905,876,615
|
Fix invalid format string in libfmt calls
|
cyyever
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps"
] | 8
|
COLLABORATOR
|
Wrap `shaderSource` inside `fmt::runtime` because the format string is not a string literal and can't pass libfmt's compile-time check in C++23.
| true
|
2,905,871,886
|
Fix "invalid application of 'sizeof' to an incomplete type"
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 8
|
COLLABORATOR
|
Fixes builds with C++23 and constexpr `std::unique_ptr`.
| true
|
2,905,815,888
|
DISABLED test_comprehensive_nn_functional_conv_transpose3d_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 28
|
NONE
|
Platforms: inductor, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_conv_transpose3d_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38456186869).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_nn_functional_conv_transpose3d_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1458, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2288, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1239, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1239, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1239, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 887, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 879, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1128, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1088, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 631, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 513, in check_model
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 2744 (0.0%)
Greatest absolute difference: 9.1552734375e-05 at index (0, 5, 4, 4, 3) (up to 1.5e-05 allowed)
Greatest relative difference: 1.7759699403541163e-05 at index (0, 5, 4, 4, 3) (up to 1.3e-05 allowed)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 8: SampleInput(input=Tensor[size=(1, 4, 5, 5, 5), device="cuda:0", dtype=torch.float32], args=(Tensor[size=(4, 8, 3, 3, 3), device="cuda:0", dtype=torch.float32],None), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=8 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_nn_functional_conv_transpose3d_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,905,815,739
|
DISABLED test_train_parity_multi_group_unshard_async_op (__main__.TestFullyShard1DTrainingCore)
|
pytorch-bot[bot]
|
closed
|
[
"oncall: distributed",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2"
] | 5
|
NONE
|
Platforms: inductor, linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_train_parity_multi_group_unshard_async_op&suite=TestFullyShard1DTrainingCore&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38454373016).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_train_parity_multi_group_unshard_async_op`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 899, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 0 terminated or timed out after 300.1126618385315 seconds
```
</details>
Test file path: `distributed/_composable/fsdp/test_fully_shard_training.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @chauhang @penguinwu
| true
|
2,905,815,685
|
DISABLED test_capture_tracked_nested_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, mac, macos, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_capture_tracked_nested_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38454349123).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_capture_tracked_nested_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,905,808,113
|
[ROCm] AOTriton 0.9.2b RuntimeError Only Supports Head Dimension <=256
|
Beinsezii
|
closed
|
[
"module: rocm",
"triaged",
"module: sdpa"
] | 4
|
NONE
|
### 🐛 Describe the bug
The latest PyTorch nightly makes it impossible to run `scaled_dot_product_attention` on tensors with head dim > 256 without manually disabling the flash/efficient attention kernels, likely as a result of https://github.com/pytorch/pytorch/pull/148433 trying to enable 512 >= hdim > 256 support.
```python
import torch
q = torch.ones([1, 1, 16384, 512], dtype=torch.float16, device="cuda")
k, v = q.clone(), q.clone()
result = torch.nn.functional.scaled_dot_product_attention(q, k, v)
```
Full error
```
Traceback (most recent call last):
File "/home/beinsezii/Python/quickdif/r.py", line 6, in <module>
result = torch.nn.functional.scaled_dot_product_attention(q, k, v)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: FlashAttention forward only supports head dimension at most 256
```
Verified AOTriton 0.9.2
```
> readelf -p .comment .venv/lib/python3.12/site-packages/torch/lib/libaotriton_v2.so
[ 2f] AOTriton 0.9.2
```
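For reference, a minimal sketch of the manual workaround alluded to above: steer SDPA away from the flash/efficient kernels so their head-dimension limit never applies. This assumes the `torch.nn.attention.sdpa_kernel` context manager available in recent releases, and it trades speed for the math backend:
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.ones([1, 1, 16384, 512], dtype=torch.float16, device="cuda")
k, v = q.clone(), q.clone()

# Force the math backend so the flash/efficient kernels (and their
# head-dim <= 256 limit) are never selected for this call.
with sdpa_kernel(SDPBackend.MATH):
    result = torch.nn.functional.scaled_dot_product_attention(q, k, v)
```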
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250309+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20250207
Clang version: 19.1.7
CMake version: version 3.31.6
Libc version: glibc-2.41
Python version: 3.12.7 (main, Oct 8 2024, 00:20:25) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.13.5-arch1-1-x86_64-with-glibc2.41
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon Graphics (gfx1100)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7900X 12-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 70%
CPU max MHz: 5908.0000
CPU min MHz: 545.0000
BogoMIPS: 9382.43
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] pytorch-triton-rocm==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250309+rocm6.3
[pip3] torch_migraphx==0.0.4
[pip3] torchao==0.10.0.dev20250309+rocm6.3
[pip3] torchsde==0.2.6
[conda] Could not collect
```
cc @mruberry @jbschlosser @walterddr @mikaylagawarecki @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,905,806,795
|
fix dynamo ide
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148849
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,905,753,715
|
[dynamo][guards] Dont guard on ephemeral numpy tensors
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148917
* __->__ #148848
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,905,748,547
|
Fix AttributeError for `_get_vc_env` with setuptools>=75.9.0
|
sigvoid
|
open
|
[
"triaged",
"open source"
] | 7
|
NONE
|
```
File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\cpp_extension.py", line 2172, in _get_vc_env
return _msvccompiler._get_vc_env(vc_arch)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'distutils._msvccompiler' has no attribute '_get_vc_env'
```
see https://github.com/pypa/setuptools/blob/v75.9.0/setuptools/_distutils/_msvccompiler.py
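For context, a hedged sketch of the kind of defensive lookup `cpp_extension` could do instead of crashing; the pin-to-older-setuptools suggestion in the sketch is an assumption about a viable workaround, not a statement of the actual fix:
```python
def _get_vc_env(vc_arch: str) -> dict[str, str]:
    # Sketch only: fall back gracefully when the private helper is gone.
    from distutils import _msvccompiler

    get_vc_env = getattr(_msvccompiler, "_get_vc_env", None)
    if get_vc_env is None:
        raise RuntimeError(
            "distutils._msvccompiler._get_vc_env is unavailable with the installed "
            "setuptools; as a workaround, pin setuptools<75.9 or use a build of "
            "PyTorch that handles the relocated helper."
        )
    return get_vc_env(vc_arch)
```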
| true
|
2,905,741,079
|
C++ support to print symbolic tensors as `Symbolic tensor: size=(...)`
|
grodranlorth
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 11
|
NONE
|
Fixes https://github.com/pytorch/pytorch/issues/145491
| true
|
2,905,734,501
|
Unable to compile pad/unpad from Flash Attention 2
|
conceptofmind
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2
|
NONE
|
### 🐛 Describe the bug
Hello all,
I am attempting to compile a model that is unpadding and padding the input ids for an encoder with Flash Attention 2. The pad and unpad code can be found here: https://github.com/Dao-AILab/flash-attention/blob/main/flash_attn/bert_padding.py#L98
Example code:
```python
x, indices, cu_seqlens, max_seqlen, _ = unpad_input(
inputs=x, attention_mask=attn_mask
)
x = self.embed(x)
x = self.transformer(x, attn_mask, cu_seqlens, max_seqlen)
x = self.lmhead(x)
```
### Error logs
When trying to compile the model I receive this graph break:
```
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0] Graph break from `Tensor.item()`, consider setting:
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0] torch._dynamo.config.capture_scalar_outputs = True
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0] or:
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0] to include these operations in the captured graph.
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0]
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0] Graph break: from user code at:
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0] File "/models/attn.py", line 171, in torch_dynamo_resume_in__upad_input_at_170
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0] max_seqlen_in_batch = seqlens_in_batch.max().item()
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0]
W0308 14:43:37.982000 21720 site-packages/torch/_dynamo/variables/tensor.py:869] [7/0]
tensor([[-1.0541, -0.1679, 0.2720, ..., -0.0505, 0.5067, -0.0918],
[-0.8398, -0.2256, 0.5115, ..., -0.2662, 0.0860, -0.1525],
[-0.2781, -0.3775, 0.3996, ..., -0.1714, 0.7148, 0.3041]],
device='cuda:0', grad_fn=<CompiledFunctionBackward>)
Number of parameters in torch model: 34554170
```
https://github.com/Dao-AILab/flash-attention/blob/5639b9d26dac63d912d6815cb4369250f6cef764/flash_attn/bert_padding.py#L115
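As the warning itself suggests, the graph break on `.item()` can be avoided by letting Dynamo capture scalar outputs. A minimal sketch of just that line (whether the full unpad/pad path then compiles cleanly is not verified here):
```python
import torch

# Trace `.item()` calls (e.g. seqlens_in_batch.max().item()) into the graph
# instead of breaking on them; TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1 is the
# equivalent environment switch.
torch._dynamo.config.capture_scalar_outputs = True


@torch.compile
def max_seqlen_from_mask(attention_mask: torch.Tensor) -> int:
    seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
    # Mirrors the line the graph break points at in bert_padding.py.
    return seqlens_in_batch.max().item()


print(max_seqlen_from_mask(torch.ones(2, 8, dtype=torch.int64)))
```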
### Versions
python3 collect_env.py
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 24353 100 24353 0 0 68213 0 --:--:-- --:--:-- --:--:-- 68407
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-10600K CPU @ 4.10GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU max MHz: 4800.0000
CPU min MHz: 800.0000
BogoMIPS: 8199.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu11==8.7.0.84
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.0.0+45fff310c8
[pip3] rotary-embedding-torch==0.5.3
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu11 8.7.0.84 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.20.5 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.0.0+45fff310c8 pypi_0 pypi
[conda] rotary-embedding-torch 0.5.3 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,905,679,656
|
[Export] fix automatically convert instances of _check(u>=0) to check_is_size()
|
SandishKumarHN
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
Fixes #148826
Understanding:
1. PyTorch should automatically convert instances of _check(u>=0) to check_is_size()
2. The export mechanism should suggest using check_is_size() instead of _check(u>=0) when applicable
Changes made:
1. Added a helper function, `is_non_negative_check`, to detect non-negative checks
2. Modified the suggestion logic in `_suggest_torch_checks` to detect and handle non-negative checks (a minimal sketch of the conversion is shown below)
3. Added unit tests: `test_is_non_negative_check_function`, `test_suggest_torch_checks_with_non_negative_check`, and `test_suggest_torch_checks_with_regular_check`
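A minimal sketch of the conversion itself, using only the public `torch._check` / `torch._check_is_size` helpers referenced above:
```python
import torch

def slice_to(x: torch.Tensor, n: torch.Tensor) -> torch.Tensor:
    u = n.item()
    # Before: a plain non-negative assertion on the unbacked value.
    torch._check(u >= 0)
    # After: the size-style check the exporter should suggest/convert to,
    # which also marks `u` as usable wherever a size is expected.
    torch._check_is_size(u)
    return x[:u]

print(slice_to(torch.arange(10), torch.tensor(4)))
```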
unit tests:
(base) sany@sandishs-Laptop pytorch % pytest test/export/test_export.py::TestExport::test_suggest_torch_checks_with_non_negative_check
=================================== test session starts ==================
platform darwin -- Python 3.9.19, pytest-7.3.2, pluggy-1.5.0
rootdir: /Users/sany/git/pytorch
configfile: pytest.ini
plugins: xdoctest-1.1.0, cpp-2.3.0, flakefinder-1.1.0, anyio-4.6.0, rerunfailures-14.0, hypothesis-5.35.1, xdist-3.3.1, subtests-0.13.1, typeguard-4.3.0
collected 1 item
Running 1 items in this shard
test/export/test_export.py . [100%]
======================== 1 passed in 1.67s =======================
(base) sany@sandishs-Laptop pytorch % pytest test/export/test_export.py::TestExport::test_suggest_torch_checks_with_regular_check
======================= test session starts =================
platform darwin -- Python 3.9.19, pytest-7.3.2, pluggy-1.5.0
rootdir: /Users/sany/git/pytorch
configfile: pytest.ini
plugins: xdoctest-1.1.0, cpp-2.3.0, flakefinder-1.1.0, anyio-4.6.0, rerunfailures-14.0, hypothesis-5.35.1, xdist-3.3.1, subtests-0.13.1, typeguard-4.3.0
collected 1 item
Running 1 items in this shard
test/export/test_export.py . [100%]
================================= 1 passed in 1.61s ================
(base) sany@sandishs-Laptop pytorch % pytest test/export/test_export.py::TestExport::test_is_non_negative_check_function
================================ test session starts =============
platform darwin -- Python 3.9.19, pytest-7.3.2, pluggy-1.5.0
rootdir: /Users/sany/git/pytorch
configfile: pytest.ini
plugins: xdoctest-1.1.0, cpp-2.3.0, flakefinder-1.1.0, anyio-4.6.0, rerunfailures-14.0, hypothesis-5.35.1, xdist-3.3.1, subtests-0.13.1, typeguard-4.3.0
collected 1 item
Running 1 items in this shard
test/export/test_export.py . [100%]
======================= 1 passed in 1.62s =========================
(base) sany@sandishs-Laptop pytorch %
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,905,654,595
|
make `to_empty` a no-op if parameter/buffer already on `device`
|
ringohoffman
|
open
|
[
"module: nn",
"triaged",
"needs design"
] | 7
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
See also:
* https://github.com/huggingface/transformers/issues/34234#issuecomment-2429754244
When loading a model with non-persistent buffers on the meta device, the default behavior of [`accelerate.init_empty_weights()`](https://huggingface.co/docs/accelerate/v0.11.0/en/big_modeling#accelerate.init_empty_weights) is to load buffers on the default device.
This prevents you from:
1. needing to know that there are non-persistent buffers in whatever model you are loading, which cannot simply be reinitialized by calling `load_state_dict`
2. needing to know how to re-initialize those non-persistent buffers, which is both model and implementation specific and may break between versions
3. needing to update existing models to comply with some sort of `reset_parameters`-type API to enable restoring these non-persistent buffers through a stable API
4. needing to know that you need to write your models in this way to avoid problems with non-persistent buffers being initialized on the meta-device
Below you will see that calling `nn.Module.to_empty(device="cpu")` on a model with non-persistent buffers that are already on the cpu ends up replacing the non-persistent buffers--which were already on the desired device--with an empty tensor.
```python
import accelerate
import transformers
with accelerate.init_empty_weights():
config = transformers.AutoConfig.from_pretrained("/models/meta-llama/Llama-3.2-1B-Instruct")
model = transformers.AutoModelForCausalLM.from_config(config)
parameter_devices = {p.device for p in model.parameters()}
print(f"{parameter_devices=}") # parameter_devices={device(type='meta')}
buffer_devices = {b.device for b in model.buffers()}
print(f"{buffer_devices=}") # buffer_devices={device(type='cpu')}
buffer_names = {name for name, buffer in model.named_buffers()}
state_dict = set(model.state_dict())
assert state_dict.isdisjoint(buffer_names) # these are non-persistent buffers that will not be re-initialized by calling load_state_dict
rotary_embedding = next(iter(model.buffers()))
print(rotary_embedding) # tensor([1.0000e+00, 6.6360e-01, 4.4037e-01, 2.9223e-01, 1.9392e-01, 1.2869e-01, ...])
model = model.to_empty(device=rotary_embedding.device)
rotary_embedding = next(iter(model.buffers()))
print(rotary_embedding) # tensor([0.0000e+00, 0.0000e+00, 7.5098e-25, 0.0000e+00, 6.0012e-30, 0.0000e+00,, ...])
```
My suggestion is a simple, non-breaking change to `nn.Module.to_empty`, which is to only replace the parameter or buffer with an empty tensor if it is not already on the device the empty tensor would be created on:
```python
from typing import TypeVar

import torch
from torch import nn

ModuleT = TypeVar("ModuleT", bound=nn.Module)


def to_empty(module: ModuleT, *, device: torch.device | str | int | None, recurse: bool = True) -> ModuleT:
    """Move the parameters and buffers to the specified device without copying storage if they are not already on that device.

    Args:
        module: The module whose parameters and buffers to (maybe) move.
        device: The desired device of the parameters and buffers in the module. If `None`, the default device is used.
        recurse: Whether parameters and buffers of submodules should be recursively moved to the specified device.

    Returns:
        The (maybe) moved module.
    """
    device = torch.empty((), device=device).device
    return module._apply(
        lambda t: torch.empty_like(t, device=device) if t.device != device else t,
        recurse=recurse,
    )
```
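A small usage sketch of the helper above (the `Toy` module is hypothetical, only to illustrate that a buffer already on the target device is left untouched while meta parameters are still materialized; it reuses `to_empty` from the snippet above):
```python
import torch
from torch import nn


class Toy(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Parameter starts on the meta device; buffer is real and non-persistent.
        self.weight = nn.Parameter(torch.empty(4, 4, device="meta"))
        self.register_buffer("freqs", torch.arange(4, dtype=torch.float32), persistent=False)


m = Toy()
before = m.freqs.clone()
to_empty(m, device="cpu")                 # the helper sketched above
assert m.weight.device.type == "cpu"      # meta parameter materialized (uninitialized)
assert torch.equal(m.freqs, before)       # CPU buffer contents preserved
```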
### Alternatives
Currently I have no choice but to use the above implementation in my own code, but given that meta device initialization + `nn.Module.to_empty` is [part of the official FSDP2 guide for initializing models](https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md):
> After with FSDP2:
> ```python
> with torch.device("meta"):
> model = Transformer()
> for module in model.modules():
> if isinstance(module, TransformerBlock):
> fully_shard(module)
> fully_shard(model)
> for tensor in itertools.chain(model.parameters(), model.buffers()):
> assert tensor.device == torch.device("meta")
> # Allocate buffers and sharded parameters on GPU
> model.to_empty(device="cuda")
> # Run user-defined initializers
> model.init_weights() # or `model.apply(init_weights)`
> ```
as well as the popularity of the `transformers` library, I think this change will help to prevent the confusion that I experienced.
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,905,569,419
|
We should use max size instead of hint size when autotuning
|
bobrenjc93
|
open
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
From x-ref https://fb.workplace.com/profile.php?id=61573598535425
@eellison
> At compile time (mm tuning), it will use the hint, aka first size. we should use the max size. Similarly, runtime will use max size. When the max size diverges from runtime I think we could just reuse the existing cpp_wrapper compile time tuning.
@Chillee
> yeah, I think using max size is likely to lead to better generalization than using the first size.
cc @chauhang @penguinwu
| true
|
2,905,517,776
|
build pytorch2.3.0 cpu with mkldnn_acl 24.08 failed on aarch64
|
Serenagirl
|
open
|
[
"module: build",
"triaged",
"module: mkldnn",
"module: third_party",
"module: arm"
] | 3
|
NONE
|
I built ACL 24.08 with `cmake .. -DCMAKE_BUILD_TYPE=Release -DARM_COMPUTE_OPENMP=1 -DARM_COMPUTE_WERROR=0 -DARM_COMPUTE_BUILD_EXAMPLES=0 -DARM_COMPUTE_BUILD_TESTING=0 -DCMAKE_INSTALL_PREFIX=/opt/acl` followed by `cmake --build . --parallel 160`,
then built PyTorch with `python setup.py build --cmake-only`,
but the build failed:

cc @malfet @seemethere @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @milpuz01
| true
|
2,905,430,822
|
[ONNX] Export fails on `torchvision.transforms.functional.resize` (_upsample_bilinear2d_aa)
|
FabianSchuetze
|
closed
|
[
"module: onnx",
"triaged"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following fails for me:
```
import torch
import torchvision
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
y = torchvision.transforms.functional.resize(x, size=[1024, 1024])
return y
model = Model()
x = torch.rand(1, 3, 400, 500)
y = model(x)
onnx_model = torch.onnx.export(model, x, dynamo=True)
```
The error message I got is:
```
/home/fabian/.local/lib/python3.12/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/home/fabian/.local/lib/python3.12/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
[torch.onnx] Obtain model graph for `Model()` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `Model()` with `torch.export.export(..., strict=False)`... ✅
[torch.onnx] Run decomposition...
[torch.onnx] Run decomposition... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ❌
---------------------------------------------------------------------------
DispatchError Traceback (most recent call last)
File ~/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py:708, in _translate_fx_graph(fx_graph, model, graph_like, owned_graphs, lower, registry)
707 if lower == "at_conversion":
--> 708 _handle_call_function_node_with_lowering(
709 model,
710 node,
711 node_name_to_values,
712 graph_like=graph_like,
713 constant_farm=constant_farm,
714 registry=registry,
715 opset=opset,
716 node_name_to_local_functions=node_name_to_local_functions,
717 )
718 else:
719 # No lowering
File ~/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py:490, in _handle_call_function_node_with_lowering(model, node, node_name_to_values, graph_like, constant_farm, registry, opset, node_name_to_local_functions)
488 if onnx_function is None:
489 # TODO(justinchuby): Fall back to ATen op or do something else?
--> 490 raise _errors.DispatchError(
491 f"No ONNX function found for {node.target!r}. Failure message: {message}"
492 )
494 # Map FX inputs to ONNX inputs and fill optional inputs.
495 # torch_args and torch_kwargs are for op-level validation
DispatchError: No ONNX function found for <OpOverload(op='aten._upsample_bilinear2d_aa', overload='default')>. Failure message: No decompositions registered for the real-valued input
The above exception was the direct cause of the following exception:
ConversionError Traceback (most recent call last)
File ~/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py:1372, in export(model, args, kwargs, registry, dynamic_shapes, input_names, output_names, report, verify, profile, dump_exported_program, artifacts_dir, verbose)
1370 try:
1371 # Convert the exported program to an ONNX model
-> 1372 onnx_program = _exported_program_to_onnx_program(
1373 decomposed_program, registry=registry
1374 )
1376 # Run the ONNX passes
File ~/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py:1008, in _exported_program_to_onnx_program(exported_program, registry, lower)
1006 graph_like = func
-> 1008 values = _translate_fx_graph(
1009 fx_graph,
1010 model,
1011 graph_like=graph_like,
1012 owned_graphs=owned_graphs,
1013 lower=lower,
1014 registry=registry,
1015 )
1017 assert name == "", "The last module processed should be the root module"
File ~/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py:734, in _translate_fx_graph(fx_graph, model, graph_like, owned_graphs, lower, registry)
733 except Exception as e:
--> 734 raise _errors.ConversionError(
735 f"Error when translating node {node.format_node()}. See the stack trace for more information."
736 ) from e
737 return node_name_to_values
ConversionError: Error when translating node %_upsample_bilinear2d_aa : [num_users=1] = call_function[target=torch.ops.aten._upsample_bilinear2d_aa.default](args = (%x, [1024, 1024], False), kwargs = {}). See the stack trace for more information.
The above exception was the direct cause of the following exception:
ConversionError Traceback (most recent call last)
Cell In[5], line 1
----> 1 torch.onnx.export(model, x, dynamo=True)
File ~/.local/lib/python3.12/site-packages/torch/onnx/__init__.py:351, in export(model, args, f, kwargs, export_params, verbose, input_names, output_names, opset_version, dynamic_axes, keep_initializers_as_inputs, dynamo, external_data, dynamic_shapes, custom_translation_table, report, optimize, verify, profile, dump_exported_program, artifacts_dir, fallback, training, operator_export_type, do_constant_folding, custom_opsets, export_modules_as_functions, autograd_inlining, **_)
349 if isinstance(args, torch.Tensor):
350 args = (args,)
--> 351 return _compat.export_compat(
352 model,
353 args,
354 f,
355 kwargs=kwargs,
356 export_params=export_params,
357 verbose=verbose,
358 input_names=input_names,
359 output_names=output_names,
360 opset_version=opset_version,
361 custom_translation_table=custom_translation_table,
362 dynamic_axes=dynamic_axes,
363 keep_initializers_as_inputs=keep_initializers_as_inputs,
364 external_data=external_data,
365 dynamic_shapes=dynamic_shapes,
366 report=report,
367 optimize=optimize,
368 verify=verify,
369 profile=profile,
370 dump_exported_program=dump_exported_program,
371 artifacts_dir=artifacts_dir,
372 fallback=fallback,
373 )
374 else:
375 from torch.onnx.utils import export
File ~/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_compat.py:304, in export_compat(model, args, f, kwargs, export_params, verbose, input_names, output_names, opset_version, custom_translation_table, dynamic_axes, dynamic_shapes, keep_initializers_as_inputs, external_data, report, optimize, verify, profile, dump_exported_program, artifacts_dir, fallback, **_)
302 registry.register_op(torch_op, op, is_complex=False)
303 try:
--> 304 onnx_program = _core.export(
305 model,
306 args,
307 kwargs,
308 registry=registry,
309 dynamic_shapes=dynamic_shapes,
310 input_names=input_names,
311 output_names=output_names,
312 profile=profile,
313 report=report,
314 verify=verify,
315 dump_exported_program=dump_exported_program,
316 artifacts_dir=artifacts_dir,
317 verbose=verbose,
318 )
320 except Exception as e:
321 if fallback:
File ~/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py:1416, in export(model, args, kwargs, registry, dynamic_shapes, input_names, output_names, report, verify, profile, dump_exported_program, artifacts_dir, verbose)
1413 else:
1414 report_path = None
-> 1416 raise _errors.ConversionError(
1417 _STEP_THREE_ERROR_MESSAGE
1418 + (f"\nError report has been saved to '{report_path}'." if report else "")
1419 + _summarize_exception_stack(e)
1420 ) from e
1422 profile_result = _maybe_stop_profiler_and_get_result(profiler)
1424 assert onnx_program.exported_program is not None
ConversionError: Failed to convert the exported program to an ONNX model. This is step 3/3 of exporting the model to ONNX. Next steps:
- If there is a missing ONNX function, implement it and register it to the registry.
- If there is an internal error during ONNX conversion, debug the error and summit a PR to PyTorch.
- Create an error report with `torch.onnx.export(..., report=True)`, and save the ExportedProgram as a pt2 file. Create an issue in the PyTorch GitHub repository against the *onnx* component. Attach the error report and the pt2 model.
## Exception summary
<class 'torch.onnx._internal.exporter._errors.DispatchError'>: No ONNX function found for <OpOverload(op='aten._upsample_bilinear2d_aa', overload='default')>. Failure message: No decompositions registered for the real-valued input
⬆️
<class 'torch.onnx._internal.exporter._errors.ConversionError'>: Error when translating node %_upsample_bilinear2d_aa : [num_users=1] = call_function[target=torch.ops.aten._upsample_bilinear2d_aa.default](args = (%x, [1024, 1024], False), kwargs = {}). See the stack trace for more information.
(Refer to the full stack trace above for more information.)
```
Is it true that resize doesn't work? Is there a workaround?
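One possible workaround, with the caveat that it changes resampling quality: pass `antialias=False` so the graph uses the plain bilinear upsample instead of the `_upsample_bilinear2d_aa` op the exporter cannot dispatch. Whether the non-antialiased path exports cleanly for this model is an assumption based on the error above, not something verified here:
```python
import torch
import torchvision


class Model(torch.nn.Module):
    def forward(self, x):
        # antialias=False avoids aten._upsample_bilinear2d_aa, the op the
        # exporter reports as having no registered decomposition.
        return torchvision.transforms.functional.resize(
            x, size=[1024, 1024], antialias=False
        )


x = torch.rand(1, 3, 400, 500)
onnx_model = torch.onnx.export(Model(), x, dynamo=True)
```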
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 155H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 31%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 896 KiB (14 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fast_pytorch_kmeans==0.2.2
[pip3] flake8==7.1.2
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.0
[pip3] torch==2.6.0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
| true
|
2,905,398,382
|
[MPS] Fix Wreorder-init-list
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps"
] | 6
|
COLLABORATOR
|
Fixes the following warning:
```
warning: ISO C++ requires field designators to be specified in declaration order; field 'value' will be initialized after field 'size' [-Wreorder-init-list]
662 | return {.value.cf = scalar.to<c10::complex<float>>(), .size = sizeof(int64_t), .type = type};
```
| true
|
2,905,312,917
|
[Inductor] Inconsistency predict results for the compiled models with the original model
|
Cookiee235
|
closed
|
[] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
### Description
After compilation with Inductor, the compiled model's output differs significantly from the original model's (a maximum absolute difference of about 0.05), which seems to reveal a bug.
### The reproducible script
```python
import torch
class TestModel(torch.nn.Module):
def __init__(self):
super(TestModel, self).__init__()
self.fc1 = torch.nn.Linear(10, 10)
self.fc2 = torch.nn.Linear(10, 10)
def forward(self, x):
x = self.fc1(x)
x = torch.nn.functional.gumbel_softmax(x, tau=1.0, hard=False, dim=-1)
x = self.fc2(x)
x = torch.nn.functional.softshrink(x, lambd=0.5)
return x
model = TestModel()
compiled_model = torch.compile(model, backend='inductor')
for i in range(30):
inputs = torch.randn(1, 10)
ori_res = model(*inputs)
compiled_out = compiled_model(*inputs)
torch.testing.assert_close(ori_res, compiled_out, rtol=1e-3, atol=1e-3)
```
### StackTrace (inconsistency results)
```
(torch) [qshenaf@sccpu6 0309]$ python torch.nn.functional.gumbel_softmax.py
/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/guards.py:741: RuntimeWarning: Guards may run slower on Python 3.13.0. Consider upgrading to Python 3.13.1+.
warnings.warn(
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0309/torch.nn.functional.gumbel_softmax.py", line 24, in <module>
torch.testing.assert_close(ori_res, compiled_out, rtol=1e-3, atol=1e-3)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 10 (20.0%)
Greatest absolute difference: 0.05042219161987305 at index (5,) (up to 0.001 allowed)
Greatest relative difference: inf at index (6,) (up to 0.001 allowed)
(torch) [qshenaf@sccpu6 0309]$
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 80%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Notaffected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
| true
|
2,905,265,327
|
NVLS support in Pytorch
|
rajagond
|
open
|
[
"oncall: distributed",
"triaged"
] | 2
|
NONE
|
Does PyTorch support NVLS? If not, how does it manage to call NCCL’s NVLS algorithm using `torch.distributed.all_reduce`?
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,905,247,105
|
Introduce TORCH_ABI_VERSION and a runtime aoti_torch_abi_version C shim ABI
|
janeyx99
|
closed
|
[
"release notes: cpp",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148836
<details>
ghstack-source-id: 9619e98a56b47312c0ddea04b9d9500dd8e554b3
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148836
</details>
| true
|
2,905,238,517
|
[Inductor] Error detected in ReluBackward0
|
Cookiee235
|
closed
|
[
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
### Description
The following script fails when run with `torch.compile(model, backend='inductor')` under the nightly version (2.7.0.dev20250308+cu126).
### Reproducible script
```python
import torch
torch.set_grad_enabled(True)
class SimpleModel(torch.nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.linear = torch.nn.Linear(10, 10)
def forward(self, x):
x = self.linear(x)
x = torch.nn.functional.relu(x)
x = torch.nn.functional.relu_(x)
return x
model = SimpleModel()
inputs = torch.randn(1, 10)
compiled_model = torch.compile(model, backend='inductor')
compiled_out = compiled_model(*inputs)
```
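For what it's worth, a hedged guess at the trigger (not a confirmed root cause): `relu_` rewrites in place the tensor that `ReluBackward0` saved for the backward, and the traced joint graph objects to that. A sketch of the same model without the in-place rewrite:
```python
import torch

class SimpleModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 10)

    def forward(self, x):
        x = self.linear(x)
        x = torch.nn.functional.relu(x)
        # Out-of-place second activation instead of relu_ on relu's saved output.
        x = torch.nn.functional.relu(x)
        return x

compiled_model = torch.compile(SimpleModel(), backend="inductor")
out = compiled_model(torch.randn(1, 10))
```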
### StackTrace
```
(torch) [qshenaf@sccpu6 0309]$ python torch.nn.functional.relu_.py
/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/graph.py:824: UserWarning: Error detected in ReluBackward0. Traceback of forward call that caused the error:
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0309/torch.nn.functional.relu_.py", line 12, in forward
x = torch.nn.functional.relu(x)
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:122.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0309/torch.nn.functional.relu_.py", line 19, in <module>
compiled_out = compiled_model(*inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1429, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1210, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 597, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1056, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 758, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 794, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 712, in transform
tracer.run()
~~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3315, in run
super().run()
~~~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3511, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3496, in _return
self.output.compile_subgraph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self,
^^^^^
...<2 lines>...
),
^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1141, in compile_subgraph
self.compile_and_call_fx_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
tx, list(reversed(stack_values)), root, output_replacements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1434, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1484, in call_user_compiler
return self._call_user_compiler(gm)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1541, in _call_user_compiler
raise BackendCompilerFailed(
self.compiler_fn, e, inspect.currentframe()
).with_traceback(e.__traceback__) from None
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1516, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in__call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/__init__.py", line 2349, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2087, in compile_fx
return aot_autograd(
~~~~~~~~~~~~~
...<6 lines>...
cudagraphs=cudagraphs,
~~~~~~~~~~~~~~~~~~~~~~
)(model_, example_inputs_)
~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
dispatch_and_compile,
...<5 lines>...
remote,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py",line 779, in load
compiled_fn = dispatch_and_compile()
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
functional_call,
^^^^^^^^^^^^^^^^
...<3 lines>...
shape_env,
^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
~~~~~~~~~~~^
flat_fn,
^^^^^^^^
...<2 lines>...
fw_metadata=fw_metadata,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 783, in aot_dispatch_autograd
fx_g, joint_inputs, maybe_subclass_meta = aot_dispatch_autograd_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~^
flat_fn, flat_args, aot_config, fw_metadata=fw_metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 318, in aot_dispatch_autograd_graph
fx_g = _create_graph(joint_fn_to_trace, updated_joint_inputs, aot_config=aot_config)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 55, in _create_graph
fx_g = make_fx(
...<3 lines>...
pre_dispatch=aot_config.pre_dispatch,
)(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2240,in wrapped
return make_fx_tracer.trace(f, *args)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2178,in trace
return self._trace_inner(f, *args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2149,in _trace_inner
t = dispatch_trace(
wrap_key(func, args, self.fx_tracer, self.pre_dispatch),
tracer=self.fx_tracer,
concrete_args=tuple(phs),
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1174,in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/_symbolic_trace.py", line 838, in trace
(self.create_arg(fn(*args)),),
~~^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/_symbolic_trace.py", line 692, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1229,in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 717, in inner_fn
outs = fn(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 668, in joint_helper
return _functionalized_f_helper(primals, tangents)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 416, in _functionalized_f_helper
f_outs = fn(*f_args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 283, in inner_fn_with_anomaly
return inner_fn(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 268, in inner_fn
backward_out = torch.autograd.grad(
needed_outs,
...<2 lines>...
allow_unused=True,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/__init__.py", line 451, in grad
return handle_torch_function(
grad,
...<9 lines>...
materialize_grads=materialize_grads,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1277,in __torch_function__
return func(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/__init__.py", line 502, in grad
result = _engine_run_backward(
outputs,
...<5 lines>...
accumulate_grad=False,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
t_outputs, *args, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^
) # Calls into the C++ engine to run the backward pass
^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [10]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
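For context on the error class: the version-counter check that fails here is the standard autograd in-place check, and it can be triggered in plain eager mode. The sketch below is an illustration only, not the model that produced the trace above (that code isn't included in this excerpt); it mutates `relu`'s output in place so that `ReluBackward0` sees a stale saved tensor and raises the same message.

```python
import torch

# Hypothetical eager-mode sketch of the same failure class (not the original model):
# ReluBackward0 saves relu's output for the backward pass, and the in-place add_
# bumps that tensor's version counter, so autograd's version check fails.
x = torch.randn(10, requires_grad=True)
y = torch.relu(x)   # ReluBackward0 saves y (its output) for backward
y.add_(1.0)         # in-place mutation: y is now at version 1
try:
    y.sum().backward()
except RuntimeError as e:
    # "... output 0 of ReluBackward0, is at version 1; expected version 0 instead."
    print(e)
```

In the compiled path the same check fires while AOTAutograd calls `torch.autograd.grad` to trace the joint graph (the `_functorch/_aot_autograd` frames above), which is why it surfaces wrapped in `BackendCompilerFailed` rather than at a user-level `backward()` call.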
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 80%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,905,235,434
|
Remove aoti_torch_cpu__weight_int4pack_mm_cpu_tensor
|
janeyx99
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
I noticed that this op was likely intended to be in the `extern "C"` portion of the file, but it was not added as such in https://github.com/pytorch/pytorch/pull/145250, which means this function is actually not stable and would get mangled by C++.
Following the thread there, I think there are two possible solutions:
(1) Since this op was never stable to begin with, and @Xia-Weiwen should add it to fallback_ops.py, I think this op is deletable and should be deleted before the 2.7 branch cut.
(2) Or we could just move the op to the right portion of the code. ~While I prefer just deleting the op, I am hesitant to do so in case there's something I haven't considered, so this PR does option 2.~
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148834
| true
|
2,905,230,544
|
[caffe2/torch] Fixup upstream LLVM (major version 21) API changes
|
HighW4y2H3ll
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"NNC",
"ciflow/trunk",
"release notes: jit"
] | 8
|
CONTRIBUTOR
|
Latest LLVM introduced two changes related to `Triple` usage that cause build failures when building PyTorch.
## Failure in llvm_codegen.cpp:
`Triple` is now stored in `Module` instead of as a string: https://github.com/llvm/llvm-project/commit/979c275097a642e9b96c6b0a12f013c831af3a6e
## Failure in llvm_jit.cpp:
The `Triple` argument was removed from `LLJITBuilder::...`: https://github.com/llvm/llvm-project/commit/b18e5b6a36399f11ba1152875b6892900c5afdaf
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|