| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,851,838,818
|
[export] Add meta for aten.bincount
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/147094
| true
|
2,851,835,028
|
DISABLED test_output_match_linalg_cholesky_ex_cpu_float32 (__main__.TestConsistencyCPU)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"module: macos",
"skipped",
"module: mps"
] | 2
|
NONE
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_output_match_linalg_cholesky_ex_cpu_float32&suite=TestConsistencyCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37174505386).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_output_match_linalg_cholesky_ex_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13308447103/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_mps.py", line 12640, in test_output_match
self.assertEqual(cpu_out, mps_out, atol=atol, rtol=rtol)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13308447103/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4102, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not equal!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 9 at index (0, 0)
Greatest relative difference: 1.0 at index (0, 0)
The failure occurred for item [1]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13308447103/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 3161, in wrapper
method(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13308447103/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13308447103/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13308447103/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 1626, in wrapper
fn(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13308447103/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 14: SampleInput(input=Tensor[size=(1, 1, 0, 0), device="cpu", dtype=torch.float32], args=(), kwargs={'upper': 'True'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=14 python test/test_mps.py TestConsistencyCPU.test_output_match_linalg_cholesky_ex_cpu_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_mps.py`
cc @clee2000 @wdvr @malfet @albanD @kulinseth @DenisVieriu97 @jhavukainen
| true
|
2,851,829,486
|
[cond] support output sizes mismatch in front end
|
ydwu4
|
closed
|
[
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147127
* #147045
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,851,803,230
|
[export] Generate printers/parsers for serialization enum values.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Summary:
Generate two helper functions for enum classes in generated_serialization_types.h:
printEnum: converts enum values into strings.
parseEnum: converts strings into enum values.
Test Plan: CI
Differential Revision: D69604850
| true
|
2,851,785,088
|
Remove outdated comment in ATen/mkl/Sparse.h about lack of Windows support
|
gajanan-choudhary
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fixes #147124.
* #102604 added support for Intel oneMKL Sparse BLAS APIs on Windows, which left an outdated comment in the codebase that can now be removed.
| true
|
2,851,779,444
|
Windows support of Intel oneMKL Sparse BLAS APIs and possible outdated comment
|
gajanan-choudhary
|
closed
|
[
"triaged"
] | 0
|
CONTRIBUTOR
|
* This is a minor issue about a possibly misleading comment in the codebase.
* oneMKL Sparse BLAS APIs were not supported on Windows in the past, see #97352.
* Support for oneMKL Sparse BLAS APIs on Windows was later enabled in #102604.
* Therefore, the comment at https://github.com/pytorch/pytorch/blob/9a883007a2fae8917fd9ff2cc89e73b43dbf35ef/aten/src/ATen/mkl/Sparse.h#L5-L6, last updated in #97353, now appears to be outdated. Its removal appears to have been missed in #102604.
* [Edit]: Created #147125 to fix this
| true
|
2,851,760,693
|
[ddp] decouple python reducer from compilation mode
|
xmfan
|
closed
|
[
"oncall: distributed",
"module: ddp",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (miscellaneous)"
] | 10
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147123
The current implementation reads as: we only actually use the "python_reducer" config if the DDP forward is compiled. Otherwise, we silently fall back to the C++ reducer with no DDPOptimizer.
I'm changing this behavior to always use the python reducer if the config is specified.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,851,758,877
|
PyTorch build with numpy version incompatibility
|
H-Huang
|
closed
|
[
"module: build",
"oncall: quantization",
"has workaround"
] | 2
|
MEMBER
|
I'm building the latest PyTorch using `TORCH_CUDA_ARCH_LIST="8.0 9.0" BUILD_TEST=0 USE_CUDA=1 USE_DISTRIBUTED=1 python setup.py install`
But when I `import torch` I get:
```
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.2.2 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last): File "<stdin>", line 1, in <module>
File "/home/howardhuang/local/pytorch/torch/__init__.py", line 2232, in <module>
from torch import quantization as quantization # usort: skip
File "/home/howardhuang/local/pytorch/torch/quantization/__init__.py", line 2, in <module>
from .fake_quantize import * # noqa: F403
File "/home/howardhuang/local/pytorch/torch/quantization/fake_quantize.py", line 10, in <module>
from torch.ao.quantization.fake_quantize import (
File "/home/howardhuang/local/pytorch/torch/ao/quantization/__init__.py", line 12, in <module>
from .pt2e._numeric_debugger import ( # noqa: F401
File "/home/howardhuang/local/pytorch/torch/ao/quantization/pt2e/_numeric_debugger.py", line 9, in <module>
from torch.ao.quantization.pt2e.graph_utils import bfs_trace_with_node_process
File "/home/howardhuang/local/pytorch/torch/ao/quantization/pt2e/graph_utils.py", line 9, in <module>
from torch.export import ExportedProgram
File "/home/howardhuang/local/pytorch/torch/export/__init__.py", line 70, in <module>
from .decomp_utils import CustomDecompTable
File "/home/howardhuang/local/pytorch/torch/export/decomp_utils.py", line 5, in <module>
from torch._export.utils import (
File "/home/howardhuang/local/pytorch/torch/_export/__init__.py", line 48, in <module>
from .wrappers import _wrap_submodules
File "/home/howardhuang/local/pytorch/torch/_export/wrappers.py", line 7, in <module>
from torch._higher_order_ops.strict_mode import strict_mode
File "/home/howardhuang/local/pytorch/torch/_higher_order_ops/__init__.py", line 1, in <module>
from torch._higher_order_ops._invoke_quant import (
File "/home/howardhuang/local/pytorch/torch/_higher_order_ops/_invoke_quant.py", line 8, in <module>
from torch._higher_order_ops.base_hop import BaseHOP, FunctionWithNoFreeVars
File "/home/howardhuang/local/pytorch/torch/_higher_order_ops/base_hop.py", line 12, in <module>
from torch._subclasses.functional_tensor import disable_functional_mode
File "/home/howardhuang/local/pytorch/torch/_subclasses/functional_tensor.py", line 45, in <module>
class FunctionalTensor(torch.Tensor):
File "/home/howardhuang/local/pytorch/torch/_subclasses/functional_tensor.py", line 275, in FunctionalTensor
cpu = _conversion_method_template(device=torch.device("cpu"))
/home/howardhuang/local/pytorch/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.2.2 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
(Triggered internally at /home/howardhuang/local/pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
```
The workaround, as the error mentions, is to downgrade the numpy version (`pip install "numpy<2"`), but I am curious whether this is expected, since we have `numpy` as a dependency in our `requirements.txt` and I believe the latest stable version of numpy is >2.
JFYI: I hit this error when trying to build and use a conda package internally for torchtitan, so there may be some other setup things happening that I am unaware of.
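For what it's worth, a quick way to confirm whether the NumPy interop actually initialized in a given build is a tiny eager check like the following (a generic sanity-check sketch, not part of the original report):
```python
import torch

t = torch.arange(4)
try:
    # Tensor.numpy() requires the NumPy bindings to have initialized successfully.
    print("NumPy interop OK:", t.numpy())
except RuntimeError as e:
    print("NumPy interop broken:", e)
```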
cc @malfet @seemethere @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,851,749,033
|
[AMD] Compile Failure with triton templates
|
eellison
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"rocm"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
See the PR [here](https://github.com/pytorch/pytorch/pull/146293) with special casing for the AMD triton template.
### Versions
master
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,851,708,369
|
Make torch.cuda.gds APIs public
|
mikaylagawarecki
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: new features"
] | 3
|
CONTRIBUTOR
|
Follow-up to https://github.com/pytorch/pytorch/pull/145748, which turned USE_CUFILE on for CUDA 12.6 and 12.8 binaries.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147120
| true
|
2,851,665,252
|
[Edited] Add docstring to improve documentation
|
MayureshMore
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"module: mkldnn",
"open source",
"release notes: quantization",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo"
] | 3
|
NONE
|
Changes made in branch: **MayureshMore:2.1-dynamic-doc**
[Edited] Add docstring to improve documentation
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mingfeima @XiaobingSuper @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @ezyang @SherlockNoMad @voznesenskym @penguinwu @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @StrongerXi
| true
|
2,851,646,018
|
padding fails on view from large tensor
|
rtyasdf
|
open
|
[
"module: cuda",
"triaged",
"module: 64-bit",
"module: padding",
"module: edge cases"
] | 0
|
NONE
|
### 🐛 Describe the bug
A call to the padding function (`torch.nn.functional.pad`) in `reflect` mode on a view of a tensor whose number of elements exceeds 2^32 may lead to unexpected behavior, which is best illustrated by the following snippet:
```python
import torch
import torch.nn.functional as F
DEVICE = torch.device('cuda:0')
a = torch.rand((256, 256, 256, 256), device=DEVICE)
# (expected) throws "RuntimeError: input tensor must fit into 32-bit index math"
a_pad = F.pad(a, (1, 1, 1, 1), mode='reflect')
# (expected) runs perfectly if we split large tensor along batch dimension
for a_split in torch.split(a, 2, dim=0):
    split_pad = F.pad(a_split, (1, 1, 1, 1), mode='reflect')
# (unexpected) throws "RuntimeError: input tensor must fit into 32-bit index math"
for a_split in torch.split(a, 2, dim=1):
    split_pad = F.pad(a_split, (1, 1, 1, 1), mode='reflect')
```
While the shape of `a_split` in the last example can technically be processed by `F.pad` (it is the same shape as in the "split along batch" example), for some reason the call fails and throws a `RuntimeError`.
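One possible workaround (a sketch I have not verified on this exact setup): materialize each split as a contiguous tensor before padding, so the kernel indexes a standalone buffer rather than a view into the >2^32-element base tensor:
```python
import torch
import torch.nn.functional as F

DEVICE = torch.device('cuda:0')
a = torch.rand((256, 256, 256, 256), device=DEVICE)  # ~17 GB, as in the repro above
for a_split in torch.split(a, 2, dim=1):
    # .contiguous() copies the view into its own small storage, which should
    # satisfy the 32-bit index math requirement.
    split_pad = F.pad(a_split.contiguous(), (1, 1, 1, 1), mode='reflect')
```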
### Versions
PyTorch version: 2.4.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1027-nvidia-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.1.105
cc @ptrblck @msaroufim @eqy
| true
|
2,851,624,535
|
[torch][amdsmi] Look for amdsmi in ROCM_HOME/ROCM_PATH before using rpath
|
danzimm
|
closed
|
[
"module: rocm",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
Summary: ROCm uses ROCM_HOME/ROCM_PATH to specify which version of ROCm the user wants to use. This is especially important in multi-version setups. Let's respect that behavior when loading amdsmi.
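As a rough illustration of the intent (not the actual patch; the `share/amd_smi` layout below is an assumption), the lookup order could look like this:
```python
import os
import sys

# Prefer an amdsmi installation under ROCM_HOME/ROCM_PATH before relying on rpath.
rocm_root = os.environ.get("ROCM_HOME") or os.environ.get("ROCM_PATH")
if rocm_root:
    candidate = os.path.join(rocm_root, "share", "amd_smi")  # assumed layout
    if os.path.isdir(candidate):
        sys.path.insert(0, candidate)
# Otherwise `import amdsmi` falls back to whatever rpath/site-packages resolves.
```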
Test Plan:
CI
```
NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=INIT,COLL MSCCL_ALGO_DIR=~/2fbsource/third-party/rccl/develop/tools/msccl-algorithms RCCL_MSCCLPP_THRESHOLD=(math '128*1024*1024') RCCL_MSCCLPP_ENABLE=1 ENABLE_MSCCLPP=1 buck2 run fbcode//mode/opt-amd-gpu -m rocm621 fbcode//accelerators/workloads/microbench:bench_comm -- --shape moe_17b --comm_algo nccl_allreduce
```
Differential Revision: D69597647
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,851,618,730
|
[DCP] Cache save plans: planner helpers and interface updates
|
saumishr
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: new features"
] | 20
|
CONTRIBUTOR
|
Summary:
This PR updates the planner interface and introduces class variables to cache the local and global plans.
Two new helpers are also introduced; they will be used to check whether the plans have changed across save attempts and to merge the delta plans.
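A minimal sketch of the kind of helper described here (names and logic are illustrative, not the PR's actual implementation):
```python
from typing import Optional

from torch.distributed.checkpoint.planner import SavePlan


def plan_unchanged(cached: Optional[SavePlan], current: SavePlan) -> bool:
    # Treat the plan as unchanged when it writes the same items; in that case a
    # cached (already validated) plan could be reused on the next save attempt
    # instead of re-running global planning.
    return cached is not None and cached.items == current.items
```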
Test Plan: UTs
Differential Revision: D69224488
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,851,576,077
|
Unable to print in a branch run by torch.cond
|
xadupre
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
The code run by torch.cond has more constraints than the rest of the model, so even before exporting the model, it may not work because of logging, printing, etc. The following script returns:
```python
import torch
class SubThen(torch.nn.Module):
    def forward(self, x):
        return x * x
class SubElse(torch.nn.Module):
    def forward(self, x):
        print(x)
        return torch.abs(x)
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub_then = SubThen()
        self.sub_else = SubElse()
    def forward(self, x):
        return torch.cond(x.sum() > 0, self.sub_then, self.sub_else, [x])
model = Model()
model(torch.rand((5, 4)))
```
```
torch._dynamo.exc.UncapturedHigherOrderOpError: Cond doesn't work unless it is captured completely with torch.compile. Scroll up to find out what causes the graph break.
```
It works without ``print``.
Then I did (look for **added line**):
```python
import torch
class SubThen(torch.nn.Module):
    def forward(self, x):
        return x * x
class SubElse(torch.nn.Module):
    def forward(self, x):
        if not torch.compiler.is_compiling():  # added line
            print(x)
        return torch.abs(x)
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub_then = SubThen()
        self.sub_else = SubElse()
    def forward(self, x):
        return torch.cond(x.sum() > 0, self.sub_then, self.sub_else, [x])
model = Model()
model(torch.rand((5, 4)))
```
But that does not print anything. I chose print, but anything considered a mutation should behave the same way. Is this by design? Is there a way to solve this (to print something or to cache something)?
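One workaround sketch (my own suggestion, not an official answer): keep the side effect outside the branches that `torch.cond` traces, e.g. in the parent module's forward, where eager execution still runs it:
```python
import torch

class SubThen(torch.nn.Module):
    def forward(self, x):
        return x * x

class SubElse(torch.nn.Module):
    def forward(self, x):
        return torch.abs(x)

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub_then = SubThen()
        self.sub_else = SubElse()

    def forward(self, x):
        # The side effect lives outside the traced branches: it prints in eager
        # runs and is skipped cleanly during tracing/compilation.
        if not torch.compiler.is_compiling():
            print(x)
        return torch.cond(x.sum() > 0, self.sub_then, self.sub_else, [x])

model = Model()
model(torch.rand((5, 4)))
```
Note this prints unconditionally rather than only on the else branch, so it is only a partial substitute for printing inside a branch.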
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250212+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.5
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250212+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250212+cu126
[pip3] torchvision==0.22.0.dev20250212+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo
| true
|
2,851,541,853
|
[PT][FSDP] support custom all reduce hook across FSDP units
|
xunnanxu
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 29
|
CONTRIBUTOR
|
This change adds an API `set_all_reduce_hook` to the `FSDPModule` to support customized all reduce, either in a native HSDP (2d mesh) setup or in custom HSDP (1d FSDP + custom AR across replicas):
* For native HSDP, the original AR would still run as is and this hook allows for additional gradient modification post all reduce.
* For custom HSDP, the original AR will be skipped and all the logic is instead expected to be executed in the hook.
The custom hook is expected to perform operations in place (no return value).
Example basic usage:
```
model = ...
fully_shard(model, mesh=...)
model.set_all_reduce_hook(my_hook)
```
By default, the hook will run in the default all reduce stream post reduce scatter.
When native HSDP is NOT enabled, the custom hook can be specified to run in a custom stream. This custom stream will also be synchronized post reduce scatter similarly. See tests for examples.
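For illustration, a hook compatible with the usage above might look like the following; the exact signature is inferred from the description (in-place mutation of the reduced gradient tensor, no return value), so treat it as an assumption rather than the documented API:
```python
import torch

def my_hook(reduced_output: torch.Tensor) -> None:
    # Example in-place post-processing of the reduced gradient shard; a custom
    # HSDP setup could instead launch its own cross-replica all-reduce here.
    reduced_output.mul_(0.5)
```
This `my_hook` is what would be passed to `model.set_all_reduce_hook(my_hook)` in the example above.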
Test Plan: CI
Differential Revision: D68255583
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,851,510,392
|
Add quantized BatchNorm1d module
|
mattpitkin
|
open
|
[
"triaged",
"open source",
"release notes: quantization"
] | 4
|
CONTRIBUTOR
|
Fixes #147112.
| true
|
2,851,509,393
|
Add quantized version of BatchNorm1d module
|
mattpitkin
|
open
|
[
"oncall: quantization"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Currently, there are quantized versions of the `BatchNorm2d` and `BatchNorm3d` modules, but not for `BatchNorm1d`. This is despite there being a quantized op for `batch_norm1d`. It would be useful to have the quantized `BatchNorm1d` included.
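The gap can be seen directly from the module namespace (a small check sketch; attribute availability depends on the installed PyTorch version):
```python
import torch.ao.nn.quantized as nnq

print(hasattr(nnq, "BatchNorm2d"))  # expected True
print(hasattr(nnq, "BatchNorm3d"))  # expected True
print(hasattr(nnq, "BatchNorm1d"))  # False at the time of this request
```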
### Alternatives
_No response_
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,851,508,759
|
[dsutil] shape-env logging
|
bobrenjc93
|
closed
|
[
"fb-exported",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D69355332
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,851,435,783
|
s390x: add cleanup for cancelled docker image builds
|
AlekseiNikiforovIBM
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
When a podman image build is cancelled, a couple of processes are left behind, and their existence prevents proper shutdown of the runner container.
Add a cleanup step at the end of the workflow using a new option recently introduced in podman:
https://github.com/containers/podman/pull/25102
Example of job preventing s390x worker cleaning up and restarting properly:
https://github.com/pytorch/pytorch/actions/runs/13289159296/job/37105230728
| true
|
2,851,338,544
|
torch.nan_to_num does not support complex64 data type under torch.compile
|
meetmul
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
### 🐛 Describe the bug
When receiving a complex64 tensor, `torch.nan_to_num` works normally under eager mode; however, it raises a not-supported error under torch.compile.
code:
```python
import torch
input = torch.randn(1,1).to(torch.complex64)
try:
    res = torch.nan_to_num(input)
    print("Successfully run torch.nan_to_num under eager.")
except Exception as e:
    print(e)
try:
    res = torch.compile(torch.nan_to_num)(input)
    print("Successfully run torch.nan_to_num under torch.compile.")
except Exception as e:
    print(f"Failed to run torch.nan_to_num under torch.compile: {e}")
```
Actual output:
```
Successfully run torch.nan_to_num under eager.
Failed to run torch.nan_to_num under torch.compile: backend='inductor' raised:
RuntimeError: Complex dtype is not supported for isneginf, got dtype torch.complex64
While executing %nan_to_num : [num_users=1] = call_function[target=torch.nan_to_num](args = (%l_args_0_,), kwargs = {})
Original traceback:
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/external_utils.py", line 48, in inner
return fn(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
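A possible workaround sketch (not verified against this exact version): run `nan_to_num` on the real/imaginary view and reassemble, which avoids the complex `isneginf` path:
```python
import torch

x = torch.randn(1, 1).to(torch.complex64)

def nan_to_num_complex(t):
    # Operate on the (..., 2) float view of the complex tensor, then view back.
    return torch.view_as_complex(torch.nan_to_num(torch.view_as_real(t)))

print(torch.compile(nan_to_num_complex)(x))
```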
### Error logs
```
Successfully run torch.nan_to_num under eager.
Failed to run torch.nan_to_num under torch.compile: backend='inductor' raised:
RuntimeError: Complex dtype is not supported for isneginf, got dtype torch.complex64
While executing %nan_to_num : [num_users=1] = call_function[target=torch.nan_to_num](args = (%l_args_0_,), kwargs = {})
Original traceback:
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/external_utils.py", line 48, in inner
return fn(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,851,298,314
|
Inconsistent data type casting decision when using `torch.addmv` under torch.compile and eager
|
meetmul
|
open
|
[
"triaged",
"bug",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
### 🐛 Describe the bug
I think this is caused by inconsistent type casting between torch.compile and eager. When `input` is float but `mat` and `vec` are integer, the **output under eager mode is integer but the output under torch.compile is float**. This inconsistent type casting leads to inconsistent results in some cases.
Here is the code I used to find this issue:
```python
import torch
from torch import nn
class AddMVModel(nn.Module):
    def __init__(self):
        super(AddMVModel, self).__init__()
    def forward(self, input, mat, vec):
        return torch.addmv(input, mat, vec)
model = AddMVModel()
optimized_model = torch.compile(model)
input = torch.tensor([2.7], dtype=torch.float32)
mat = torch.tensor([[1, 2], [3, 4]], dtype=torch.int32)
vec = torch.tensor([1, 2], dtype=torch.int32)
out1 = model(input,mat,vec)
out2 = optimized_model(input,mat,vec)
print("eager: ", out1)
print("after torch.compile: ", out2)
```
Actual output:
```
eager: tensor([ 7, 13], dtype=torch.int32)
after torch.compile: tensor([ 7.7000, 13.7000])
```
Expected output: ideally, the eager and torch.compile results should be consistent.
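Until the promotion behavior is reconciled, one way to get consistent results (a workaround sketch) is to cast explicitly so no implicit int/float promotion is involved:
```python
import torch

inp = torch.tensor([2.7], dtype=torch.float32)
mat = torch.tensor([[1, 2], [3, 4]], dtype=torch.int32)
vec = torch.tensor([1, 2], dtype=torch.int32)

# With an explicit cast, both paths should return the same float32 result.
print(torch.addmv(inp, mat.float(), vec.float()))
print(torch.compile(torch.addmv)(inp, mat.float(), vec.float()))
```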
### Error logs
```
eager: tensor([ 7, 13], dtype=torch.int32)
after torch.compile: tensor([ 7.7000, 13.7000])
```
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,851,261,401
|
Use 2022 as default VC_YEAR for windows tests
|
atalman
|
open
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Same as: https://github.com/pytorch/pytorch/pull/147053
The new Windows AMI does not have Visual Studio 2019, hence use 2022 as the default.
See: pytorch/test-infra#6226
| true
|
2,851,087,439
|
[inductor][refactor] Make _compile_file only used for fbcode
|
desertfire
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: _compile_file in codecache.py only handles specific cpp compilation in fbcode. The next step is to consolidate it with cpp_builder.
Test Plan: CI
Differential Revision: D69592025
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,851,077,715
|
[AOTI] Update test runner to use the new APIs
|
desertfire
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147105
Summary: Switch to the newer aoti_compile_and_package APIs. Some tests still keep using legacy APIs; this will be followed up with internal test refactoring.
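For reference, a minimal usage sketch of the newer API path mentioned here (assuming the `torch._inductor.aoti_compile_and_package` / `aoti_load_package` entry points):
```python
import torch
import torch._inductor

class M(torch.nn.Module):
    def forward(self, x):
        return x.relu() + 1

ep = torch.export.export(M(), (torch.randn(4),))
pkg_path = torch._inductor.aoti_compile_and_package(ep)  # builds a package on disk
compiled = torch._inductor.aoti_load_package(pkg_path)   # loads a callable runner
print(compiled(torch.randn(4)))
```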
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D69609685](https://our.internmc.facebook.com/intern/diff/D69609685)
| true
|
2,851,016,823
|
[ARM] Unit test TestSelectAlgorithmCPU.test_linear_with_embedding fails on non-bf16 Aarch64
|
robert-hardwick
|
open
|
[
"module: tests",
"triaged",
"module: arm"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/actions/runs/13290922608/job/37112338971
```
=================================== FAILURES ===================================
_ TestSelectAlgorithmCPU.test_linear_with_embedding_batch_size_384_in_features_196_out_features_384_bias_False_cpu_bfloat16 _
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_cpu_select_algorithm.py", line 951, in test_linear_with_embedding
self.common(mod, (idx, x), atol=atol, rtol=rtol)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 472, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1405, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1060, in codegen_and_compile
graph.run(*example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1440, in run_node
result = super().run_node(n)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 236, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1143, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1133, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 462, in wrapped
out = decomp_fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/kernel/mm.py", line 684, in tuned_addmm
return autotune_select_algorithm(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 2284, in autotune_select_algorithm
return _ALGORITHM_SELECTOR_CACHE(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 1925, in __call__
timings = do_autotuning(precompile_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 1854, in do_autotuning
timings = self.lookup(
File "/var/lib/jenkins/workspace/test/inductor/test_cpu_select_algorithm.py", line 56, in skip_cache
timings = benchmark(choices)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 1835, in autotune
return make_benchmark_fn()(choices)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 2019, in benchmark_in_current_process
inputs = get_inputs(choices)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 1982, in get_inputs
choices[0].benchmark(*example_inputs_extern, out=out_extern)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 1410, in benchmark
return super().benchmark(*args, out=out)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 4398, in benchmark
return benchmarker.benchmark(algo, args, {"out": out})
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/benchmarking.py", line 39, in wrapper
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/benchmarking.py", line 91, in benchmark
return self.benchmark_cpu(_callable, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/benchmarking.py", line 39, in wrapper
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/benchmarking.py", line 129, in benchmark_cpu
run_for(warmup)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/benchmarking.py", line 122, in run_for
_callable()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/benchmarking.py", line 89, in <lambda>
_callable = lambda: fn(*fn_args, **fn_kwargs) # noqa: E731
torch._inductor.exc.InductorError: LoweringException: RuntimeError: self and mat2 must have the same dtype, but got Float and BFloat16
```
```
To execute this test, run the following from the base repo dir:
python test/inductor/test_cpu_select_algorithm.py TestSelectAlgorithmCPU.test_linear_with_embedding_batch_size_384_in_features_196_out_features_384_bias_False_cpu_bfloat16
```
We see this test failure on Aarch64 hardware without BF16 support.
https://github.com/pytorch/pytorch/blob/e21181642f6e4d2522c6912a7dee676c21f07428/test/inductor/test_cpu_select_algorithm.py#L941
```
class M(torch.nn.Module):
    def __init__(self, bias):
        super().__init__()
        self.linear = torch.nn.Linear(in_features, out_features, bias).to(
            dtype=dtype
        )
        self.emb = torch.nn.Embedding(64, out_features)
    def forward(self, idx, x):
        return self.emb(idx) + self.linear(x)
```
It seems that on Aarch64 without BF16 hardware support, `self.emb(idx)` is float32 while `self.linear(x)` is bfloat16, so the resulting tensor is float32, which causes the test to fail because the inductor path outputs a bfloat16 tensor.
I attempted to fix this by setting the dtype of self.emb, e.g.
`self.emb = torch.nn.Embedding(64, out_features).to(dtype=dtype)`; however, I ran into this assertion error:
`self.assertEqual(counters["inductor"]["cpp_epilogue_fusion_counter"], 1)`
This makes me think the eager path is working correctly and there is an issue in inductor. I will disable this test on Aarch64 without BF16 hw support.
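For context, the eager promotion can be reproduced standalone (an illustrative sketch, independent of inductor):
```python
import torch

emb = torch.nn.Embedding(64, 384)                      # float32 weights
linear = torch.nn.Linear(196, 384).to(torch.bfloat16)  # bfloat16 weights
idx = torch.randint(0, 64, (384,))
x = torch.randn(384, 196, dtype=torch.bfloat16)

out = emb(idx) + linear(x)
print(out.dtype)  # torch.float32: float32 + bfloat16 promotes to float32
```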
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model name: Neoverse-N1
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r3p1
BogoMIPS: 243.75
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+gitf94426b
[conda] No relevant packages
cc @mruberry @ZainRizvi @malfet @snadampal @milpuz01
| true
|
2,850,953,651
|
DISABLED test_output_match_linalg_cholesky_cpu_float32 (__main__.TestConsistencyCPU)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: macos",
"skipped"
] | 1
|
NONE
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_output_match_linalg_cholesky_cpu_float32&suite=TestConsistencyCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37150595954).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_output_match_linalg_cholesky_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13302190425/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_mps.py", line 12625, in test_output_match
mps_out = op(*mps_args, **mps_kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13302190425/lib/python3.9/site-packages/torch/testing/_internal/opinfo/core.py", line 1188, in __call__
return self.op(*args, **kwargs)
torch._C._LinAlgError: linalg.cholesky: (Batch element 0): The factorization could not be completed because the input is not positive-definite (the leading minor of order 1072234504 is not positive-definite).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13302190425/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 3161, in wrapper
method(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13302190425/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13302190425/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13302190425/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 1626, in wrapper
fn(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13302190425/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 11: SampleInput(input=Tensor[size=(2, 0, 0), device="cpu", dtype=torch.float32], args=(), kwargs={'upper': 'False'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=11 python test/test_mps.py TestConsistencyCPU.test_output_match_linalg_cholesky_cpu_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_mps.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_mps.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr @malfet @albanD
| true
|
2,850,949,299
|
Fix init CUDA preload: get correct versions (#147001)
|
aowenson-imm
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Fixes #147001
The main change is in the `cuda_libs` dict. For each lib, two patterns are specified:
1) a version-specific pattern, e.g. `libcudart.so.12*`
2) the original, less specific pattern as a backup
A supporting change in `_preload_cuda_deps` sorts multiple matches by version to prefer the newer lib (see the sketch below).
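An illustrative sketch of that approach (the helper name below is hypothetical, not the PR's code):
```python
import ctypes
import glob
import os

def preload_first_match(lib_dir: str, patterns=("libcudart.so.12*", "libcudart.so*")):
    # Try the version-specific pattern first, then the generic one as a backup;
    # among multiple matches, prefer the lexicographically newest name as a
    # stand-in for proper version sorting.
    for pattern in patterns:
        matches = sorted(glob.glob(os.path.join(lib_dir, pattern)), reverse=True)
        if matches:
            ctypes.CDLL(matches[0])
            return matches[0]
    return None
```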
| true
|
2,850,898,326
|
Optimize `_inductor/debug.py` *args : Any with typing_extensions.TypeVarTuple
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
Fixes part of #146249
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,850,835,369
|
[inductor] SIGSEGV due to missing negative stride check in `torch.as_strided`
|
WLFJ
|
open
|
[
"module: crash",
"triaged",
"bug",
"oncall: pt2",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
When running the following test case with `torch.compile`, a segmentation fault (SIGSEGV) occurs. Without `torch.compile`, the expected `RuntimeError` is raised instead.
# Test Case:
```python
import torch
@torch.compile
def f(*args):
    sym_0, sym_1, sym_2, sym_3 = args
    var_374 = torch.tril_indices(row=sym_0, col=sym_1, offset=0)
    var_483 = torch.as_strided(var_374, size=sym_2, stride=sym_3, storage_offset=None)
    return var_483 + 1.
res = f(751, 0, (1,), (-1,))
print(res)
```
# Observed Behavior:
With `torch.compile`, running the above code results in a segmentation fault (SIGSEGV). However, when running the function without `torch.compile`, the following error is correctly raised:
```
Traceback (most recent call last):
File "test.py", line 10, in <module>
res = f(751, 0, (1,), (-1,))
^^^^^^^^^^^^^^^^^^^^^^
File "test.py", line 7, in f
var_483 = torch.as_strided(var_374, size=sym_2, stride=sym_3, storage_offset=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: as_strided: Negative strides are not supported at the moment, got strides: [-1]
```
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,850,818,722
|
Optimize `graph.py` typing
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 12
|
CONTRIBUTOR
|
Optimize `graph.py` methods type annotation.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,850,802,960
|
The unit of the return value of torch.cuda.clock_rate
|
cdzhan
|
closed
|
[
"module: docs",
"module: cuda",
"triaged"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
According to the documentation screenshot attached in the original issue (image not preserved here), the unit of the return value should be MHz.
```bash
root@cambricon-PowerEdge-C4140:/workspace# python -c "import torch;print(torch.cuda.clock_rate())"
1312
root@cambricon-PowerEdge-C4140:/workspace# nvidia-smi --query-gpu=clocks.sm --format=csv
clocks.current.sm [MHz]
1312 MHz
1312 MHz
135 MHz
1312 MHz
```
### Versions
main
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @ptrblck @msaroufim @eqy
| true
|
2,850,797,173
|
Remove code for Python < 3.9
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,850,583,714
|
[torch.export] Exporting PaliGemma2 model fails due to data-dependent guarding issue
|
chohk88
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 9
|
NONE
|
### 🐛 Describe the bug
Attempting to export the `google/paligemma2-3b-pt-224` model using `torch.export` fails due to a data-dependent guard. The error originates from https://github.com/huggingface/transformers/blob/298b3f19303294293f7af075609481d64cb13de3/src/transformers/models/paligemma/modeling_paligemma.py#L508.
Even when bypassing the error at `modeling_paligemma.py:508`, a similar issue arises at another location (https://github.com/huggingface/transformers/blob/298b3f19303294293f7af075609481d64cb13de3/src/transformers/cache_utils.py#L1657), further indicating an underlying problem with the handling of dynamic symbolic shapes.
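A common way to unblock this class of `GuardOnDataDependentSymNode` errors is to assert the data-dependent relationship with `torch._check` before it feeds a Python branch; a sketch of the idea (applied to a stand-in for the transformers code, not a verified fix for this model):
```python
import torch

def check_image_tokens(inputs_embeds, special_image_mask, image_features):
    selected = inputs_embeds[special_image_mask]
    # Tell the tracer that the two element counts match, so export does not
    # need to guard on the unbacked symbol u0 at this `if`.
    torch._check(selected.numel() == image_features.numel())
    return selected
```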
**Error Message:**
```
W0213 09:47:01.171000 226363 site-packages/torch/fx/experimental/symbolic_shapes.py:6578] failed during evaluate_expr(Ne(u0, 589824), hint=None, size_oblivious=False, forcing_spec=False
E0213 09:47:01.172000 226363 site-packages/torch/fx/experimental/recording.py:299] failed while running evaluate_expr(*(Ne(u0, 589824), None), **{'fx_node': False})
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.16/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/root/.pyenv/versions/3.10.16/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/root/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 71, in <module>
cli.main()
File "/root/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 501, in main
run()
File "/root/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 351, in run_file
runpy.run_path(target, run_name="__main__")
File "/root/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 310, in run_path
return _run_module_code(code, init_globals, run_name, pkg_name=pkg_name, script_name=fname)
File "/root/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 127, in _run_module_code
_run_code(code, mod_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
File "/root/.vscode-server/extensions/ms-python.debugpy-2025.0.1-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 118, in _run_code
exec(code, run_globals)
File "/opt/torch_tensorrt/examples/dynamo/torch_export_paligemm2.py", line 61, in <module>
exported_program = _export(
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/export/_trace.py", line 1046, in wrapper
raise e
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/export/_trace.py", line 1019, in wrapper
ep = fn(*args, **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/export/_trace.py", line 2101, in _export
export_artifact = export_func( # type: ignore[operator]
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/export/_trace.py", line 1880, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/export/_trace.py", line 769, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/export/_trace.py", line 1810, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1345, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1584, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 671, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
flat_f_outs = f(*flat_f_args)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 879, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/export/_trace.py", line 1794, in forward
tree_out = mod(*args, **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/models/paligemma/modeling_paligemma.py", line 508, in forward
if inputs_embeds[special_image_mask].numel() != image_features.numel():
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/__init__.py", line 736, in __bool__
return self.node.bool_()
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 581, in bool_
return self.guard_bool("", 0)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 519, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6569, in evaluate_expr
return self._evaluate_expr(
File "/root/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6786, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Ne(u0, 589824) (unhinted: Ne(u0, 589824)). (Size-like symbols: u0)
Caused by: (transformers/models/paligemma/modeling_paligemma.py:508 in forward)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
```
**To Reproduce**
Run the following script to attempt exporting the model:
```python
import torch
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration
from transformers.image_utils import load_image
from torch.export._trace import _export
# 1. Model setup
DEVICE = torch.device("cuda:0")
model_id = "google/paligemma2-3b-pt-224"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = load_image(url)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id, torch_dtype=torch.float16
).eval().to(DEVICE)
processor = PaliGemmaProcessor.from_pretrained(model_id)
prompt = ""
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(DEVICE)
input_len = model_inputs["input_ids"].shape[-1]
# 2. PyTorch
with torch.inference_mode():
pyt_generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
pyt_generation = pyt_generation[0][input_len:]
pyt_decoded = processor.decode(pyt_generation, skip_special_tokens=True)
print("=============================")
print("PyTorch generated text:")
print(pyt_decoded)
print("=============================")
# (a) Dummy inputs
batch_size = 1
dummy_input_ids = model_inputs["input_ids"]
dummy_attention_mask = model_inputs["attention_mask"]
dummy_pixel_values = model_inputs["pixel_values"]
dummy_inputs = {
"input_ids": dummy_input_ids,
"attention_mask": dummy_attention_mask,
"pixel_values": dummy_pixel_values,
}
# (b) Dynamic shape
BATCH = torch.export.Dim("batch", min=1, max=2)
SEQ_LEN = torch.export.Dim("seq_len", min=1, max=1024)
dynamic_shapes = {
"input_ids": {0: BATCH, 1: SEQ_LEN},
"attention_mask": {0: BATCH, 1: SEQ_LEN},
"pixel_values": {0: BATCH},
}
# (c) ExportedProgram
# torch.export.export(
# model,
# args=(),
# kwargs=dummy_inputs,
# dynamic_shapes=dynamic_shapes,
# strict=False,
# )
exported_program = _export(
model,
args=(),
kwargs=dummy_inputs,
dynamic_shapes=dynamic_shapes,
strict=False,
allow_complex_guards_as_runtime_asserts=True,
)
```
**Additional context**
- The issue occurs at [transformers/models/paligemma/modeling_paligemma.py#L508](https://github.com/huggingface/transformers/blob/298b3f19303294293f7af075609481d64cb13de3/src/transformers/models/paligemma/modeling_paligemma.py#L508)
- Even after bypassing this line, another similar error arises, indicating a deeper issue.
Would appreciate any insights on how to handle data-dependent guards in `torch.export`. Thanks!
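For reference, one pattern that sometimes resolves this class of `GuardOnDataDependentSymNode` error is to replace the data-dependent branch with a `torch._check` assertion, so the exporter records a runtime assert instead of trying to decide the comparison symbolically. The sketch below is hypothetical: the function and argument names are placeholders rather than the actual transformers code, and dropping the `if` branch is only valid if the model never needs the mismatch path.
```python
import torch

# Minimal, hypothetical sketch: assert the data-dependent size instead of
# branching on it, so export does not need to guard on Ne(u0, ...).
def gather_image_embeds(inputs_embeds, special_image_mask, image_features):
    selected = inputs_embeds[special_image_mask]
    # Records a runtime assert and refines the unbacked symbol instead of guarding
    torch._check(selected.numel() == image_features.numel())
    return selected
```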
### Versions
PyTorch version: 2.7.0.dev20250212+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.16 (main, Feb 12 2025, 15:05:21) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250212+cu124
[pip3] torch-tensorrt==2.7.0.dev0+2368e63ef
[pip3] torchvision==0.22.0.dev20250212+cu124
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,850,565,577
|
Fix the Problems with Defining Static Variables in Inline Functions
|
FFFrog
|
open
|
[
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cpp",
"ci-no-td"
] | 34
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147095
Refer to https://github.com/pytorch/pytorch/issues/125465 for more information
- Remove unused header files
- Move the inline function that defines the static variable to .cc
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,850,536,732
|
[torch.export] torch._dynamo.exc.Unsupported: dynamic shape operator: aten.bincount.default
|
riestmo-nxp
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 0
|
NONE
|
### 🐛 Describe the bug
When trying to export a model that uses the torch.bincount operation, I get the following error:
```
torch._dynamo.exc.Unsupported: dynamic shape operator: aten.bincount.default; Operator does not have a meta kernel that supports dynamic output shapes, please report an issue to PyTorch
```
This code snippet can be used to reproduce the error:
```python
import torch
input = torch.randint(0, 8, (5,), dtype=torch.int64)
class BincountDummyModel(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
weights = torch.linspace(0, 1, steps=5)
bc = x.bincount(weights)
return bc
device = "cpu"
model = BincountDummyModel().to(device)
exported_model = torch.export.export(model, (input,))
```
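The failure comes from `aten.bincount` lacking a meta kernel that supports its data-dependent output shape. As a minimal, hedged workaround sketch (assuming a static upper bound `NUM_BINS` on the values is known ahead of time, so the output shape is no longer data-dependent), the weighted bincount can be expressed with `scatter_add`, which exports without the dynamic-shape operator; `NUM_BINS` and `StaticBincount` are hypothetical names used only for illustration:
```python
import torch

NUM_BINS = 8  # assumed static upper bound on the values in x

class StaticBincount(torch.nn.Module):
    def forward(self, x):
        weights = torch.linspace(0, 1, steps=5)
        out = torch.zeros(NUM_BINS, dtype=weights.dtype)
        # out[x[i]] += weights[i] reproduces bincount(weights) with a fixed bin count
        return out.scatter_add(0, x, weights)

example = torch.randint(0, NUM_BINS, (5,), dtype=torch.int64)
exported = torch.export.export(StaticBincount(), (example,))
```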
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-3335 CPU @ 3.40GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 6
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 6800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 20 MiB (16 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,850,493,389
|
DISABLED test_view_dtype_upsize_errors_xla_uint8 (__main__.TestViewOpsXLA)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_dtype_upsize_errors_xla_uint8%22%2C%22TestViewOpsXLA%22%5D)).
| true
|
2,850,493,238
|
DISABLED test_view_dtype_upsize_errors_xla_uint8 (__main__.TestViewOpsXLA)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_dtype_upsize_errors_xla_uint8%22%2C%22TestViewOpsXLA%22%5D)).
| true
|
2,850,491,826
|
DISABLED test_view_dtype_upsize_errors_xla_uint8 (__main__.TestViewOpsXLA)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_dtype_upsize_errors_xla_uint8%22%2C%22TestViewOpsXLA%22%5D)).
| true
|
2,850,491,031
|
DISABLED test_view_dtype_upsize_errors_xla_uint8 (__main__.TestViewOpsXLA)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_dtype_upsize_errors_xla_uint8%22%2C%22TestViewOpsXLA%22%5D)).
| true
|
2,850,487,623
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,487,384
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,487,151
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,486,877
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,486,677
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,486,416
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,486,174
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,485,537
|
OpenReg: Run test_openreg in CI
|
Zhenbin-8
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
The current CI skips the test code under test/cpp_extensions, so I moved `test_openreg.py` to the test directory so that CI runs it.
| true
|
2,850,484,371
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,484,052
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,483,328
|
DISABLED test_conj_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_conj_imag_view_lazy_complex128%22%5D)).
| true
|
2,850,468,304
|
[torch][cuda] Remove redundant getting of pynvml handler
|
cdzhan
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,850,407,457
|
[inductor] SIGSEGV when using `torch.compile` with `torch.as_strided_copy`
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"oncall: pt2",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
When running the following test case with `torch.compile`, a segmentation fault (SIGSEGV) occurs. Without `torch.compile`, the expected `RuntimeError` is raised instead.
# Test case
```python
import torch
@torch.compile
def f(*args):
input, sym_1, sym_2 = args
return torch.as_strided_copy(input, size=sym_1, stride=sym_2, storage_offset=None)
res = f(torch.tensor([]), (1,), (0,),)
print(res)
```
# Observed Behavior:
With `torch.compile`, running the above code results in a segmentation fault (SIGSEGV). However, when running the function without `torch.compile`, the following error is correctly raised:
```
Traceback (most recent call last):
File "test.py", line 7, in <module>
res = f(torch.tensor([]), (1,), (0,),)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "test.py", line 5, in f
return torch.as_strided_copy(input, size=sym_1, stride=sym_2, storage_offset=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: setStorage: sizes [1], strides [0], storage offset 0, and itemsize 4 requiring a storage size of 4 are out of bounds for storage of size 0
```
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @malfet @chauhang @penguinwu
| true
|
2,850,407,244
|
How to check grads in each step of model?
|
ElinLiu0
|
closed
|
[
"module: onnx",
"triaged"
] | 7
|
NONE
|
Hi there:
I've implemented a PyTorch version of [Retrieval-based-Voice-Conversion (RVC for short)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI) [here](https://github.com/ElinLiu0/RVCTorch/blob/master/POC_Torch.ipynb).
The question is: when I try to export my implementation pipeline to ONNX using the code below:
```python
with torch.inference_mode(), torch.cuda.amp.autocast(enabled=False):
torch.onnx.export(
pipeline,
(audio.cuda(),),
"pipeline.onnx",
input_names=["input"],
output_names=["output"],
opset_version=14
)
```
It raises the following error:
```python
RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
Tensor:
0.6670
[ torch.cuda.HalfTensor{1} ]
```
The error is typically raised from an `nn.BatchNorm2d` cell called in [rmvpe.py](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/rmvpe.py) at line 244.
So how could I fix this error, given that this implementation will ultimately be deployed from C# or a model-serving platform such as NVIDIA Triton?
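For what it's worth, below is a minimal, untested sketch of one common workaround: detach every parameter before tracing so that no tensor requiring grad gets baked into the graph as a constant. It reuses the `pipeline` and `audio` objects from the snippet above and assumes the offending `0.6670` tensor is a module parameter; whether this is sufficient for the RVC pipeline would need to be verified.
```python
import torch

pipeline.eval()
for p in pipeline.parameters():
    p.requires_grad_(False)  # avoid "requires grad as a constant" during tracing

with torch.no_grad():
    torch.onnx.export(
        pipeline,
        (audio.cuda(),),
        "pipeline.onnx",
        input_names=["input"],
        output_names=["output"],
        opset_version=14,
    )
```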
| true
|
2,850,364,581
|
[inductor] Performance Degradation and Hang in `torch.diff`
|
WLFJ
|
open
|
[
"module: performance",
"triaged",
"oncall: pt2",
"module: inductor",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
I encountered a significant performance issue when using `torch.diff` within a `torch.compile` function. The issue occurs when increasing the `n` parameter of `torch.diff`, leading to extreme slowdowns.
test case:
```python
import torch
@torch.compile
def f(*args):
sym_0, sym_1, sym_2 = args
var_540 = torch.ones(size=sym_0)
return torch.diff(var_540, n=sym_1, dim=sym_2, prepend=None, append=None)
res = f((3505,), 30, 0)
print(res)
```
# Observed Behavior
* When `sym_1 = 10`, execution completes in **~4 seconds**:
```
tensor([0., 0., 0., ..., 0., 0., 0.])
________________________________________________________
Executed in 4.02 secs fish external
usr time 9.42 secs 822.00 micros 9.42 secs
sys time 0.75 secs 210.00 micros 0.75 secs
```
* When `sym_1 = 20`, execution completes in **~4 seconds**:
```
tensor([0., 0., 0., ..., 0., 0., 0.])
________________________________________________________
Executed in 3.99 secs fish external
usr time 9.27 secs 511.00 micros 9.27 secs
sys time 0.76 secs 0.00 micros 0.76 secs
```
* When `sym_1 = 30`, compilation and execution fail to complete even after **500 seconds**:
```
KeyboardInterrupt
________________________________________________________
Executed in 511.90 secs fish external
usr time 514.55 secs 0.00 millis 514.55 secs
sys time 1.75 secs 1.07 millis 1.75 secs
```
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,850,307,769
|
[DONT MERGE][XPU] Add arl-h AOT target for windows cd
|
chuanqi129
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 9
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,850,284,758
|
[DO NOT MERGE] Update oneDNN to the latest main branch
|
jiayisunx
|
open
|
[
"module: mkldnn",
"open source",
"topic: not user facing",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147855
* #147360
* __->__ #147073
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,850,252,220
|
[Inductor] Set prop_kind to forward_inference when grad is not needed for mkldnn_linear_pointwise and mkldnn_convolution_pointwise
|
jiayisunx
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147855
* #147360
* #147359
* #147073
* __->__ #147072
Summary:
The `prop_kind` of `mkldnn._linear_pointwise`, `mkldnn._linear_pointwise.binary`, `mkldnn._convolution_pointwise.binary` and `mkldnn._convolution_pointwise_.binary` is always `dnnl_forward`, i.e., `dnnl_forward_training`, regardless of whether `grad` is needed. Setting `prop_kind` to `dnnl_forward_inference` for these ops when `grad` is not needed can give better performance.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,850,232,112
|
[Inductor][CPU] SIGSEGV in `torch.slice_copy` with large step value
|
WLFJ
|
closed
|
[
"high priority",
"module: crash",
"bug",
"oncall: pt2",
"oncall: cpu inductor",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
The following test case causes a SIGSEGV (Segmentation Fault) when run with `torch.compile`:
```python
import torch
@torch.compile
def f(input):
var_17 = torch.slice_copy(input, dim=0, start=449, end=None, step=9223372036854775807)
return torch.reciprocal(var_17)
input = torch.randn((875,))
res = f(input)
print(res)
```
This leads to:
```
fish: Job 1, 'python3 test.py' terminated by signal SIGSEGV (Segmentation fault)
```
However, when running the same function without `@torch.compile`, the expected output is:
```
tensor([])
```
Additionally, when executed on `torch.compile` + CUDA, the issue does not occur.
The combination of `torch.slice_copy` with an extremely large step value (`9223372036854775807`, i.e. `INT64_MAX`) might be causing incorrect memory access in compiled mode, leading to a segmentation fault. Since eager mode handles this case correctly by returning an empty tensor (`tensor([])`), `torch.compile` may be missing a necessary bounds check or may be dereferencing an invalid pointer.
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,850,183,262
|
[inductor][cpu] SIGILL with `torch.randint`
|
WLFJ
|
closed
|
[
"module: crash",
"bug",
"oncall: pt2",
"oncall: cpu inductor",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
When running the following test case with `torch.compile`, a SIGILL (Illegal Instruction) error occurs:
```python
import torch
@torch.compile
def f(*args):
sym_0, sym_1 = args
return torch.randint(high=sym_0, size=sym_1)
res = f(0, (3960,))
```
This leads to:
```
fish: Job 2, 'python3 test.py' terminated by signal SIGILL (Illegal instruction)
```
However, when running the same function without `@torch.compile`, the expected error is raised instead:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
res = f(0, (3960,))
^^^^^^^^^^^^^
File "test.py", line 6, in f
return torch.randint(high=sym_0, size=sym_1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: random_ expects 'from' to be less than 'to', but got from=0 >= to=0
```
It seems that `torch.compile` with CPU does not properly validate the range constraints of `torch.randint`, leading to undefined behavior that results in a crash instead of a controlled error message.
CUDA works fine with `torch.compile`.
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @chauhang @penguinwu
| true
|
2,850,141,267
|
[Inductor][CPP] Avoid transpose with cpp micro-gemm for FlexAttention
|
CaoE
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147069
* #147068
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,850,141,006
|
[Inductor][CPP] Add transposed B matrix support for CppMicroGemmFP32Vec
|
CaoE
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147069
* __->__ #147068
* Add transposed B support for CppMicroGemmFP32Vec.
* Add support for cases where N is not divisible by `block_n`.
Expand CppMicroGemmFP32Vec to generate a GEMM kernel that supports a transposed B matrix and an N of arbitrary size.
This is the basis for https://github.com/pytorch/pytorch/pull/147069 to get better performance.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,850,140,775
|
Separate transpose from memory load/store and add load size support for convert_to_int32
|
CaoE
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147069
* #147068
* __->__ #147067
Separate transpose from memory load/store and add load size support for convert_to_int32 to facilitate the expansion for CppMicroGemmFP32Vec.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,850,136,211
|
OpenReg: Fix releasing tensor issue when using pin_memory
|
Zhenbin-8
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
# Fail when exiting process
When executing the following code:
```
import pytorch_openreg
import torch
if __name__ == "__main__":
a = torch.tensor(1).pin_memory()
```
The process will exit with an error. This is the same issue as https://github.com/pytorch/pytorch/pull/140936
# Fail when exiting Python generator
When executing the following code, an error occurs on Python 3.9:
```
import pytorch_openreg
import torch
def generator():
t = torch.tensor(1).pin_memory()
yield
if __name__ == "__main__":
iter = generator()
next(iter)
try:
next(iter) # Error happens here on python 3.9
except StopIteration:
print("success") # python 3.10+
```
This is the same issue as https://github.com/pytorch/pytorch/pull/141815#issuecomment-2547303381
cc @albanD
| true
|
2,850,132,294
|
Issue with FBGEMM Operators in Exported PyTorch AOT Model When Running in C++: Could not find schema for fbgemm:xxx
|
siluzhou-pku
|
closed
|
[
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 1
|
NONE
|
### 🐛 Describe the bug
**Description**
I am encountering an issue when exporting a PyTorch model that uses `torch.ops.fbgemm.asynchronous_complete_cumsum` and running it in C++. The model works correctly in Python after adding `import fbgemm_gpu`, but fails when running in a C++ environment.
---
**Steps to Reproduce**
1. **Model Definition**
In my PyTorch model, I use the `torch.ops.fbgemm.asynchronous_complete_cumsum` function as follows:
```python
class gul_grs_user_model(torch.nn.Module):
def forward(self, xxx):
# ...
x_offsets = torch.ops.fbgemm.asynchronous_complete_cumsum(past_lengths)
# ... other fbgemm ops ...
return output
```
2. **Exporting the FX Graph**
I export the model using `torch.export.export` and then perform symbolic tracing:
```python
exported_program_model: torch.export.ExportedProgram = torch.export.export(
warp_model, args=(), kwargs=self.inputs_dict
)
graph_module: torch.fx.GraphModule = torch.fx.symbolic_trace(exported_program_model.module())
graph_module.to_folder(os.path.join(self.fx_folder, self.model_name), self.model_name)
```
3. **Generated `module.py`**
The exported `module.py` contains code similar to:
```python
import torch
from math import inf
from math import nan
NoneType = type(None)
import torch
from torch import device
import torch.fx._pytree as fx_pytree
import torch.utils._pytree as pytree
from torch.nn import *
class gul_grs_user_model(torch.nn.Module):
def forward(self, xxx):
# ...
asynchronous_complete_cumsum_default = torch.ops.fbgemm.asynchronous_complete_cumsum.default(view_default)
dense_to_jagged_forward_default = torch.ops.fbgemm.dense_to_jagged_forward.default(
mul_tensor_1, [asynchronous_complete_cumsum_default]
)
mul_tensor_1 = None
# ...
```
4. **Python Error When Loading the Module**
When loading the exported module in Python without modification, I encounter the following error:
```
Traceback (most recent call last):
File "fx_model_plugin.py", line 132, in <module>
args.static).build()
File "fx_model.py", line 116, in build
fx_model._model = self.load_model()
File "fx_model.py", line 80, in load_model
user_model = torch.fx.symbolic_trace(user_model)
File "torch/fx/_symbolic_trace.py", line 1281, in symbolic_trace
graph = tracer.trace(root, concrete_args)
File "torch/fx/_symbolic_trace.py", line 823, in trace
(self.create_arg(fn(*args)),),
File "module.py", line 83, in forward
asynchronous_complete_cumsum_default = torch.ops.fbgemm.asynchronous_complete_cumsum.default(view_default)
File "torch/_ops.py", line 1225, in __getattr__
raise AttributeError(...)
```
5. **Workaround in Python**
By adding `import fbgemm_gpu` at the beginning of `module.py`, the module loads and runs successfully in Python:
```python
import torch
import fbgemm_gpu # Added import
from math import inf
from math import nan
# ... rest of the code ...
```
6. **Compiling the Model**
I compile the model using `torch._export.aot_compile`:
```python
dynamicLib_path = torch._export.aot_compile(
self.model,
args=tuple(list(self._inputs_dict.values())),
dynamic_shapes={**self._dynamic_shapes},
options={
"aot_inductor.output_path": os.path.join(self.dynamicLib_output_folder, dynamicLib_name),
"max_autotune": True,
},
)
```
7. **Error When Running in C++**
However, when attempting to run the compiled module in C++, I receive the following error:

---
**Environment**
- **PyTorch version**: `torch-2.5.1+cu124`
- **fbgemm_gpu version**: `1.0.0`
- **Python version**: `3.12`
- **CUDA version**: `12.4`
- **C++ Build Information**:

---
**Question**
How can I resolve the issue of `torch.ops.fbgemm.*` functions not being found when running the compiled module in C++? Is there a proper way to include or register the `fbgemm_gpu` custom operations in the C++ environment so that the model runs successfully?
---
**Additional Information**
- In Python, adding `import fbgemm_gpu` resolves the issue, which suggests that the `fbgemm_gpu` module needs to be imported to register the custom operations.
- In C++, I am unsure how to perform an equivalent operation to ensure that `fbgemm_gpu` functions are available.
- The C++ application links against `libtorch.so`, but it seems that it doesn't include `fbgemm_gpu` operations by default.
- I suspect that I need to include or link against `fbgemm_gpu` when building the C++ application, but I couldn't find clear documentation on how to do this (a locating sketch follows this list).
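A minimal sketch, under the assumption that the fix is to make the C++ process load the shared object that registers the `fbgemm` schemas (e.g. via `dlopen` with `RTLD_GLOBAL`, or by linking it with `-Wl,--no-as-needed` so the linker does not drop it). The snippet only locates candidate libraries inside the installed `fbgemm_gpu` package and is not claimed to be the officially supported integration path:
```python
import glob
import os

import fbgemm_gpu  # importing this is what registers the ops on the Python side

pkg_dir = os.path.dirname(fbgemm_gpu.__file__)
# Candidate shared objects to load/link from the C++ application
print(glob.glob(os.path.join(pkg_dir, "**", "*.so"), recursive=True))
```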
---
**Thank you for your assistance!**
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Alibaba Cloud Linux 3 (Soaring Falcon) (x86_64)
GCC version: (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3.8 2.32)
Clang version: 17.0.6 (Alibaba Cloud Compiler 17.0.6.4-24.11.20.alios7)
CMake version: version 3.26.5
Libc version: glibc-2.32
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.32
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 2900.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[pip3] fbgemm_gpu==1.0.0
[conda] No relevant packages
cc @chauhang @penguinwu
| true
|
2,850,090,484
|
[torch/elastic] unexpected behavior of torch elastic
|
shinytang6
|
open
|
[
"oncall: distributed",
"triaged",
"module: elastic"
] | 17
|
NONE
|
### 🐛 Describe the bug
Hi all, I conducted some simple tests using torch elastic to understand its behavior under node failures, and I encountered several outcomes that do not match the official docs.
## Fault Tolerance & Elasticity test
Master node A command:
```shell
$ torchrun --nnodes=1:2 --nproc-per-node=1 --rdzv-id=0 --rdzv-backend=c10d --rdzv-endpoint=MASTER_ADDR:MASTER_PORT --max-restarts=10 elastic-demo.py
```
Worker node B command:
```
$ torchrun --nnodes=1:2 --nproc-per-node=1 --rdzv-id=0 --rdzv-backend=c10d --rdzv-endpoint=MASTER_ADDR:MASTER_PORT --max-restarts=10 elastic-demo.py
```
### Case 1
* Both nodes start the task simultaneously, and the training begins normally.
* After terminating the worker node B task (using ctrl+c or kill -15), master node A hangs and the training stalls.
* Restarting the worker node B task sometimes results in an error (torch.distributed.elastic.rendezvous.api.RendezvousClosedError), but it occasionally restarts successfully. This behavior is irregular, and the `--max-restarts` parameter does not seem to take effect; the error occurs regardless of increasing or decreasing its value and appears to depend on the timing of the rejoin (I am not sure about that).
### Case 2
* Both nodes start the task simultaneously, and the training begins normally.
* After terminating the worker node B task (using kill -9), master node A hangs and the training stalls.
* Restarting the worker node B task allows the training to restart, but the `--max-restarts` parameter does not seem to take effect either.
### Case 3
* Both nodes start the task simultaneously, and the training begins normally.
* After terminating master node A’s task (using ctrl+c, kill -15, or kill -9), the entire training crashes immediately.
The detailed error message:
```python
Traceback (most recent call last):
File "/opt/conda/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 901, in main
run(args)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 255, in launch_agent
result = agent.run()
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 124, in wrapper
result = f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 680, in run
result = self._invoke_run(role)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 829, in _invoke_run
self._initialize_workers(self._worker_group)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 124, in wrapper
result = f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 652, in _initialize_workers
self._rendezvous(worker_group)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 124, in wrapper
result = f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 489, in _rendezvous
rdzv_info = spec.rdzv_handler.next_rendezvous()
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1125, in next_rendezvous
self._op_executor.run(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 667, in run
raise RendezvousClosedError
torch.distributed.elastic.rendezvous.api.RendezvousClosedError
```
So my questions are:
1. Is the behavior of different signals (SIGINT, SIGTERM, SIGKILL) expected?
2. Why does the `--max-restarts` parameter not seem to affect the restart behavior? Is there something I'm missing in the configuration or use of this parameter?
### Versions
torch version:
```python
$ pip show torch
Name: torch
Version: 2.4.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /opt/conda/lib/python3.8/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-nccl-cu12, nvidia-nvtx-cu12, sympy, triton, typing-extensions
Required-by: accelerate, bitsandbytes, deepspeed, flash_attn, flash_attn_1, peft, torchaudio, torchpippy, torchvision, transformer_engine, trl
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @dzhulgakov
| true
|
2,850,086,396
|
[DEBUG ONLY] vec flex attention and add UT
|
chunyuan-w
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,850,040,979
|
[Feature Request] Release original parameters by layer when turning on `freezing_discard_parameters`
|
leslie-fang-intel
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 8
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
`Freezing` is an Inductor configuration that converts input arguments into frozen parameters and applies constant folding to transform frozen parameters accordingly. There is an additional flag, `freezing_discard_parameters`, which, when enabled, discards parameters from the eager module to reduce memory usage ([code reference](https://github.com/pytorch/pytorch/blob/43eb39d7c832b5560f7bfa8d29cc7919ac21c0ca/torch/_inductor/freezing.py#L124C15-L126)). However, `freezing_discard_parameters` takes effect only at the end of the freezing pass, meaning peak memory usage during the process may still exceed the threshold for large language models.
In this feature request, we aim to explore solutions for discarding eager module parameters layer by layer to minimize peak memory usage.
### Alternatives
Some points to implement this feature:
- Step 1: Mapping `_frozen_param` to Eager Module Buffers
  - We need to record the mapping between each `_frozen_param` and the buffers of the eager module. It's unclear whether this can be done purely through name analysis; otherwise, we may need to check the data_ptr of each `_frozen_param` and compare it with the buffers of the eager module (see the sketch after this list).
  - To simplify the analysis below and apply this feature safely, we will also ensure that no `_frozen_param` aliases another.
- Step 2: Handling `ConstantFolder` Runs
- During each node execution in `ConstantFolder`, we check whether a `_frozen_param` is generated with new storage. Some constant folding operations (e.g., permute nodes) may generate a new `_frozen_param` that still shares storage with the original eager module buffer.
- Step 3: Applying the Optimization
- If a `_frozen_param` with new storage is detected, we further verify whether the original FX node has only one user. Once the above conditions are met, we can discard the corresponding buffer from the eager module. Additionally, we may need to delete other Python object references if necessary to free up memory effectively.
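A minimal, hypothetical sketch of the data_ptr-based mapping described in Step 1 (the helper name is a placeholder, not an existing Inductor API, and it assumes the frozen constants are reachable as top-level `_frozen_param*` attributes of the GraphModule):
```python
import torch

def map_frozen_params_to_buffers(gm: torch.fx.GraphModule, eager_mod: torch.nn.Module):
    eager_by_ptr = {
        t.untyped_storage().data_ptr(): name
        for name, t in list(eager_mod.named_parameters()) + list(eager_mod.named_buffers())
    }
    mapping = {}
    for node in gm.graph.nodes:
        if node.op == "get_attr" and str(node.target).startswith("_frozen_param"):
            t = getattr(gm, node.target)
            # None means the frozen param already has fresh storage (no aliasing)
            mapping[node.target] = eager_by_ptr.get(t.untyped_storage().data_ptr())
    return mapping
```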
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @eellison
### Additional context
_No response_
| true
|
2,849,963,534
|
try print stacktrace for error
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Differential Revision: D69573525
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,951,908
|
check if config.autotune_fallback_to_aten before using aten as a fallback
|
henrylhtsang
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D69213269
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,938,625
|
AttributeError: type object 'torch._C._distributed_c10d.BackendType' has no attribute 'XCCL'.
|
oraluben
|
open
|
[
"oncall: distributed",
"triaged",
"module: xpu"
] | 8
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Found on 2.6+cu126 on aarch64
```
(venv) root@7dc30e9f3e4f:/workspace# pip3 install torch --index-url https://download.pytorch.org/whl/cu126
Looking in indexes: https://download.pytorch.org/whl/cu126, https://pypi.ngc.nvidia.com
Collecting torch
Downloading https://download.pytorch.org/whl/cu126/torch-2.6.0%2Bcu126-cp312-cp312-linux_aarch64.whl.metadata (26 kB)
Collecting filelock (from torch)
Downloading https://download.pytorch.org/whl/filelock-3.13.1-py3-none-any.whl.metadata (2.8 kB)
Collecting typing-extensions>=4.10.0 (from torch)
Downloading https://download.pytorch.org/whl/typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting setuptools (from torch)
Downloading https://download.pytorch.org/whl/setuptools-70.2.0-py3-none-any.whl.metadata (5.8 kB)
Collecting sympy==1.13.1 (from torch)
Downloading https://download.pytorch.org/whl/sympy-1.13.1-py3-none-any.whl (6.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 13.0 MB/s eta 0:00:00
Collecting networkx (from torch)
Downloading https://download.pytorch.org/whl/networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch)
Downloading https://download.pytorch.org/whl/Jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch)
Downloading https://download.pytorch.org/whl/fsspec-2024.6.1-py3-none-any.whl.metadata (11 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy==1.13.1->torch)
Downloading https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 119.1 MB/s eta 0:00:00
Collecting MarkupSafe>=2.0 (from jinja2->torch)
Downloading https://download.pytorch.org/whl/MarkupSafe-2.1.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (29 kB)
Downloading https://download.pytorch.org/whl/cu126/torch-2.6.0%2Bcu126-cp312-cp312-linux_aarch64.whl (2462.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 GB 37.1 MB/s eta 0:00:00
Downloading https://download.pytorch.org/whl/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Downloading https://download.pytorch.org/whl/filelock-3.13.1-py3-none-any.whl (11 kB)
Downloading https://download.pytorch.org/whl/fsspec-2024.6.1-py3-none-any.whl (177 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 177.6/177.6 kB 3.0 MB/s eta 0:00:00
Downloading https://download.pytorch.org/whl/Jinja2-3.1.4-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.3/133.3 kB 2.2 MB/s eta 0:00:00
Downloading https://download.pytorch.org/whl/networkx-3.3-py3-none-any.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 20.1 MB/s eta 0:00:00
Downloading https://download.pytorch.org/whl/setuptools-70.2.0-py3-none-any.whl (930 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 930.8/930.8 kB 41.7 MB/s eta 0:00:00
Installing collected packages: mpmath, typing-extensions, sympy, setuptools, networkx, MarkupSafe, fsspec, filelock, jinja2, torch
Successfully installed MarkupSafe-2.1.5 filelock-3.13.1 fsspec-2024.6.1 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 setuptools-70.2.0 sympy-1.13.1 torch-2.6.0+cu126 typing-extensions-4.12.2
(venv) root@7dc30e9f3e4f:/workspace# python -c 'import torch'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/venv/lib/python3.12/site-packages/torch/__init__.py", line 2108, in <module>
from torch import _VF as _VF, functional as functional # usort: skip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/torch/functional.py", line 7, in <module>
import torch.nn.functional as F
File "/venv/lib/python3.12/site-packages/torch/nn/__init__.py", line 8, in <module>
from torch.nn.modules import * # usort: skip # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/torch/nn/modules/__init__.py", line 1, in <module>
from .module import Module # usort: skip
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 29, in <module>
from torch.utils._python_dispatch import is_traceable_wrapper_subclass
File "/venv/lib/python3.12/site-packages/torch/utils/__init__.py", line 8, in <module>
from torch.utils import (
File "/venv/lib/python3.12/site-packages/torch/utils/data/__init__.py", line 1, in <module>
from torch.utils.data.dataloader import (
File "/venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 20, in <module>
import torch.distributed as dist
File "/venv/lib/python3.12/site-packages/torch/distributed/__init__.py", line 122, in <module>
from .device_mesh import DeviceMesh, init_device_mesh
File "/venv/lib/python3.12/site-packages/torch/distributed/device_mesh.py", line 40, in <module>
from torch.distributed.distributed_c10d import (
File "/venv/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 234, in <module>
class Backend(str):
File "/venv/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 285, in Backend
XCCL: ProcessGroup.BackendType.XCCL,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'torch._C._distributed_c10d.BackendType' has no attribute 'XCCL'. Did you mean: 'NCCL'?
```
x86 is fine
https://github.com/pytorch/executorch/issues/7692 looks like the same issue
Also note that 2.6+cu124 aarch64 is missing:
```
(venv) root@7dc30e9f3e4f:/workspace# pip3 install torch --index-url https://download.pytorch.org/whl/cu124
Looking in indexes: https://download.pytorch.org/whl/cu124, https://pypi.ngc.nvidia.com
Collecting torch
Downloading https://download.pytorch.org/whl/cu124/torch-2.5.1-cp312-cp312-linux_aarch64.whl (2359.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/2.4 GB 1.3 MB/s eta 0:30:15
ERROR: Operation cancelled by user
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (aarch64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.4.16-linuxkit-aarch64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Vendor ID: Apple
Model name: -
Model: 0
Thread(s) per core: 1
Core(s) per cluster: 6
Socket(s): -
Cluster(s): 1
Stepping: 0x0
BogoMIPS: 48.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] torch==2.6.0+cu126
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,849,870,079
|
DISABLED test_comprehensive_nn_functional_interpolate_linear_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_interpolate_linear_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37071120963).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_nn_functional_interpolate_linear_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2268, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1626, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1548, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1196, in test_comprehensive
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1156, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 631, in check_model_gpu
check_model(
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 472, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1405, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1125, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1990, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2032, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2758, in load_by_key_path
mod = _reload_python_module(key, path)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpl_pslh1a/py/cpyiyckui3wrzg7avzyn26ctxka6bqy3eoqqwzkaqezgnq6a5lq6.py", line 356, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 421, in wait
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3237, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 312, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 272, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 427, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1068, in make_launcher
binary._init_handles()
File "/var/lib/jenkins/triton/python/triton/compiler/compiler.py", line 390, in _init_handles
self.module, self.function, self.n_regs, self.n_spills = driver.active.utils.load_binary(
torch._inductor.exc.InductorError: RuntimeError: Triton Error [HIP]: Code: 209, Messsage: no kernel image is available for execution on the device
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3126, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3126, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1626, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(2, 3, 4), device="cuda:0", dtype=torch.float16], args=((3)), kwargs={'scale_factor': 'None', 'mode': "'linear'", 'align_corners': 'True'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_nn_functional_interpolate_linear_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,849,869,647
|
DISABLED test_comprehensive_nn_functional_interpolate_bilinear_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_interpolate_bilinear_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37070019968).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_nn_functional_interpolate_bilinear_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,849,837,329
|
[Inductor][CPP] Fix node name for wgt delete
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147056
**Summary**
This is a regression issue caused by a change in the FX node name. In commit 71010bf0972834e35a155e6a187e5c6649a5a36b, both the node name and target for the `get_attr` node in `V.graph.graph.nodes` were `_frozen_param2`. However, in the latest main, the node name has changed to `_reorder_linear_weight`. This PR fixes the regression by using the node's target instead of its name.
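For illustration, a standalone FX sketch (not the Inductor code path, and the buffer name is just an example) of why keying on `node.target` is more robust than `node.name`:
```python
import torch
import torch.fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("_frozen_param2", torch.randn(4, 4))

    def forward(self, x):
        return x @ self._frozen_param2

gm = torch.fx.symbolic_trace(M())
for node in gm.graph.nodes:
    if node.op == "get_attr":
        # node.name can be rewritten by later passes, while node.target keeps
        # pointing at the attribute path on the module, so it is the stable key
        print(node.name, node.target)
```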
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_select_algorithm.py -k test_cpp_weight_prune
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,836,496
|
INTERNAL ASSERT FAILED or SEGFAULT when JITting a function that can return different types
|
MaigoAkisame
|
open
|
[
"oncall: jit"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Put the following code in `foo.py`. The `wtf` function may return either an int or a list.
```python
import torch
from typing import Any
@torch.jit.script
def wtf(flag: bool) -> Any:
return 1 if flag else list((2,))
```
Run `python foo.py`, and it'll trigger an `INTERNAL ASSERT FAILED` error:
```
Traceback (most recent call last):
File "/tmp/foo.py", line 5, in <module>
def wtf(flag: bool) -> Any:
File "/home/yunwang/.fbpkg_conda_envs/xlformers_llama4_evals_conda-034f809/lib/python3.10/site-packages/torch/jit/_script.py", line 1429, in script
ret = _script_impl(
File "/home/yunwang/.fbpkg_conda_envs/xlformers_llama4_evals_conda-034f809/lib/python3.10/site-packages/torch/jit/_script.py", line 1205, in _script_impl
fn = torch._C._jit_script_compile(
RuntimeError: r INTERNAL ASSERT FAILED at "/mnt/code/pytorch/aten/src/ATen/core/jit_type_base.h":556, please report a bug to PyTorch.
```
If the last term is written as a list literal, it will even trigger a `Segmentation fault (core dumped)`:
```python
import torch
from typing import Any
@torch.jit.script
def wtf(flag: bool) -> Any:
return 1 if flag else [2]
```
This can only work if we annotate the type of the last term, like:
```python
import torch
from typing import Any, List
@torch.jit.script
def wtf(flag: bool) -> Any:
return 1 if flag else torch.jit.annotate(List[int], [2])
```
Is this a known limitation of PyTorch in inferring the types of expressions?
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0a0+git88e338f
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk20_zion_2830_g3e5ab162667d-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.154.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnxruntime==1.20.1
[pip3] optree==0.12.1
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-metric-learning==2.8.1
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.6.0a0+git88e338f
[pip3] torch-audiomentations==0.12.0
[pip3] torch_pitch_shift==1.2.5
[pip3] torchaudio==2.5.0a0+97ed7b3
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.0a0+c36025a
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,849,832,933
|
Fix for issue #142834, Segmentation fault in replication_pad2d_backward
|
AmalDevHaridevan
|
open
|
[
"module: cpu",
"triaged",
"open source",
"Stale"
] | 3
|
NONE
|
Fixes #142834
# Before fix
```python
import torch
grad_output = torch.full((2, 0, 6, 8,), 1, dtype=torch.float)
self = torch.full((2, 2, 4, 4,), 1, dtype=torch.float)
padding = [2, 2, 1, 1]
print("="*50)
print("input_tensor:")
print(self)
print("="*50)
print("output_tensor:")
print(grad_output)
print("="*50)
torch.ops.aten.replication_pad2d_backward(grad_output, self, padding)
```
```
==================================================
input_tensor:
tensor([[[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]],
[[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]]])
==================================================
output_tensor:
tensor([], size=(2, 0, 6, 8))
==================================================
Segmentation fault (core dumped)
```
# After fix
```
==================================================
input_tensor:
tensor([[[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]],
[[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]]])
==================================================
output_tensor:
tensor([], size=(2, 0, 6, 8))
==================================================
Traceback (most recent call last):
File "/home/harid/pytorch/../test.py", line 44, in <module>
torch.ops.aten.replication_pad2d_backward(grad_output, self, padding)
File "/home/harid/pytorch/torch/_ops.py", line 1156, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: grad output tensor is empty
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,849,817,138
|
Use 2022 as default VC_YEAR for windows builds
|
atalman
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
New Windows AMI does not have Visual Studio 2019. Hence use 2022 as default.
See: https://github.com/pytorch/test-infra/pull/6226
| true
|
2,849,802,615
|
[ONNX] Implement aten.stft
|
justinchuby
|
open
|
[
"module: onnx",
"triaged",
"OSS contribution wanted"
] | 4
|
COLLABORATOR
|
Otherwise it is decomposed to unfold and fft, which I think is more memory-consuming.
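For reference, a minimal sketch of the kind of model affected (the `Spectrogram` class and output file name are illustrative; depending on the exporter version this may fail or produce a large decomposed graph, which is the point of the request):
```python
import torch

class Spectrogram(torch.nn.Module):
    def forward(self, x):
        # torch.stft requires return_complex=True on recent versions;
        # .abs() keeps the exported graph real-valued
        return torch.stft(x, n_fft=64, hop_length=16, return_complex=True).abs()

model = Spectrogram()
waveform = torch.randn(1, 16000)
# Without a dedicated aten.stft lowering, the exporter falls back to a
# decomposition (unfold + FFT style ops) instead of a single STFT op.
onnx_program = torch.onnx.dynamo_export(model, waveform)
onnx_program.save("stft.onnx")
```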
| true
|
2,849,788,364
|
Inductor Triton Gemm Autotune broke on the latest Triton
|
xuzhao9
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
On Triton latest main branch (commit 06941f490322679231aae20bfe20b61e9885ad4) and the latest PyTorch nightly branch, run the following script:
```
import torch
import torch._inductor.config as inductor_config
import triton
M = 20120
K = 512
N = 1536
a = torch.randn([M,N]).cuda()
b = torch.randn([M,K]).cuda()
c = torch.randn([K,N]).cuda()
def mm():
return torch.addmm(a, b, c)
with inductor_config.patch(
max_autotune=True,
max_autotune_gemm_backends="TRITON",
autotune_fallback_to_aten=False,
):
pt2_mm = torch.compile(mm, dynamic=False)
pt2_mm()
if __name__ == "__main__":
pt2_mm()
```
TorchInductor gemm autotune will break:
```
AUTOTUNE addmm(20120x1536, 20120x512, 512x1536)
triton_mm_13 1.4602 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=64, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_9 1.4848 ms 98.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=128, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_6 1.4889 ms 98.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=64, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=4
triton_mm_5 1.5155 ms 96.4% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=16, BLOCK_M=64, BLOCK_N=64, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=4
triton_mm_10 1.7459 ms 83.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=128, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=4, num_warps=8
triton_mm_14 1.7510 ms 83.4% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=64, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=4, num_warps=8
triton_mm_15 1.8029 ms 81.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=128, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=8
triton_mm_16 2.0388 ms 71.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=128, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_7 2.0695 ms 70.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=64, BLOCK_N=64, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=8
triton_mm_11 2.1268 ms 68.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=64, BLOCK_N=128, B_PROLOGUE_CAST_TYPE=None, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
SingleProcess AUTOTUNE benchmarking takes 14.4064 seconds and 0.0000 seconds precompiling for 19 choices
E0212 21:27:28.480000 109916 subproc_pool.py:321] Error in subprocess
E0212 21:27:28.480000 109916 subproc_pool.py:321] concurrent.futures.process._RemoteTraceback:
E0212 21:27:28.480000 109916 subproc_pool.py:321] """
E0212 21:27:28.480000 109916 subproc_pool.py:321] Traceback (most recent call last):
E0212 21:27:28.480000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
E0212 21:27:28.480000 109916 subproc_pool.py:321] r = call_item.fn(*call_item.args, **call_item.kwargs)
E0212 21:27:28.480000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0212 21:27:28.480000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_worker/subproc_pool.py", line 340, in do_job
E0212 21:27:28.480000 109916 subproc_pool.py:321] return pickler.dumps(result)
E0212 21:27:28.480000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^^^^^^^
E0212 21:27:28.480000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_worker/subproc_pool.py", line 100, in dumps
E0212 21:27:28.480000 109916 subproc_pool.py:321] return pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)
E0212 21:27:28.480000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0212 21:27:28.480000 109916 subproc_pool.py:321] AttributeError: Can't pickle local object 'JITFunction.__init__.<locals>.<lambda>'
E0212 21:27:28.480000 109916 subproc_pool.py:321] """
E0212 21:27:28.480000 109916 subproc_pool.py:321]
E0212 21:27:28.480000 109916 subproc_pool.py:321] The above exception was the direct cause of the following exception:
E0212 21:27:28.480000 109916 subproc_pool.py:321]
E0212 21:27:28.480000 109916 subproc_pool.py:321] Traceback (most recent call last):
E0212 21:27:28.480000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_worker/subproc_pool.py", line 319, in callback
E0212 21:27:28.480000 109916 subproc_pool.py:321] result = future.result()
E0212 21:27:28.480000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^
E0212 21:27:28.480000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/concurrent/futures/_base.py", line 449, in result
E0212 21:27:28.480000 109916 subproc_pool.py:321] return self.__get_result()
E0212 21:27:28.480000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^^^^^
E0212 21:27:28.480000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
E0212 21:27:28.480000 109916 subproc_pool.py:321] raise self._exception
E0212 21:27:28.480000 109916 subproc_pool.py:321] AttributeError: Can't pickle local object 'JITFunction.__init__.<locals>.<lambda>'
E0212 21:27:28.486000 109916 subproc_pool.py:321] Error in subprocess
E0212 21:27:28.486000 109916 subproc_pool.py:321] concurrent.futures.process._RemoteTraceback:
E0212 21:27:28.486000 109916 subproc_pool.py:321] """
E0212 21:27:28.486000 109916 subproc_pool.py:321] Traceback (most recent call last):
E0212 21:27:28.486000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
E0212 21:27:28.486000 109916 subproc_pool.py:321] r = call_item.fn(*call_item.args, **call_item.kwargs)
E0212 21:27:28.486000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0212 21:27:28.486000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_worker/subproc_pool.py", line 340, in do_job
E0212 21:27:28.486000 109916 subproc_pool.py:321] return pickler.dumps(result)
E0212 21:27:28.486000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^^^^^^^
E0212 21:27:28.486000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_worker/subproc_pool.py", line 100, in dumps
E0212 21:27:28.486000 109916 subproc_pool.py:321] return pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)
E0212 21:27:28.486000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0212 21:27:28.486000 109916 subproc_pool.py:321] AttributeError: Can't pickle local object 'JITFunction.__init__.<locals>.<lambda>'
E0212 21:27:28.486000 109916 subproc_pool.py:321] """
E0212 21:27:28.486000 109916 subproc_pool.py:321]
E0212 21:27:28.486000 109916 subproc_pool.py:321] The above exception was the direct cause of the following exception:
E0212 21:27:28.486000 109916 subproc_pool.py:321]
E0212 21:27:28.486000 109916 subproc_pool.py:321] Traceback (most recent call last):
E0212 21:27:28.486000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_worker/subproc_pool.py", line 319, in callback
E0212 21:27:28.486000 109916 subproc_pool.py:321] result = future.result()
E0212 21:27:28.486000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^
E0212 21:27:28.486000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/concurrent/futures/_base.py", line 449, in result
E0212 21:27:28.486000 109916 subproc_pool.py:321] return self.__get_result()
E0212 21:27:28.486000 109916 subproc_pool.py:321] ^^^^^^^^^^^^^^^^^^^
E0212 21:27:28.486000 109916 subproc_pool.py:321] File "/home/xz/miniconda3/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
E0212 21:27:28.486000 109916 subproc_pool.py:321] raise self._exception
E0212 21:27:28.486000 109916 subproc_pool.py:321] AttributeError: Can't pickle local object 'JITFunction.__init__.<locals>.<lambda>'
Traceback (most recent call last):
File "/home/xz/1.py", line 22, in <module>
pt2_mm()
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1405, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1125, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1990, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/graph.py", line 2032, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 2758, in load_by_key_path
mod = _reload_python_module(key, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_xz/dg/cdgmewjcirhyxskflqjjf2d4zjuqxgbjtxtnhqoxwrs6zc53e2ck.py", line 146, in <module>
async_compile.wait(globals())
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/async_compile.py", line 421, in wait
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 3237, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/site-packages/torch/_inductor/async_compile.py", line 311, in get_result
kernel = task.result()
^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/xz/miniconda3/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
torch._inductor.exc.InductorError: AttributeError: Can't pickle local object 'JITFunction.__init__.<locals>.<lambda>'
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
Note:
- Rerunning the script will *NOT* reproduce the issue; to reproduce it again, you will need to remove the cache at `/tmp/torchinductor_${USER}`
- This is a recent change: the program ran well on triton hash ae1a8f1e but broke on triton hash 08d7f64d
Reported by Tritonbench CI: https://github.com/pytorch-labs/tritonbench/actions/runs/13269027574/job/37122665658
### Versions
Pytorch Nightly: 2.7.0.dev20250212+cu126
Triton lastest main branch: https://github.com/triton-lang/triton/commit/06941f490322679231aae20bfe20b61e9885ad48
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,849,727,585
|
More precise check for shared storage check in inductor/reinplace pass
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147050
Currently, if two tensors share storage, we have some logic to avoid re-inplacing. Before this PR, two tensors were considered to share storage if they used the same underlying storage, even if they did not overlap. This diff enhances the checks to skip that treatment in cases where we can easily tell the tensors do not overlap.
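As a small illustration (not the PR's actual check), two views can share a storage while provably not overlapping:
```python
import torch

base = torch.empty(8)
a = base[:4]   # elements 0..3
b = base[4:]   # elements 4..7

# Same underlying storage...
assert a.untyped_storage().data_ptr() == b.untyped_storage().data_ptr()
# ...but disjoint element ranges, so writing to `a` in place can never
# clobber `b`; re-inplacing ops on `a` should remain allowed.
a.add_(1.0)
```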
Mitigates https://github.com/pytorch/pytorch/issues/139628 but does not fix the inductor issue in it.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,715,553
|
fake_tensor: Handle op errors more gracefully
|
c00w
|
open
|
[
"Stale",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147049
If we have an operator error (i.e. incompatible dimensions etc. from torch._check) within a fake tensor, then it fails with torch._dynamo.exc.TorchRuntimeError rather than gracefully falling back to unimplemented and letting eager mode fail.
Since fake_tensor had no dependency on dynamo, I did not add one, and instead relied upon the existing fake tensor unsupported exceptions. Let me know if there is a preference to instead use ObservedExceptions here.
I've added a specific test case which triggers a torch._check failure whose stack trace lines up with errors I've observed in production.
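A minimal sketch of the general failure mode being targeted (a hypothetical repro, not the test added in this PR): a shape error discovered during fake-tensor propagation surfaces at compile time instead of being deferred to eager.
```python
import torch

def f(x, y):
    return x @ y  # (2, 3) @ (4, 5) is invalid; the error fires under fake tensors

compiled = torch.compile(f)
try:
    compiled(torch.randn(2, 3), torch.randn(4, 5))
except Exception as e:
    # today this is raised as torch._dynamo.exc.TorchRuntimeError during
    # compilation, rather than the plain eager-mode shape error
    print(type(e))
```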
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,849,687,783
|
Feature Request: rsample for Von Mises distribution
|
dario-loi
|
open
|
[
"module: distributions",
"triaged",
"needs research"
] | 0
|
NONE
|
## 🚀 The feature, motivation and pitch
The von Mises-Fisher distribution implemented in `torch.distributions` should get an `.rsample()` method.
## Motivation
Backpropagating through vMF is essential to train Hyperspherical VAEs, which have drastically better performance for directional data, for example in graph reconstruction (https://arxiv.org/abs/1804.00891).
Without `.rsample` one is limited to gaussian priors for VAE training.
## Additional context
An implementation that exposes an `.rsample()` method is available [in this repo](https://github.com/nicola-decao/s-vae-pytorch/blob/master/hyperspherical_vae/distributions/von_mises_fisher.py).
Moreover, I can see that in the original Von Mises distribution feature request (#13811), comments still state that the distribution only works for 3D. If that's the case, then we really need an N-D implementation before we need `.rsample()` (but ideally we would like both).
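For context, a quick check of what core currently offers (the 1D `VonMises`; vMF itself is not in `torch.distributions`):
```python
import torch
from torch.distributions import VonMises

loc = torch.zeros(3, requires_grad=True)
concentration = torch.ones(3, requires_grad=True)
d = VonMises(loc, concentration)

print(d.has_rsample)    # False: no reparameterized sampling
x = d.sample()          # works, but no gradient path back to loc/concentration
print(x.requires_grad)  # False
```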
### Alternatives
_No response_
### Additional context
_No response_
cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
2,849,683,648
|
DISABLED test_comprehensive_sub_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_sub_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37064610656).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_sub_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2268, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1626, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1548, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 955, in inner
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 947, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1196, in test_comprehensive
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1156, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 631, in check_model_gpu
check_model(
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 472, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1402, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1122, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1990, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2032, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2758, in load_by_key_path
mod = _reload_python_module(key, path)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmp8a_fq99r/3r/c3rzu2oegaufd6pkc6k7dwcm72ndfggbcgeh5mowo3dtkew2cjnk.py", line 84, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 421, in wait
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3237, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 312, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 272, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 427, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1068, in make_launcher
binary._init_handles()
File "/var/lib/jenkins/triton/python/triton/compiler/compiler.py", line 390, in _init_handles
self.module, self.function, self.n_regs, self.n_spills = driver.active.utils.load_binary(
torch._inductor.exc.InductorError: RuntimeError: Triton Error [HIP]: Code: 209, Messsage: no kernel image is available for execution on the device
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3126, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3126, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1626, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 5: SampleInput(input=Tensor[size=(5, 10, 5), device="cuda:0", dtype=torch.float16], args=TensorList[Tensor[size=(5, 10, 5), device="cuda:0", dtype=torch.float16]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=5 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_sub_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,849,677,907
|
[dynamo] Make SliceVariable a subclass of VariableTracker
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147046
* #146995
* #146819
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,849,666,326
|
[cond] make cond call fake kernel in dynamo
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147130
* __->__ #147045
* #146954
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @StrongerXi
| true
|
2,849,656,944
|
Clean up backend_type_map from distributed_c10d
|
H-Huang
|
open
|
[
"oncall: distributed",
"triaged",
"better-engineering",
"module: c10d"
] | 0
|
MEMBER
|
Try to remove `backend_type_map` since it no longer looks needed, and validate that CI / internal tests pass.
https://github.com/pytorch/pytorch/blob/67cbbb29e075af848d95c936eca79e6645208107/torch/distributed/distributed_c10d.py#L282
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,849,644,440
|
UNSTABLE pull / win-vs2022-cpu-py3 / build
|
huydhn
|
closed
|
[
"module: windows",
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
The failure shows up after the new AMI ami-0403662469a2d1e25 rolls out. cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @seemethere @malfet @pytorch/pytorch-dev-infra @atalman @Camyll
Same as https://github.com/pytorch/pytorch/issues/147041
| true
|
2,849,631,166
|
UNSTABLE trunk / win-vs2022-cuda12.1-py3 / build
|
huydhn
|
closed
|
[
"module: windows",
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
The failure shows up after the new AMI ami-0403662469a2d1e25 rolls out. cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @seemethere @malfet @pytorch/pytorch-dev-infra @atalman @Camyll
Same as https://github.com/pytorch/pytorch/issues/147041
| true
|
2,849,629,955
|
UNSTABLE trunk / win-vs2022-cpu-py3
|
huydhn
|
closed
|
[
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
The failure shows up after the new AMI `ami-0403662469a2d1e25` rolls out. cc @seemethere @malfet @pytorch/pytorch-dev-infra @atalman @Camyll
| true
|
2,849,627,598
|
Updated test_cuda.py to rerun tests
|
BLOrange-AMD
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/rocm"
] | 13
|
CONTRIBUTOR
|
Initially, test_cuda::TestCudaMallocAsync::test_clock_speed and test_cuda::TestCudaMallocAsync::test_power_draw were skipped in this [commit](https://github.com/ROCm/pytorch/commit/d4871750d9ea0c36cfd5ff8a19a0b6aeedb729ad).
Pulled ROCm nightly image and verified these two tests run fine locally. Filed this PR to enable them.
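For reference, one way to run just these two tests locally on a ROCm build might be:
```
PYTORCH_TEST_WITH_ROCM=1 python test/test_cuda.py -k test_clock_speed -k test_power_draw
```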
| true
|
2,849,607,029
|
[DCP] Introduce process based async checkpointing
|
MeetVadakkanchery
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: new features",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 26
|
CONTRIBUTOR
|
Summary:
### Context
Background checkpoint upload thread interfering with trainer thread:
In the [async save API](https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/state_dict_saver.py#L239-L248), the background thread spends a considerable amount of time on CPU-bound tasks (pickling/unpickling several metadata objects, a.k.a. SavePlans) on rank0 during the collective operation; this kind of asymmetric computation heavily contends for the GIL with the trainer thread, causing GPU util to suffer significantly for the E2E checkpoint duration.
### Solution:
Introduce async save via a checkpoint daemon process. This daemon process will be created once (during the first save attempt) and can serve async checkpoint requests for the remainder of training lifetime.
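For context, a minimal sketch of the existing async save API that this daemon-based backend slots behind (assumes `torch.distributed` is already initialized; `model`, `optimizer`, and the checkpoint path are placeholders):
```python
import torch.distributed.checkpoint as dcp

state_dict = {"model": model.state_dict(), "optim": optimizer.state_dict()}

# returns a Future immediately; staging/serialization happens off the trainer's
# critical path (previously in a background thread)
future = dcp.async_save(state_dict, checkpoint_id="checkpoints/step_1000")

# ... keep training ...

future.result()  # wait before issuing the next save, or at shutdown
```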
Test Plan: Added E2E UTs for process based async save.
Differential Revision: D69272583
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @mhorowitz @pradeepfn @ekr0
| true
|
2,849,595,740
|
[Inductor] Graph Partition
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: new features",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
This PR implements inductor graph partition. Previously, 1 dynamo graph was mapped to 1 inductor graph, which was further mapped to 1 call function. In this PR, we allow 1 dynamo graph to map to multiple inductor graphs and multiple `graph_partition` functions in the generated code. This allows applying different further optimizations to different `graph_partition` functions.
Design Doc: [link](https://docs.google.com/document/d/1qPgOfy25l7SIYnrQrvU-TO1mdHMslCwv_SLmeXID6tM/edit?usp=sharing)
Example: [Generated code before and after this diff](https://www.internalfb.com/intern/diffing/?paste_number=1737334601)
In the follow-up PR, we will extend the work to cudagraph, which allows applying cudagraph to parts of the generated code (#125864).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,591,699
|
Add CUDA 12.8 windows nightly build
|
tinglvv
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 10
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/issues/145570
The Windows AMI is deployed to prod today; prepping the Windows CUDA 12.8 build.
cc @atalman @malfet @ptrblck @nWEIdia
| true
|
2,849,587,639
|
test - bump up benchmarked epi choices
|
eellison
|
open
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147036
* #147008
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,577,638
|
fix pt2e block wise quantization test
|
cccclai
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D69559217
https://github.com/pytorch/pytorch/pull/145941 breaks the unit test added for prepare pt2e + block-wise quantization. Fixing it here.
| true
|
2,849,573,447
|
[ROCm] [TunableOp] Enable logging of BLAS parameters
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
This PR supports a logging feature that is being requested.
```
PYTORCH_TUNABLEOP_BLAS_LOG=1
```
Enables the logging of BLAS parameters with either offline or online (in-situ) tuning.
The BLAS parameters are written to the CSV file.
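For example, combined with the existing enable flag, an invocation could look like the following (the script name is hypothetical):
```
PYTORCH_TUNABLEOP_ENABLE=1 PYTORCH_TUNABLEOP_BLAS_LOG=1 python gemm_workload.py
```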
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,849,556,310
|
[Inductor-CPP] If all of the activation scale dims are 1, make it a 0D tensor
|
sanchitintel
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
COLLABORATOR
|
For int8 dynamically quantized activation & int8 quantized weights, add a workaround for an indexing issue in the epilogue creator that expected an empty index (i.e. a 0D tensor) when the activation scale was sized [1, 1], by converting the scale into a 0D tensor.
The issue was discovered while running LLaMA2 quantized with torchao's `int8_dynamic_activation_int8_weight` quantization on CPU with max-autotune enabled (although this error would've occurred regardless).
The final hidden states tensor that is the activation to the LM head is of shape `[batch_size, sequence_length, hidden_dim]` during decoding. For decoding one token at a time with batch size 1, sequence length is 1. The activation scale is shaped `[1, 1]` (reshaped from `[1, 1, 1]`). However, Inductor's epilogue creator expects a 0D tensor in this case (my guess is that the corresponding logic in Inductor expects a 0D tensor whenever a tensor has only one element, even if it's 1D).
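A tiny standalone illustration of the reshape the workaround performs (not the actual Inductor code; the scale value is made up):
```python
import torch

scale = torch.tensor([[0.02]])    # activation scale shaped [1, 1] during decoding
scale_0d = scale.reshape(())      # view it as a 0D (scalar) tensor instead
print(scale_0d.dim(), scale_0d.item())  # 0 0.02
```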
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,849,555,123
|
[NJT] fix flop counter for SDPA & test
|
davidberard98
|
closed
|
[
"module: nestedtensor",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: nested tensor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147032
Fixes 3 issues:
1. The test wasn't actually testing SDPA: both were checking cuda, and the inputs to SDPA were not transposed (see the sketch after this list).
2. FlopCounterMode has been renamed _FlopCounterMode (and a wrapper named FlopCounterMode has been added)
3. offsets_to_list also needs to ignore the actual offset values if offsets is a meta tensor.
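A minimal sketch of counting SDPA flops with correctly laid-out inputs (not the test itself; shapes are arbitrary):
```python
import torch
import torch.nn.functional as F
from torch.utils.flop_counter import FlopCounterMode

# SDPA expects [batch, heads, seq_len, head_dim]; the test fix transposes into this layout
q = k = v = torch.randn(2, 4, 128, 64)
with FlopCounterMode(display=False) as counter:
    F.scaled_dot_product_attention(q, k, v)
print(counter.get_total_flops())
```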
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @YuqingJ
Differential Revision: [D69558785](https://our.internmc.facebook.com/intern/diff/D69558785)
| true
|
2,849,539,442
|
Add self to CODEOWNERS for fx/proxy.py; warn against adding new node arg types
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147031
* #147013
* #147012
* #147016
Not sure if there's a better way
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,849,539,316
|
[inline_inbuilt_nn_modules] Move export to inline_inbuilt_nn_modules
|
anijain2305
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
For export, we should not lift the parameters and buffers as inputs. We can register them in the Dynamo Fx graph. This will maintain the input signature constraint required by the export.
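A schematic contrast of the two graph conventions (hypothetical pseudocode, not actual Dynamo output):
```python
# With parameters lifted as graph inputs (what we want to avoid for export):
#   def forward(p_weight, p_bias, x):
#       return torch.nn.functional.linear(x, p_weight, p_bias)
#
# With parameters registered on the graph module as get_attr nodes
# (keeps the user-facing input signature as just `x`):
#   def forward(self, x):
#       return torch.nn.functional.linear(x, self.weight, self.bias)
```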
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|